Rank Reduction with Convex Constraints
(2017) Abstract
This thesis addresses problems which require low-rank solutions under convex constraints. In particular, the focus lies on model reduction of positive systems, as well as finite-dimensional optimization problems that are convex apart from a low-rank constraint.
Traditional model reduction techniques try to minimize the error between the original and the reduced system. Typically, however, the resulting reduced models no longer fulfill physically meaningful constraints. This thesis considers the problem of model reduction with internal and external positivity constraints. Both problems are solved by means of balanced truncation. While internal positivity is shown to be preserved through a symmetry property, external positivity preservation is accomplished by deriving a modified balancing approach based on ellipsoidal cone invariance.
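As a point of reference, standard (unconstrained) balanced truncation can be sketched as follows. The positivity-preserving modifications developed in the thesis are not reproduced here; the small Metzler system below is purely illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Illustrative internally positive system: Metzler A, nonnegative B and C.
A = np.array([[-2.0,  1.0,  0.0],
              [ 0.5, -3.0,  1.0],
              [ 0.0,  0.5, -1.5]])
B = np.array([[1.0], [0.0], [0.5]])
C = np.array([[1.0, 0.5, 0.0]])

# Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: the SVD of Lq^T Lp yields the Hankel singular values.
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, hsv, Vt = svd(Lq.T @ Lp)

# Truncate to the r dominant states of the balanced realization.
r = 2
T = Lp @ Vt[:r].T / np.sqrt(hsv[:r])
Ti = (U[:, :r] / np.sqrt(hsv[:r])).T @ Lq.T
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T
```

Plain truncation generally destroys the sign pattern of A, B and C even when the original system is positive; the thesis's contribution is precisely to modify this step so that internal or external positivity survives.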
In essence, positivity-preserving model reduction attempts to find an infinite-dimensional low-rank approximation that preserves nonnegativity, as well as Hankel structure. Due to the non-convexity of the low-rank constraint, this problem is challenging even in a finite-dimensional setting. In addition to model reduction, the present work also considers such finite-dimensional low-rank optimization problems with convex constraints. These problems frequently appear in applications such as image compression, multivariate linear regression, matrix completion and many more.
The main idea of this thesis is to derive the largest convex minorizers of rank-constrained unitarily invariant norms. These minorizers can be used to construct optimal convex relaxations for the original non-convex problem. Unlike other methods such as nuclear norm regularization, this approach benefits from having verifiable a posteriori conditions under which a solution to the convex relaxation and the corresponding non-convex problem coincide. It is shown that this applies to various numerical examples of well-known low-rank optimization problems. In particular, the proposed convex relaxation performs significantly better than nuclear norm regularization. Moreover, it can be observed that a careful choice among the proposed convex relaxations may have a tremendous positive impact on matrix completion.
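The thesis's minorizers are not reproduced here, but the bias that motivates them is easy to illustrate: the nuclear-norm proximal step shrinks every singular value, whereas the best rank-r approximation (Eckart-Young) keeps the dominant ones untouched. A minimal sketch on a hypothetical exactly rank-2 matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
# An exactly rank-2 matrix.
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))

U, s, Vt = np.linalg.svd(M)

# Eckart-Young: the best rank-2 approximation keeps the top two
# singular values unchanged, so here it reproduces M exactly.
M_rank2 = (U[:, :2] * s[:2]) @ Vt[:2]

# The nuclear-norm proximal mapping soft-thresholds *all* singular
# values, biasing even the directions one wants to keep.
lam = 0.5
M_soft = (U * np.maximum(s - lam, 0.0)) @ Vt
```

The rank-2 truncation recovers M up to rounding error, while the soft-thresholded matrix carries an O(lam) error in each retained direction; tailoring the relaxation to the rank constraint aims to avoid exactly this shrinkage.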
Computational tractability of the proposed approach is accomplished in two ways. First, the considered relaxations are shown to be representable by semidefinite programs. Second, it is shown how to compute the proximal mappings for both the convex relaxations and the non-convex problem. This makes it possible to apply first-order methods such as so-called Douglas-Rachford splitting. In addition to the convex case, where global convergence of this algorithm is guaranteed, conditions for local convergence in the non-convex setting are presented.
Finally, it is shown that the findings of this thesis also extend to the general class of so-called atomic norms, which allows us to cover other non-convex constraints.
Please use this URL to cite or link to this publication:
http://lup.lub.lu.se/record/54cb814f59fe4bc9a7ef773cbcf06889
 author
 Grussler, Christian (LU)
 supervisor

 Anders Rantzer (LU)
 Pontus Giselsson (LU)
 Andrey Ghulchak (LU)
 opponent

 Professor Rudolf Sepulchre, University of Cambridge
 organization
 publishing date
 2017-02-03
 type
 Thesis
 publication status
 published
 subject
 keywords
 low-rank approximation, model reduction, non-convex optimization, Douglas-Rachford, matrix completion, overlapping norm, k-support norm, atomic norm
 pages
 181 pages
 publisher
 Department of Automatic Control, Lund Institute of Technology, Lund University
 defense location
 M:B
 defense date
 2017-02-03 13:00
 ISBN
 9789177530817
 9789177530800
 language
 English
 LU publication?
 yes
 id
 54cb814f59fe4bc9a7ef773cbcf06889
 date added to LUP
 2017-01-09 03:58:38
 date last changed
 2017-01-09 09:43:27
@phdthesis{54cb814f59fe4bc9a7ef773cbcf06889,
  abstract  = {This thesis addresses problems which require low-rank solutions under convex constraints. In particular, the focus lies on model reduction of positive systems, as well as finite-dimensional optimization problems that are convex apart from a low-rank constraint.
  Traditional model reduction techniques try to minimize the error between the original and the reduced system. Typically, however, the resulting reduced models no longer fulfill physically meaningful constraints. This thesis considers the problem of model reduction with internal and external positivity constraints. Both problems are solved by means of balanced truncation. While internal positivity is shown to be preserved through a symmetry property, external positivity preservation is accomplished by deriving a modified balancing approach based on ellipsoidal cone invariance.
  In essence, positivity-preserving model reduction attempts to find an infinite-dimensional low-rank approximation that preserves nonnegativity, as well as Hankel structure. Due to the non-convexity of the low-rank constraint, this problem is challenging even in a finite-dimensional setting. In addition to model reduction, the present work also considers such finite-dimensional low-rank optimization problems with convex constraints. These problems frequently appear in applications such as image compression, multivariate linear regression, matrix completion and many more.
  The main idea of this thesis is to derive the largest convex minorizers of rank-constrained unitarily invariant norms. These minorizers can be used to construct optimal convex relaxations for the original non-convex problem. Unlike other methods such as nuclear norm regularization, this approach benefits from having verifiable a posteriori conditions under which a solution to the convex relaxation and the corresponding non-convex problem coincide. It is shown that this applies to various numerical examples of well-known low-rank optimization problems. In particular, the proposed convex relaxation performs significantly better than nuclear norm regularization. Moreover, it can be observed that a careful choice among the proposed convex relaxations may have a tremendous positive impact on matrix completion.
  Computational tractability of the proposed approach is accomplished in two ways. First, the considered relaxations are shown to be representable by semidefinite programs. Second, it is shown how to compute the proximal mappings for both the convex relaxations and the non-convex problem. This makes it possible to apply first-order methods such as so-called Douglas-Rachford splitting. In addition to the convex case, where global convergence of this algorithm is guaranteed, conditions for local convergence in the non-convex setting are presented.
  Finally, it is shown that the findings of this thesis also extend to the general class of so-called atomic norms, which allows us to cover other non-convex constraints.},
  author    = {Grussler, Christian},
  isbn      = {9789177530817},
  keyword   = {low-rank approximation, model reduction, non-convex optimization, Douglas-Rachford, matrix completion, overlapping norm, k-support norm, atomic norm},
  language  = {eng},
  month     = {02},
  pages     = {181},
  publisher = {Department of Automatic Control, Lund Institute of Technology, Lund University},
  school    = {Lund University},
  title     = {Rank Reduction with Convex Constraints},
  year      = {2017},
}