Convergence Analysis and Improvements for Projection Algorithms and Splitting Methods
(2021) TFRT (1130)

Abstract
Nonsmooth convex optimization problems occur in all fields of engineering. A common approach to solving this class of problems is proximal algorithms, or splitting methods. These first-order optimization algorithms are often simple, well suited to solving large-scale problems, and have a low computational cost per iteration. Essentially, they encode the solution to an optimization problem as a fixed point of some operator, and iterating this operator eventually results in convergence to an optimal point. However, as for other first-order methods, the convergence rate is heavily dependent on the conditioning of the problem. Even though the per-iteration cost is usually low, the number of iterations can become prohibitively large for ill-conditioned problems, especially if a high-accuracy solution is sought.
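As a concrete illustration of the fixed-point view described above (a sketch, not code from the thesis), the snippet below runs Douglas–Rachford splitting — one of the splitting methods named in the keywords — on the one-dimensional toy problem min 0.5(x − a)² + λ|x|. The step size, problem data, and starting point are arbitrary illustrative choices.

```python
# Douglas-Rachford splitting on min_x 0.5*(x - a)^2 + lam*|x| (1-D toy problem).
# The iteration is a fixed-point map on z; the solution is recovered as x = prox_{tf}(z).
# All problem data below are illustrative choices, not from the thesis.

def prox_quadratic(v, a, t):
    """prox of t*f with f(x) = 0.5*(x - a)^2, i.e. argmin_x t*0.5*(x-a)^2 + 0.5*(x-v)^2."""
    return (v + t * a) / (1 + t)

def soft_threshold(v, tau):
    """prox of tau*|x| (soft-thresholding)."""
    return max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0)

def douglas_rachford(a, lam, t=1.0, z=0.0, iters=200):
    for _ in range(iters):
        x = prox_quadratic(z, a, t)              # x_k = prox_{tf}(z_k)
        y = soft_threshold(2 * x - z, t * lam)   # y_k = prox_{tg}(2*x_k - z_k)
        z = z + y - x                            # z_{k+1} = z_k + y_k - x_k
    return prox_quadratic(z, a, t)

# The closed-form solution is soft_threshold(a, lam); with a = 2, lam = 0.5 it is 1.5.
print(douglas_rachford(2.0, 0.5))  # converges to 1.5
```

On this well-conditioned toy problem the iterates converge quickly; the abstract's point is precisely that on ill-conditioned problems the same fixed-point iteration can need vastly more iterations.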
In this thesis, a few methods for alleviating this slow convergence are studied, which can be divided into two main approaches. The first consists of heuristic methods that can be applied to a range of fixed-point algorithms; they are based on an understanding of the typical behavior of these algorithms. While these methods are shown to converge, they come with no guarantees on improved convergence rates.
The other approach studies the theoretical rates of a class of projection methods that are used to solve convex feasibility problems. These are problems where the goal is to find a point in the intersection of two, or possibly more, convex sets. A study of how the parameters in the algorithm affect the theoretical convergence rate is presented, as well as how they can be chosen to optimize this rate.
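To make the feasibility setting concrete (an illustrative sketch, not the thesis's algorithm), the snippet below runs relaxed alternating projections on a two-set problem in the plane. The sets, starting point, and relaxation parameter alpha are arbitrary choices; alpha stands in for the kind of algorithm parameter whose effect on the convergence rate the abstract describes.

```python
import math

# Convex feasibility toy problem in R^2: find a point in the intersection of
#   B = the closed unit ball, and H = the half-plane {x : x[0] >= 0.5}.
# Sets, starting point, and relaxation parameter are illustrative choices.

def project_ball(x):
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

def project_halfplane(x):
    return x if x[0] >= 0.5 else (0.5, x[1])

def relax(p, x, alpha):
    """Relaxed projection (1 - alpha)*x + alpha*P(x); alpha = 1 is the plain projection."""
    px = p(x)
    return ((1 - alpha) * x[0] + alpha * px[0], (1 - alpha) * x[1] + alpha * px[1])

def alternating_projections(x, alpha=1.0, iters=500):
    for _ in range(iters):
        x = relax(project_ball, relax(project_halfplane, x, alpha), alpha)
    return x

x = alternating_projections((-2.0, 1.0))
print(x)  # a point (approximately) in the intersection of B and H
```

For this problem the iteration also converges for over- and under-relaxed choices of alpha between 0 and 2; tuning such parameters to optimize the linear convergence rate is the type of question the second part of the thesis addresses.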
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/fca6676b98154fbebeaeb74cd7333fa2
 author
Fält, Mattias (LU)
 supervisor

Pontus Giselsson (LU)
Bo Bernhardsson (LU)
Sebastian Banert (LU)
 opponent

Professor Russell Luke, University of Göttingen, Germany
 organization
 publishing date
2021-02-02
 type
 Thesis
 publication status
 published
 subject
 keywords
First-order methods, Convex Optimization, Nonsmooth Optimization, Large-scale Optimization, Iterative Methods, Douglas-Rachford splitting, Line Search
 volume
 TFRT
 issue
 1130
 pages
 201 pages
 publisher
 Department of Automatic Control, Lund University
 defense location
Lecture hall A, building KC4, Naturvetarvägen 18, Lund University, Faculty of Engineering LTH, Lund. Join via Zoom: https://luse.zoom.us/j/62449887146
 defense date
2021-02-26 10:15
 ISSN
0280-5316
 ISBN
978-91-7895-764-4
978-91-7895-763-7
 project
Large-Scale Optimization
 language
 English
 LU publication?
 yes
 id
 fca6676b98154fbebeaeb74cd7333fa2
 date added to LUP
 2021-01-27 14:37:29
 date last changed
 2021-02-12 12:36:47
@phdthesis{fca6676b98154fbebeaeb74cd7333fa2,
  abstract = {Nonsmooth convex optimization problems occur in all fields of engineering. A common approach to solving this class of problems is proximal algorithms, or splitting methods. These first-order optimization algorithms are often simple, well suited to solving large-scale problems, and have a low computational cost per iteration. Essentially, they encode the solution to an optimization problem as a fixed point of some operator, and iterating this operator eventually results in convergence to an optimal point. However, as for other first-order methods, the convergence rate is heavily dependent on the conditioning of the problem. Even though the per-iteration cost is usually low, the number of iterations can become prohibitively large for ill-conditioned problems, especially if a high-accuracy solution is sought.
    In this thesis, a few methods for alleviating this slow convergence are studied, which can be divided into two main approaches. The first consists of heuristic methods that can be applied to a range of fixed-point algorithms; they are based on an understanding of the typical behavior of these algorithms. While these methods are shown to converge, they come with no guarantees on improved convergence rates.
    The other approach studies the theoretical rates of a class of projection methods that are used to solve convex feasibility problems. These are problems where the goal is to find a point in the intersection of two, or possibly more, convex sets. A study of how the parameters in the algorithm affect the theoretical convergence rate is presented, as well as how they can be chosen to optimize this rate.},
  author = {Fält, Mattias},
  isbn = {978-91-7895-764-4},
  issn = {0280-5316},
  language = {eng},
  month = {02},
  number = {1130},
  publisher = {Department of Automatic Control, Lund University},
  school = {Lund University},
  title = {Convergence Analysis and Improvements for Projection Algorithms and Splitting Methods},
  url = {https://lup.lub.lu.se/search/ws/files/90535299/thesis.pdf},
  volume = {TFRT},
  year = {2021},
}