
Lund University Publications


Convergence Analysis and Improvements for Projection Algorithms and Splitting Methods

Fält, Mattias (2021) TFRT (1130).
Abstract
Non-smooth convex optimization problems occur in all fields of engineering. A common approach to solving this class of problems is proximal algorithms, or splitting methods. These first-order optimization algorithms are often simple, well suited to large-scale problems, and have a low computational cost per iteration. Essentially, they encode the solution to an optimization problem as a fixed point of some operator, and iterating this operator eventually results in convergence to an optimal point. However, as for other first-order methods, the convergence rate depends heavily on the conditioning of the problem. Even though the per-iteration cost is usually low, the number of iterations can become prohibitively large for ill-conditioned problems, especially if a high-accuracy solution is sought.
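To illustrate the fixed-point viewpoint described above, here is a minimal sketch of one such proximal algorithm, proximal gradient (ISTA), applied to a lasso problem. This is a generic textbook example, not a method specific to the thesis; all names, sizes, and parameter values below are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, gamma, iters=5000):
    # Fixed-point iteration x <- prox_{gamma*lam*||.||_1}(x - gamma * A^T (A x - b)).
    # A minimizer of 0.5||Ax - b||^2 + lam*||x||_1 is exactly a fixed point
    # of this map, so iterating it drives x toward an optimal point.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - gamma * A.T @ (A @ x - b), gamma * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.5
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, L = Lipschitz const. of the gradient
x = proximal_gradient(A, b, lam, gamma)
```

The per-iteration cost here is just a matrix-vector product and an elementwise threshold, matching the abstract's point that such methods are cheap per iteration; the number of iterations needed, however, grows with the conditioning of `A`.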

In this thesis, a few methods for alleviating this slow convergence are studied, which can be divided into two main approaches. The first consists of heuristic methods that can be applied to a range of fixed-point algorithms and are based on an understanding of the typical behavior of these algorithms. While these methods are shown to converge, they come with no guarantees of improved convergence rates.
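One heuristic of this general flavor can be sketched as a line search along the fixed-point direction: take the step d = T(x) - x suggested by the operator, try a few longer steps along it, and keep the candidate with the smallest fixed-point residual. This is a simplified illustration of the idea, not the exact schemes analyzed in the thesis; the operator, step multipliers, and test problem below are all illustrative.

```python
import numpy as np

def fp_line_search(T, x0, iters=200, alphas=(1.0, 2.0, 4.0, 8.0)):
    # Heuristic acceleration of the fixed-point iteration x <- T(x):
    # move along the fixed-point direction d = T(x) - x, testing a few
    # step lengths and keeping the one with the smallest residual ||T(y) - y||.
    x = x0
    for _ in range(iters):
        d = T(x) - x
        x = min((x + a * d for a in alphas),
                key=lambda y: np.linalg.norm(T(y) - y))
    return x

# Illustrative fixed-point map: a gradient step on an ill-conditioned
# quadratic 0.5 x^T Q x - q^T x, whose fixed point solves Q x = q.
Q = np.diag([1.0, 100.0])
q = np.array([1.0, 1.0])
gamma = 1.0 / 100.0                      # 1/L step size
T = lambda x: x - gamma * (Q @ x - q)
x = fp_line_search(T, np.zeros(2))
```

Because alpha = 1.0 is always among the candidates and the candidate with the smallest residual is kept, each iteration does at least as well (in residual) as the plain iteration; on ill-conditioned problems the longer steps can help substantially, but, as the abstract notes, such heuristics carry no rate guarantees in general.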

The other approach studies the theoretical rates of a class of projection methods that are used to solve convex feasibility problems. These are problems where the goal is to find a point in the intersection of two, or possibly more, convex sets. A study of how the parameters in the algorithm affect the theoretical convergence rate is presented, as well as how they can be chosen to optimize this rate.
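The simplest member of this class of projection methods is the method of alternating projections: repeatedly project onto each set in turn until the iterates land in the intersection. A minimal sketch for two convex sets, a unit ball and a hyperplane, is shown below; the sets and starting point are illustrative, and the thesis's parameterized variants (e.g. relaxed projections) are not reproduced here.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball ||x|| <= radius.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def project_hyperplane(x, a, b):
    # Euclidean projection onto the hyperplane {y : a . y = b}.
    return x - (a @ x - b) / (a @ a) * a

def alternating_projections(x, a, b, iters=200):
    # Convex feasibility: find a point in C ∩ D, with C the unit ball
    # and D a hyperplane, by alternating the two projections.
    for _ in range(iters):
        x = project_ball(project_hyperplane(x, a, b))
    return x

a = np.array([1.0, 1.0])           # the sets intersect: dist(0, D) = 1/sqrt(2) < 1
x = alternating_projections(np.array([3.0, -1.0]), a, b=1.0)
```

How fast the iterates converge depends on the geometry of the intersection (roughly, the angle at which the sets meet), which is the kind of dependence the parameter study in the thesis aims to characterize and optimize.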
author
supervisor
opponent
  • Professor Luke, Russell, University of Göttingen, Germany
organization
publishing date
type
Thesis
publication status
published
subject
keywords
First-order methods, Convex Optimization, Nonsmooth Optimization, Large-scale Optimization, Iterative Methods, Douglas-Rachford splitting, Line Search
volume
TFRT
issue
1130
pages
201 pages
publisher
Department of Automatic Control, Lund University
defense location
Lecture hall A, building KC4, Naturvetarvägen 18, Lund University, Faculty of Engineering LTH, Lund
Join via Zoom: https://lu-se.zoom.us/j/62449887146
defense date
2021-02-26 10:15:00
ISSN
0280-5316
ISBN
978-91-7895-764-4
978-91-7895-763-7
project
Large-Scale Optimization
language
English
LU publication?
yes
id
fca6676b-9815-4fbe-beae-b74cd7333fa2
date added to LUP
2021-01-27 14:37:29
date last changed
2021-02-12 12:36:47
@phdthesis{fca6676b-9815-4fbe-beae-b74cd7333fa2,
  abstract     = {{Non-smooth convex optimization problems occur in all fields of engineering. A common approach to solving this class of problems is proximal algorithms, or splitting methods. These first-order optimization algorithms are often simple,  well suited to solve large-scale problems and have a low computational cost per iteration. Essentially, they encode the solution to an optimization problem as a fixed point of some operator, and iterating this operator eventually results in convergence to an optimal point. However, as for other first order methods, the convergence rate is heavily dependent on the conditioning of the problem. Even though the per-iteration cost is usually low, the number of iterations can become prohibitively large for ill-conditioned problems, especially if a high accuracy solution is sought.<br/><br/>In this thesis, a few methods for alleviating this slow convergence are studied, which can be divided into two main approaches. The first are heuristic methods that can be applied to a range of fixed-point algorithms. They are based on understanding typical behavior of these algorithms. While these methods are shown to converge, they come with no guarantees on improved convergence rates.<br/><br/>The other approach studies the theoretical rates of a class of projection methods that are used to solve convex feasibility problems. These are problems where the goal is to find a point in the intersection of two, or possibly more, convex sets. A study of how the parameters in the algorithm affect the theoretical convergence rate is presented, as well as how they can be chosen to optimize this rate.}},
  author       = {{Fält, Mattias}},
  isbn         = {{978-91-7895-764-4}},
  issn         = {{0280-5316}},
  keywords     = {{First-order methods; Convex Optimization; Nonsmooth Optimization; Large-scale Optimization; Iterative Methods; Douglas-Rachford splitting; Line Search}},
  language     = {{eng}},
  month        = {{02}},
  number       = {{1130}},
  publisher    = {{Department of Automatic Control, Lund University}},
  school       = {{Lund University}},
  title        = {{Convergence Analysis and Improvements for Projection Algorithms and Splitting Methods}},
  url          = {{https://lup.lub.lu.se/search/files/90535299/thesis.pdf}},
  volume       = {{TFRT}},
  year         = {{2021}},
}