Lund University Publications

Accelerated Forward-Backward Optimization Using Deep Learning

Banert, Sebastian; Rudzusika, Jevgenija; Öktem, Ozan; and Adler, Jonas (2024). In SIAM Journal on Optimization 34(2), pp. 1236–1263.
Abstract

We propose several deep-learning accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes like FISTA; however, instead of the classical approach of proving convergence for a choice of parameters, such as a step size, we prove convergence whenever the update is chosen in a specific set. Rather than picking a point in this set via some predefined method, we train a deep neural network to pick the best update within a given space. Finally, we show that the method is applicable to several cases of smooth and nonsmooth optimization, with results superior to established accelerated solvers.
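
The mechanism sketched in the abstract is easy to prototype: compute the plain forward-backward update, let a learned model propose a candidate next iterate, and accept the proposal only insofar as it stays inside a set that preserves convergence. The Python/NumPy sketch below illustrates this on a LASSO-type problem; the ball-shaped safe set, the FISTA-style extrapolation standing in for the trained network, and all function names are illustrative assumptions, not the paper's exact construction (the paper characterizes its own convergence-preserving update set).

import numpy as np

def prox_g(x, t):
    # Proximal map of g(x) = ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_f(x, A, b):
    # Gradient of f(x) = 0.5 * ||A x - b||^2.
    return A.T @ (A @ x - b)

def proposal(x, x_prev, k, A, b, t):
    # FISTA-style extrapolated update, standing in for the trained
    # network that would propose the next iterate in the learned solver.
    y = x + (k / (k + 3.0)) * (x - x_prev)
    return prox_g(y - t * grad_f(y, A, b), t)

def safeguarded_step(x, x_prev, k, A, b, t, rho):
    # Plain forward-backward update: convergent on its own.
    base = prox_g(x - t * grad_f(x, A, b), t)
    # Candidate from the (stand-in) learned proposal.
    cand = proposal(x, x_prev, k, A, b, t)
    # Illustrative safe set: a ball of radius rho around the plain update.
    # Summable radii keep the deviations small enough that the base
    # scheme's convergence survives (standard inexact-FB argument).
    dev = cand - base
    nrm = np.linalg.norm(dev)
    return cand if nrm <= rho else base + (rho / nrm) * dev

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, with L = ||A||^2
x = x_prev = np.zeros(50)
for k in range(200):
    x, x_prev = safeguarded_step(x, x_prev, k, A, b, t, 1.0 / (k + 1) ** 2), x

In the paper's setting the proposal comes from a trained network and the admissible set is derived from the convergence analysis; the projection step here merely mimics that safeguarding role.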

author: Banert, Sebastian; Rudzusika, Jevgenija; Öktem, Ozan; Adler, Jonas
publishing date: 2024
type: Contribution to journal
publication status: published
keywords: convex optimization, deep learning, inverse problems, proximal-gradient algorithm
in: SIAM Journal on Optimization
volume: 34
issue: 2
pages: 1236–1263 (28 pages)
publisher: Society for Industrial and Applied Mathematics
external identifiers: scopus:85190534496
ISSN: 1052-6234
DOI: 10.1137/22M1532548
language: English
LU publication?: yes
id: 3c5f49cc-a846-4bd7-b62c-8d971763ed7f
date added to LUP: 2024-04-29 10:23:43
date last changed: 2024-04-29 10:24:51
@article{3c5f49cc-a846-4bd7-b62c-8d971763ed7f,
  abstract     = {{We propose several deep-learning accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes like FISTA, but instead of the classical approach of proving convergence for a choice of parameters, such as a step-size, we show convergence whenever the update is chosen in a specific set. Rather than picking a point in this set using some predefined method, we train a deep neural network to pick the best update within a given space. Finally, we show that the method is applicable to several cases of smooth and nonsmooth optimization and show superior results to established accelerated solvers.}},
  author       = {{Banert, Sebastian and Rudzusika, Jevgenija and Öktem, Ozan and Adler, Jonas}},
  issn         = {{1052-6234}},
  keywords     = {{convex optimization; deep learning; inverse problems; proximal-gradient algorithm}},
  language     = {{eng}},
  number       = {{2}},
  pages        = {{1236--1263}},
  publisher    = {{Society for Industrial and Applied Mathematics}},
  series       = {{SIAM Journal on Optimization}},
  title        = {{Accelerated Forward-Backward Optimization Using Deep Learning}},
  url          = {{http://dx.doi.org/10.1137/22M1532548}},
  doi          = {{10.1137/22M1532548}},
  volume       = {{34}},
  year         = {{2024}},
}