Lund University Publications

SRKCD: A stabilized Runge–Kutta method for stochastic optimization

Stillfjord, Tony and Williamson, Måns (2023) In Journal of Computational and Applied Mathematics 417.
Abstract

We introduce a family of stochastic optimization methods based on the Runge–Kutta–Chebyshev (RKC) schemes. The RKC methods are explicit methods originally designed for solving stiff ordinary differential equations by ensuring that their stability regions are of maximal size. In the optimization context, this allows for larger step sizes (learning rates) and better robustness compared to e.g. the popular stochastic gradient descent method. Our main contribution is a convergence proof for essentially all stochastic Runge–Kutta optimization methods. This shows convergence in expectation with an optimal sublinear rate under standard assumptions of strong convexity and Lipschitz-continuous gradients. For non-convex objectives, we get convergence to zero in expectation of the gradients. The proof requires certain natural conditions on the Runge–Kutta coefficients, and we further demonstrate that the RKC schemes satisfy these. Finally, we illustrate the improved stability properties of the methods in practice by performing numerical experiments on both a small-scale test example and on a problem arising from an image classification application in machine learning.
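
As a concrete illustration of the scheme family described in the abstract, the sketch below applies a first-order Runge–Kutta–Chebyshev iteration with damping to the gradient flow w' = -grad F(w), calling a fresh stochastic gradient at every internal stage. The coefficients follow the standard van der Houwen–Sommeijer construction; the function names, the damping parameter eps, the stage count s, and the toy problem are illustrative assumptions, not the authors' exact SRKCD formulation (see the article for the precise coefficients and convergence conditions).

import numpy as np

def rkc_coefficients(s, eps=0.05):
    # First-order Runge-Kutta-Chebyshev coefficients (standard van der
    # Houwen / Sommeijer construction) with damping parameter eps.
    w0 = 1.0 + eps / s**2
    # Chebyshev values T_j(w0) and derivatives T_j'(w0) via the
    # three-term recurrences.
    T = np.empty(s + 1)
    dT = np.empty(s + 1)
    T[0], T[1] = 1.0, w0
    dT[0], dT[1] = 0.0, 1.0
    for j in range(2, s + 1):
        T[j] = 2.0 * w0 * T[j - 1] - T[j - 2]
        dT[j] = 2.0 * T[j - 1] + 2.0 * w0 * dT[j - 1] - dT[j - 2]
    w1 = T[s] / dT[s]
    return w0, w1, T

def srkcd_step(w, stoch_grad, h, s=10, eps=0.05):
    # One s-stage stabilized step on the gradient flow w' = -grad F(w),
    # using a new stochastic gradient estimate at every internal stage.
    w0, w1, T = rkc_coefficients(s, eps)
    k_prev2 = w
    k_prev1 = w - (w1 / w0) * h * stoch_grad(w)          # stage 1
    for j in range(2, s + 1):
        mu = 2.0 * w0 * T[j - 1] / T[j]
        nu = -T[j - 2] / T[j]
        mu_tilde = 2.0 * w1 * T[j - 1] / T[j]
        k = ((1.0 - mu - nu) * w + mu * k_prev1 + nu * k_prev2
             - mu_tilde * h * stoch_grad(k_prev1))
        k_prev2, k_prev1 = k_prev1, k
    return k_prev1

if __name__ == "__main__":
    # Toy strongly convex test: F(w) = 0.5 w^T A w with a stiff spectrum
    # and additive gradient noise.
    rng = np.random.default_rng(0)
    A = np.diag(np.linspace(1.0, 100.0, 20))
    stoch_grad = lambda w: A @ w + 0.01 * rng.standard_normal(w.shape)
    w = np.ones(20)
    for _ in range(100):
        w = srkcd_step(w, stoch_grad, h=0.5, s=10)
    print(np.linalg.norm(w))   # decays to the noise floor

In this toy problem the gradient Lipschitz constant is L = 100, so plain stochastic gradient descent is unstable for any fixed step size above 2/L = 0.02, whereas the stabilized iteration above tolerates h = 0.5; this is the step-size advantage the abstract refers to. Increasing the stage count s enlarges the stability interval of the first-order RKC scheme roughly proportionally to s².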
author: Stillfjord, Tony and Williamson, Måns
publishing date: 2023
type: Contribution to journal
publication status: published
keywords: Convergence analysis, Runge–Kutta–Chebyshev, Stability, Stochastic optimization
in: Journal of Computational and Applied Mathematics
volume: 417
article number: 114575
publisher: Elsevier
external identifiers: scopus:85134801255
ISSN: 0377-0427
DOI: 10.1016/j.cam.2022.114575
language: English
LU publication?: yes
id: 5b38c89b-b72b-40d3-90a7-194ea1bd7d71
date added to LUP: 2022-08-25 15:48:35
date last changed: 2022-08-25 15:48:35
@article{5b38c89b-b72b-40d3-90a7-194ea1bd7d71,
  abstract     = {{We introduce a family of stochastic optimization methods based on the Runge–Kutta–Chebyshev (RKC) schemes. The RKC methods are explicit methods originally designed for solving stiff ordinary differential equations by ensuring that their stability regions are of maximal size. In the optimization context, this allows for larger step sizes (learning rates) and better robustness compared to e.g. the popular stochastic gradient descent method. Our main contribution is a convergence proof for essentially all stochastic Runge–Kutta optimization methods. This shows convergence in expectation with an optimal sublinear rate under standard assumptions of strong convexity and Lipschitz-continuous gradients. For non-convex objectives, we get convergence to zero in expectation of the gradients. The proof requires certain natural conditions on the Runge–Kutta coefficients, and we further demonstrate that the RKC schemes satisfy these. Finally, we illustrate the improved stability properties of the methods in practice by performing numerical experiments on both a small-scale test example and on a problem arising from an image classification application in machine learning.}},
  author       = {{Stillfjord, Tony and Williamson, Måns}},
  issn         = {{0377-0427}},
  keywords     = {{Convergence analysis; Runge–Kutta–Chebyshev; Stability; Stochastic optimization}},
  language     = {{eng}},
  publisher    = {{Elsevier}},
  series       = {{Journal of Computational and Applied Mathematics}},
  title        = {{SRKCD: A stabilized Runge–Kutta method for stochastic optimization}},
  url          = {{http://dx.doi.org/10.1016/j.cam.2022.114575}},
  doi          = {{10.1016/j.cam.2022.114575}},
  volume       = {{417}},
  year         = {{2023}},
}