
Lund University Publications

FAM: Relative Flatness Aware Minimization

Adilova, Linara; Abourayya, Amr; Li, Jianning; Dada, Amin; Petzka, Henning; Egger, Jan; Kleesiek, Jens and Kamp, Michael (2023) 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning, TAG-ML 2023, held at the International Conference on Machine Learning, ICML 2023. In Proceedings of Machine Learning Research 221, pp. 37-49.
Abstract

Flatness of the loss curve around a model at hand has been shown to empirically correlate with its generalization ability. Optimizing for flatness was proposed as early as 1994 by Hochreiter and Schmidhuber, and has been followed by more recent, successful sharpness-aware optimization techniques. Their widespread adoption in practice, though, is hampered by the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization. Recent theoretical work suggests that a particular relative flatness measure can be connected to generalization and solves the reparameterization curse. In this paper, we derive a regularizer based on this relative flatness that is easy to compute, fast, efficient, and works with arbitrary loss functions. It requires computing the Hessian of only a single layer of the network, which makes it applicable to large neural networks and avoids an expensive mapping of the loss surface in the vicinity of the model. In an extensive empirical evaluation we show that this relative flatness aware minimization (FAM) improves generalization in a multitude of applications and models, both in finetuning and standard training. We make the code available on GitHub.
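
The abstract states that the regularizer only needs the Hessian of a single layer. As a rough illustration (this is a sketch under assumptions, not the authors' exact FAM regularizer; see the PDF linked below for the actual method), the following PyTorch snippet shows one way a relative-flatness-style penalty for a single layer could be computed: the squared Frobenius norm of that layer's weights times a Hutchinson estimate of the trace of its Hessian, obtained through Hessian-vector products so the full Hessian is never materialized. The function name relative_flatness_penalty, the argument names, and the single-probe default are hypothetical.

import torch

def relative_flatness_penalty(loss, layer, n_probes=1):
    """Illustrative relative-flatness-style penalty for one layer:
    ||W||_F^2 times a Hutchinson estimate of Tr(H), where H is the
    Hessian of `loss` w.r.t. that layer's weight matrix. Only
    Hessian-vector products for the single layer are required."""
    w = layer.weight
    # Gradient w.r.t. the chosen layer, kept in the graph so it can be
    # differentiated again for Hessian-vector products.
    (grad_w,) = torch.autograd.grad(loss, w, create_graph=True)
    trace_est = 0.0
    for _ in range(n_probes):
        # Rademacher probe vector v with entries in {-1, +1}
        v = (torch.rand_like(w) < 0.5).to(w.dtype) * 2 - 1
        # Hessian-vector product H v via double backpropagation
        (hv,) = torch.autograd.grad(grad_w, w, grad_outputs=v,
                                    retain_graph=True, create_graph=True)
        trace_est = trace_est + (hv * v).sum()
    trace_est = trace_est / n_probes
    return (w ** 2).sum() * trace_est

# Usage sketch (lam, model.fc, x, y are placeholders):
#   loss = torch.nn.functional.cross_entropy(model(x), y)
#   reg  = relative_flatness_penalty(loss, model.fc)
#   (loss + lam * reg).backward()

Because the penalty is built from gradients with create_graph=True, it can be added to the training loss and differentiated like any other regularization term; only the chosen layer's parameters are ever involved in the second-order computation.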
author
Adilova, Linara; Abourayya, Amr; Li, Jianning; Dada, Amin; Petzka, Henning; Egger, Jan; Kleesiek, Jens and Kamp, Michael
organization
publishing date
2023
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Proceedings of Machine Learning Research
series title
Proceedings of Machine Learning Research
volume
221
pages
13 pages
conference name
2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning, TAG-ML 2023, held at the International Conference on Machine Learning, ICML 2023
conference location
Honolulu, United States
conference dates
2023-07-28
external identifiers
  • scopus:85178655816
language
English
LU publication?
yes
id
8e15b359-a4f0-44d4-a00f-487a642d9970
alternative location
https://proceedings.mlr.press/v221/adilova23a/adilova23a.pdf
date added to LUP
2024-01-11 12:51:59
date last changed
2024-01-11 12:54:06
@inproceedings{8e15b359-a4f0-44d4-a00f-487a642d9970,
  abstract     = {{Flatness of the loss curve around a model at hand has been shown to empirically correlate with its generalization ability. Optimizing for flatness was proposed as early as 1994 by Hochreiter and Schmidhuber, and has been followed by more recent, successful sharpness-aware optimization techniques. Their widespread adoption in practice, though, is hampered by the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization. Recent theoretical work suggests that a particular relative flatness measure can be connected to generalization and solves the reparameterization curse. In this paper, we derive a regularizer based on this relative flatness that is easy to compute, fast, efficient, and works with arbitrary loss functions. It requires computing the Hessian of only a single layer of the network, which makes it applicable to large neural networks and avoids an expensive mapping of the loss surface in the vicinity of the model. In an extensive empirical evaluation we show that this relative flatness aware minimization (FAM) improves generalization in a multitude of applications and models, both in finetuning and standard training. We make the code available on GitHub.}},
  author       = {{Adilova, Linara and Abourayya, Amr and Li, Jianning and Dada, Amin and Petzka, Henning and Egger, Jan and Kleesiek, Jens and Kamp, Michael}},
  booktitle    = {{Proceedings of Machine Learning Research}},
  language     = {{eng}},
  pages        = {{37--49}},
  series       = {{Proceedings of Machine Learning Research}},
  title        = {{FAM: Relative Flatness Aware Minimization}},
  url          = {{https://proceedings.mlr.press/v221/adilova23a/adilova23a.pdf}},
  volume       = {{221}},
  year         = {{2023}},
}