
Lund University Publications


Non-attracting regions of local minima in deep and wide neural networks

Petzka, Henning LU and Sminchisescu, Cristian LU (2021) In Journal of Machine Learning Research 22. p.1-34
Abstract

Understanding the loss surface of neural networks is essential for the design of models with predictable performance and their success in applications. Experimental results suggest that sufficiently deep and wide neural networks are not negatively impacted by suboptimal local minima. Despite recent progress, the reason for this outcome is not fully understood. Could deep networks have very few suboptimal local optima, if any at all? Or could all of them be equally good? We provide a construction to show that suboptimal local minima (i.e., non-global ones), even though degenerate, exist for fully connected neural networks with sigmoid activation functions. The local minima obtained by our construction belong to a connected set of local solutions that can be escaped from via a non-increasing path on the loss curve. For extremely wide neural networks of decreasing width after the wide layer, we prove that every suboptimal local minimum belongs to such a connected set. This provides a partial explanation for the successful application of deep neural networks. In addition, we characterize under what conditions the same construction leads to saddle points instead of local minima for deep neural networks.

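To make the setting of the abstract concrete, below is a minimal numerical sketch (Python with NumPy) of the kind of object studied: the empirical loss of a small fully connected network with sigmoid hidden units, together with a crude random-perturbation probe of whether a candidate parameter point behaves like a local minimum. All data, network sizes, and function names are hypothetical illustrations; this is not the authors' construction, which is analytical and concerns exact (degenerate) local minima and non-increasing escape paths.

# Minimal sketch (hypothetical setup, not the paper's construction): sample
# random perturbations around a candidate parameter point of a small fully
# connected sigmoid network and check whether any of them decrease the loss.
# Finding no decrease is consistent with, but does not prove, local minimality.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, X):
    # Two-layer fully connected network with sigmoid hidden units.
    W1, b1, W2, b2 = params
    return sigmoid(X @ W1 + b1) @ W2 + b2

def mse_loss(params, X, y):
    return np.mean((forward(params, X) - y) ** 2)

# Toy regression data (hypothetical; any small data set would do).
X = rng.normal(size=(32, 2))
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(32, 1))

# Candidate parameter point (here simply a random initialization).
hidden = 4
params = [rng.normal(size=(2, hidden)), np.zeros(hidden),
          rng.normal(size=(hidden, 1)), np.zeros(1)]

def probe_local_minimality(params, X, y, radius=1e-3, trials=200):
    # Return the largest loss decrease found among random perturbations
    # of the parameters within the given radius.
    base = mse_loss(params, X, y)
    best_drop = 0.0
    for _ in range(trials):
        perturbed = [p + radius * rng.normal(size=p.shape) for p in params]
        best_drop = max(best_drop, base - mse_loss(perturbed, X, y))
    return best_drop

print("largest observed loss decrease:", probe_local_minimality(params, X, y))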
Please use this URL to cite or link to this publication:
author
Petzka, Henning and Sminchisescu, Cristian
organization
publishing date
2021
type
Contribution to journal
publication status
published
subject
keywords
Deep learning, Global minima, Local minima, Neural network, Path
in
Journal of Machine Learning Research
volume
22
pages
34 pages
publisher
Microtome Publishing
external identifiers
  • scopus:85112433645
ISSN
1532-4435
language
English
LU publication?
yes
id
cc6e3533-f45d-46d2-9330-4d5334d2261c
alternative location
https://www.jmlr.org/papers/volume22/19-586/19-586.pdf
date added to LUP
2021-09-20 12:51:38
date last changed
2022-05-05 03:47:26
@article{cc6e3533-f45d-46d2-9330-4d5334d2261c,
  abstract     = {{Understanding the loss surface of neural networks is essential for the design of models with predictable performance and their success in applications. Experimental results suggest that sufficiently deep and wide neural networks are not negatively impacted by suboptimal local minima. Despite recent progress, the reason for this outcome is not fully understood. Could deep networks have very few suboptimal local optima, if any at all? Or could all of them be equally good? We provide a construction to show that suboptimal local minima (i.e., non-global ones), even though degenerate, exist for fully connected neural networks with sigmoid activation functions. The local minima obtained by our construction belong to a connected set of local solutions that can be escaped from via a non-increasing path on the loss curve. For extremely wide neural networks of decreasing width after the wide layer, we prove that every suboptimal local minimum belongs to such a connected set. This provides a partial explanation for the successful application of deep neural networks. In addition, we characterize under what conditions the same construction leads to saddle points instead of local minima for deep neural networks.}},
  author       = {{Petzka, Henning and Sminchisescu, Cristian}},
  issn         = {{1532-4435}},
  keywords     = {{Deep learning; Global minima; Local minima; Neural network; Path}},
  language     = {{eng}},
  month        = {{07}},
  pages        = {{1--34}},
  publisher    = {{Microtome Publishing}},
  series       = {{Journal of Machine Learning Research}},
  title        = {{Non-attracting regions of local minima in deep and wide neural networks}},
  url          = {{https://www.jmlr.org/papers/volume22/19-586/19-586.pdf}},
  volume       = {{22}},
  year         = {{2021}},
}