*Abstract*:
We study the properties of common loss surfaces through their Hessian matrix. In
particular, in the context of deep learning, we empirically show that the spectrum
of the Hessian is composed of two parts: (1) a bulk centered near zero, and (2)
outliers away from the bulk. We present numerical evidence and mathematical
justifications for the following conjectures laid out by Sagun et al. (2016): fixing the
data, increasing the number of parameters merely scales the bulk of the spectrum;
fixing the dimension and changing the data (for instance adding more clusters
or making the data less separable) only affects the outliers. We believe that
our observations have striking implications for non-convex optimization in high
dimensions. First, the flatness of such landscapes (which can be measured by the
singularity of the Hessian) implies that classical notions of basins of attraction may
be quite misleading, and that the discussion of wide/narrow basins may need a
new perspective centered on over-parametrization and redundancy, which can
create large connected components at the bottom of the landscape. Second, the
dependence of a small number of large eigenvalues on the data distribution can
be linked to the spectrum of the covariance matrix of gradients of model outputs.
With this in mind, we may reevaluate the connections within the data-architecture-algorithm
framework of a model, hoping that it would shed light on the geometry
of high-dimensional and non-convex spaces in modern applications. In particular,
we present a case that links the two observations: small- and large-batch gradient
descent appear to converge to different basins of attraction, but we show that they
are in fact connected through a flat region and so belong to the same basin.
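The bulk-plus-outliers picture can be illustrated with a minimal sketch (not taken from the paper) using an over-parametrized linear least-squares model: with more parameters than samples, the Hessian X^T X / n has at least p - n zero eigenvalues (the bulk), while the few nonzero eigenvalues are inherited from the data covariance (the outliers). The dimensions and setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 100                    # n samples, p parameters (over-parametrized: p > n)
X = rng.standard_normal((n, p))   # data matrix

# For the least-squares loss L(w) = ||Xw - y||^2 / (2n),
# the Hessian is constant: H = X^T X / n (independent of w and y).
H = X.T @ X / n

eigs = np.linalg.eigvalsh(H)      # eigenvalues in ascending order
num_zero = int(np.sum(np.abs(eigs) < 1e-8))
print(f"bulk: {num_zero} eigenvalues at zero (rank deficiency >= {p - n})")
print(f"largest outlier eigenvalue: {eigs[-1]:.2f}")
```

Since rank(H) is at most n, at least p - n eigenvalues are exactly zero; this is the degenerate direction count that the abstract's flatness argument relies on, shown here in the simplest model where the Hessian can be written in closed form.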

Levent Sagun, Utku Evci, Veli Uğur Güney, Yann Dauphin and Léon Bottou: **Empirical Analysis of the Hessian of Over-Parametrized Neural Networks**, *Sixth International Conference on Learning Representations (ICLR), Workshop paper*, 2018.

@inproceedings{sagun-2018,
  author    = {Sagun, Levent and Evci, Utku and G\"{u}ney, Veli U\u{g}ur and Dauphin, Yann and Bottou, L\'{e}on},
  title     = {Empirical Analysis of the {Hessian} of Over-Parametrized Neural Networks},
  booktitle = {Sixth International Conference on Learning Representations (ICLR), Workshop paper},
  year      = {2018},
  url       = {http://leon.bottou.org/papers/sagun-2018},
}
