Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of
Stability
- URL: http://arxiv.org/abs/2209.15594v2
- Date: Mon, 10 Apr 2023 22:32:40 GMT
- Title: Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of
Stability
- Authors: Alex Damian, Eshaan Nichani, Jason D. Lee
- Abstract summary: We show that gradient descent at the edge of stability implicitly follows projected gradient descent (PGD) under the constraint $S(\theta) \le 2/\eta$.
Our analysis provides precise predictions for the loss, sharpness, and deviation from the PGD trajectory throughout training.
- Score: 40.17821914923602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional analyses of gradient descent show that when the largest
eigenvalue of the Hessian, also known as the sharpness $S(\theta)$, is bounded
by $2/\eta$, training is "stable" and the training loss decreases
monotonically. Recent works, however, have observed that this assumption does
not hold when training modern neural networks with full batch or large batch
gradient descent. Most recently, Cohen et al. (2021) observed two important
phenomena. The first, dubbed progressive sharpening, is that the sharpness
steadily increases throughout training until it reaches the instability cutoff
$2/\eta$. The second, dubbed edge of stability, is that the sharpness hovers at
$2/\eta$ for the remainder of training while the loss continues decreasing,
albeit non-monotonically. We demonstrate that, far from being chaotic, the
dynamics of gradient descent at the edge of stability can be captured by a
cubic Taylor expansion: as the iterates diverge in the direction of the top
eigenvector of the Hessian due to instability, the cubic term in the local
Taylor expansion of the loss function causes the curvature to decrease until
stability is restored. This property, which we call self-stabilization, is a
general property of gradient descent and explains its behavior at the edge of
stability. A key consequence of self-stabilization is that gradient descent at
the edge of stability implicitly follows projected gradient descent (PGD) under
the constraint $S(\theta) \le 2/\eta$. Our analysis provides precise
predictions for the loss, sharpness, and deviation from the PGD trajectory
throughout training, which we verify both empirically in a number of standard
settings and theoretically under mild conditions. Our analysis uncovers the
mechanism for gradient descent's implicit bias towards stability.
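To make the stability condition in the abstract concrete, here is a minimal toy sketch (our own illustration, not the paper's code): gradient descent on a one-dimensional quadratic whose sharpness is the constant $s$; the iterates contract exactly when $s < 2/\eta$ and oscillate divergently once $s$ exceeds the cutoff.

```python
# Toy illustration of the 2/eta stability threshold (not from the paper):
# on L(theta) = 0.5 * s * theta^2 the sharpness is the constant s, and the
# GD update theta <- (1 - eta * s) * theta contracts iff s < 2 / eta.
def gd_on_quadratic(s, eta, theta0=1.0, steps=30):
    """Run gradient descent on L(theta) = 0.5 * s * theta**2; return the iterates."""
    theta, trajectory = theta0, [theta0]
    for _ in range(steps):
        theta -= eta * s * theta          # gradient step with dL/dtheta = s * theta
        trajectory.append(theta)
    return trajectory

eta = 0.1                                  # instability cutoff is 2 / eta = 20
print(gd_on_quadratic(s=15.0, eta=eta)[-1])   # |1 - eta*s| = 0.5 -> converges to 0
print(gd_on_quadratic(s=25.0, eta=eta)[-1])   # |1 - eta*s| = 1.5 -> diverges
```

On a genuinely non-quadratic loss the sharpness is not constant, and the paper's self-stabilization argument describes how the cubic term in the local Taylor expansion drives it back below $2/\eta$ whenever the iterates start to escape along the top Hessian eigenvector.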
Related papers
- High dimensional analysis reveals conservative sharpening and a stochastic edge of stability [21.12433806766051]
We show that the dynamics of the large eigenvalues of the training loss Hessian have some remarkably robust features across models and in the full batch regime.
There is often an early period of progressive sharpening where the large eigenvalues increase, followed by stabilization at a predictable value known as the edge of stability.
arXiv Detail & Related papers (2024-04-30T04:54:15Z) - Sharpness-Aware Minimization and the Edge of Stability [35.27697224229969]
We show that when training a neural network with gradient descent (GD) using a step size $\eta$, the norm of the Hessian of the loss grows until it approximately reaches $2/\eta$, after which it fluctuates around this value.
We perform a similar calculation to arrive at an "edge of stability" for Sharpness-Aware Minimization (SAM).
Unlike the case for GD, the resulting SAM-edge depends on the norm of the gradient. Using three deep learning training tasks, we see empirically that SAM operates on the edge of stability identified by this analysis.
arXiv Detail & Related papers (2023-09-21T21:15:51Z) - Estimator Meets Equilibrium Perspective: A Rectified Straight Through
Estimator for Binary Neural Networks Training [35.090598013305275]
Binarization of neural networks is a dominant paradigm in neural network compression.
We propose Rectified Straight Through Estimator (ReSTE) to balance the estimating error and the gradient stability.
ReSTE has excellent performance and surpasses the state-of-the-art methods without any auxiliary modules or losses.
arXiv Detail & Related papers (2023-08-13T05:38:47Z) - The Implicit Regularization of Dynamical Stability in Stochastic
Gradient Descent [32.25490196411385]
We study the implicit regularization of stochastic gradient descent (SGD) through the lens of dynamical stability.
We analyze the generalization properties of two-layer ReLU networks and diagonal linear networks.
arXiv Detail & Related papers (2023-05-27T14:54:21Z) - Beyond the Edge of Stability via Two-step Gradient Updates [49.03389279816152]
Gradient Descent (GD) is a powerful workhorse of modern machine learning.
GD's ability to find local minimisers is only guaranteed for losses with Lipschitz gradients.
This work focuses on simple, yet representative, learning problems via analysis of two-step gradient updates.
arXiv Detail & Related papers (2022-06-08T21:32:50Z) - Understanding the unstable convergence of gradient descent [51.40523554349091]
In machine learning applications, step sizes often do not satisfy the classical condition that, for an $L$-smooth cost, the step size be less than $2/L$.
We investigate this unstable convergence phenomenon from first principles, and elucidate key causes behind it.
We also identify its main characteristics, and how they interrelate, offering a transparent view backed by both theory and experiments.
arXiv Detail & Related papers (2022-04-03T11:10:17Z) - High-probability Bounds for Non-Convex Stochastic Optimization with
Heavy Tails [55.561406656549686]
We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails.
We show that a combination of gradient clipping, momentum, and normalized gradient descent yields convergence to critical points in high probability with the best-known iteration complexity for smooth losses.
arXiv Detail & Related papers (2021-06-28T00:17:01Z) - Gradient Descent on Neural Networks Typically Occurs at the Edge of
Stability [94.4070247697549]
Full-batch gradient descent on neural network training objectives operates in a regime we call the Edge of Stability.
In this regime, the maximum eigenvalue of the training loss Hessian hovers just above the numerical value $2/\text{(step size)}$, and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales (see the sharpness-tracking sketch after this list).
arXiv Detail & Related papers (2021-02-26T22:08:19Z) - Fine-Grained Analysis of Stability and Generalization for Stochastic
Gradient Descent [55.85456985750134]
We introduce a new stability measure called on-average model stability, for which we develop novel bounds controlled by the risks of SGD iterates.
This yields generalization bounds depending on the behavior of the best model, and leads to the first-ever-known fast bounds in the low-noise setting.
To the best of our knowledge, this gives the first-ever-known stability and generalization bounds for SGD even with non-differentiable loss functions.
arXiv Detail & Related papers (2020-06-15T06:30:19Z)
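As referenced in the Cohen et al. (2021) entry above, the following is a minimal sketch (our own illustration, not code from any of the listed papers; `estimate_sharpness` and `grad_fn` are hypothetical names) of how the sharpness, i.e. the top Hessian eigenvalue, can be tracked against the $2/\eta$ cutoff using power iteration on finite-difference Hessian-vector products.

```python
import numpy as np

def estimate_sharpness(grad_fn, theta, iters=50, eps=1e-3, seed=0):
    """Estimate the dominant Hessian eigenvalue of the loss at theta.

    grad_fn maps a parameter vector to the loss gradient (same shape).
    Assumes the largest eigenvalue also dominates in magnitude, which is
    typically the case for the sharpness near the edge of stability.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(theta.shape)
    v /= np.linalg.norm(v)
    g0 = grad_fn(theta)
    eig = 0.0
    for _ in range(iters):
        hv = (grad_fn(theta + eps * v) - g0) / eps   # Hessian-vector product H v
        eig = float(v @ hv)                          # Rayleigh quotient (v is unit-norm)
        v = hv / (np.linalg.norm(hv) + 1e-12)        # power-iteration step
    return eig

# Sanity check on a quadratic loss L(theta) = 0.5 * theta @ H @ theta,
# whose sharpness is the largest eigenvalue of H (here 19.5).
H = np.diag([1.0, 3.0, 19.5])
eta = 0.1
sharpness = estimate_sharpness(lambda th: H @ th, np.ones(3))
print(sharpness, "vs. instability cutoff 2/eta =", 2 / eta)
```

In a real training run, calling such a routine every few steps would reproduce the progressive-sharpening and edge-of-stability curves described in the abstracts above.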