Catapult Dynamics and Phase Transitions in Quadratic Nets
- URL: http://arxiv.org/abs/2301.07737v1
- Date: Wed, 18 Jan 2023 19:03:48 GMT
- Title: Catapult Dynamics and Phase Transitions in Quadratic Nets
- Authors: David Meltzer, Junyu Liu
- Abstract summary: We prove that the catapult phase exists in a large class of models, including quadratic models and two-layer, homogeneous neural nets.
We show that for a certain range of learning rates the weight norm decreases whenever the loss becomes large.
We also empirically study learning rates beyond this theoretically derived range and show that the activation map of ReLU nets trained with super-critical learning rates becomes increasingly sparse as we increase the learning rate.
- Score: 10.32543637637479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks trained with gradient descent can undergo non-trivial phase
transitions as a function of the learning rate. In (Lewkowycz et al., 2020) it
was discovered that wide neural nets can exhibit a catapult phase for
super-critical learning rates, where the training loss grows exponentially
quickly at early times before rapidly decreasing to a small value. During this
phase the top eigenvalue of the neural tangent kernel (NTK) also undergoes
significant evolution. In this work, we will prove that the catapult phase
exists in a large class of models, including quadratic models and two-layer,
homogeneous neural nets. To do this, we show that for a certain range of
learning rates the weight norm decreases whenever the loss becomes large. We
also empirically study learning rates beyond this theoretically derived range
and show that the activation map of ReLU nets trained with super-critical
learning rates becomes increasingly sparse as we increase the learning rate.
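The catapult mechanism described in the abstract can be illustrated on the simplest member of this model class: the two-layer linear "u-v" model studied in Lewkowycz et al. (2020), trained on a single example. The sketch below is an illustrative reconstruction, not the authors' code; the width, random seed, learning rate, and the choice of a single training example (x, y) = (1, 0) are assumptions made for the demo. With a learning rate between 2/lambda and 4/lambda (lambda being the top NTK eigenvalue), the loss first grows exponentially and then catapults down, the weight norm shrinks whenever the loss is large, and the NTK eigenvalue decreases significantly.

```python
# Minimal sketch (assumed setup, not the authors' code) of the catapult phase
# in the simplest quadratic model: a two-layer linear net f(x) = u.v x / sqrt(m),
# trained by full-batch gradient descent on one example (x, y) = (1, 0) with
# squared loss.
import numpy as np

rng = np.random.default_rng(0)
m = 1000                                   # network width
u = rng.normal(size=m)                     # first-layer weights
v = rng.normal(size=m)                     # second-layer weights

def ntk_eigenvalue(u, v):
    # Top (and only) NTK eigenvalue for this single-example model.
    return (u @ u + v @ v) / m

eta = 3.0 / ntk_eigenvalue(u, v)           # super-critical: 2/lambda < eta < 4/lambda

for step in range(25):
    f = u @ v / np.sqrt(m)                 # model output on the single example
    loss = 0.5 * f ** 2
    print(f"step {step:2d}  loss {loss:12.4f}  "
          f"|w|^2 {u @ u + v @ v:10.1f}  ntk {ntk_eigenvalue(u, v):5.3f}")
    # Gradient descent on 0.5 * f^2; both gradients use the pre-update weights.
    grad_u = f * v / np.sqrt(m)
    grad_v = f * u / np.sqrt(m)
    u = u - eta * grad_u
    v = v - eta * grad_v
```

For learning rates above roughly 4/lambda this toy model diverges; the abstract's empirical observations about increasingly sparse activation maps at such learning rates concern ReLU nets, which this linear sketch does not capture.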
Related papers
- Understanding the Generalization Benefits of Late Learning Rate Decay [14.471831651042367]
We show the relation between training and testing loss in neural networks.
We introduce a nonlinear model whose loss landscapes mirror those observed for real neural networks.
We demonstrate that an extended phase with a large learning rate steers our model towards the minimum norm solution of the training loss.
arXiv Detail & Related papers (2024-01-21T21:11:09Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Early Stage Convergence and Global Convergence of Training Mildly Parameterized Neural Networks [3.148524502470734]
We show that the loss decreases quickly and by a significant amount in the early stage of training.
We use a microscopic analysis of the activation patterns for the neurons, which helps us derive more powerful lower bounds for the gradient.
arXiv Detail & Related papers (2022-06-05T09:56:50Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Neural networks with late-phase weights [66.72777753269658]
We show that the solutions found by SGD can be further improved by ensembling a subset of the weights in late stages of learning.
At the end of learning, we obtain back a single model by taking a spatial average in weight space.
arXiv Detail & Related papers (2020-07-25T13:23:37Z)
- Plateau Phenomenon in Gradient Descent Training of ReLU networks: Explanation, Quantification and Avoidance [0.0]
In general, neural networks are trained by gradient-type optimization methods.
The loss function decreases rapidly at the beginning of training but then, after a relatively small number of steps, its decrease slows down significantly.
The present work aims to identify and quantify the root causes of the plateau phenomenon.
arXiv Detail & Related papers (2020-07-14T17:33:26Z)
- The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks [43.860358308049044]
In this work, we show that these common perceptions can be completely false in the early phase of learning.
We argue that this surprising simplicity can persist in networks with more layers and with convolutional architectures.
arXiv Detail & Related papers (2020-06-25T17:42:49Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
- The Break-Even Point on Optimization Trajectories of Deep Neural Networks [64.7563588124004]
We argue for the existence of the "break-even" point on this trajectory.
We show that using a large learning rate in the initial phase of training reduces the variance of the gradient.
We also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers.
arXiv Detail & Related papers (2020-02-21T22:55:51Z)