On the Generalization Benefit of Noise in Stochastic Gradient Descent
- URL: http://arxiv.org/abs/2006.15081v1
- Date: Fri, 26 Jun 2020 16:18:54 GMT
- Title: On the Generalization Benefit of Noise in Stochastic Gradient Descent
- Authors: Samuel L. Smith, Erich Elsen, Soham De
- Abstract summary: It has long been argued that minibatch stochastic gradient descent can generalize better than large batch gradient descent in deep neural networks.
We show that small or moderately large batch sizes can substantially outperform very large batches on the test set.
- Score: 34.127525925676416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has long been argued that minibatch stochastic gradient descent can
generalize better than large batch gradient descent in deep neural networks.
However, recent papers have questioned this claim, arguing that this effect is
simply a consequence of suboptimal hyperparameter tuning or insufficient
compute budgets when the batch size is large. In this paper, we perform
carefully designed experiments and rigorous hyperparameter sweeps on a range of
popular models, which verify that small or moderately large batch sizes can
substantially outperform very large batches on the test set. This occurs even
when both models are trained for the same number of iterations and large
batches achieve smaller training losses. Our results confirm that the noise in
stochastic gradients can enhance generalization. We study how the optimal
learning rate schedule changes as the epoch budget grows, and we provide a
theoretical account of our observations based on the stochastic differential
equation perspective of SGD dynamics.
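To make the noise argument concrete, here is a minimal sketch (not the authors' code; the one-dimensional quadratic loss, dataset, learning rate, and step counts are illustrative assumptions). It runs SGD with a small and a very large batch for the same number of iterations and the same learning rate, then measures how much the iterates fluctuate around the minimum.

```python
# Illustrative sketch (not the paper's code): under the SDE view of SGD, the
# gradient-noise variance per step scales roughly like lr / batch_size, so
# small batches inject more noise than large ones at the same learning rate.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                                      # toy dataset size (assumed)
data = rng.normal(loc=2.0, scale=1.0, size=N)   # 1-D "examples"

def sgd_trajectory(batch_size, lr=0.1, steps=500, w0=10.0):
    """Run SGD on the toy loss 0.5 * mean((w - x)^2) and return the iterates."""
    w, path = w0, []
    for _ in range(steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        grad = np.mean(w - batch)               # minibatch gradient of the loss
        w -= lr * grad
        path.append(w)
    return np.array(path)

small = sgd_trajectory(batch_size=16)
large = sgd_trajectory(batch_size=4096)

# Same learning rate, same iteration budget: only the batch size differs.
print("small-batch iterate std (last 200 steps):", small[-200:].std())
print("large-batch iterate std (last 200 steps):", large[-200:].std())
```

Consistent with the stochastic differential equation view, the stationary fluctuations in this toy shrink roughly in proportion to the square root of the batch size, so the large-batch run behaves almost like full-batch gradient descent while the small-batch run keeps injecting noise.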
Related papers
- Inference and Interference: The Role of Clipping, Pruning and Loss
Landscapes in Differentially Private Stochastic Gradient Descent [13.27004430044574]
Differentially private stochastic gradient descent (DP-SGD) is known to have poorer training and test performance than ordinary SGD on large neural networks.
We compare the behavior of the two training processes (DP-SGD and standard SGD) separately in early and late epochs.
We find that while DP-SGD makes slower progress in early stages, it is the behavior in the later stages that determines the end result.
arXiv Detail & Related papers (2023-11-12T13:31:35Z) - Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model [89.8764435351222]
We propose a new family of unbiased estimators, called WTA-CRS, for matrix multiplication with reduced variance.
Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, our proposed estimators exhibit lower variance compared to existing ones.
arXiv Detail & Related papers (2023-05-24T15:52:08Z) - SGD with Large Step Sizes Learns Sparse Features [22.959258640051342]
We showcase important features of the dynamics of Stochastic Gradient Descent (SGD) in the training of neural networks.
We show that the longer large step sizes keep SGD high in the loss landscape, the better the implicit regularization can operate and find sparse representations.
arXiv Detail & Related papers (2022-10-11T11:00:04Z) - Critical Batch Size Minimizes Stochastic First-Order Oracle Complexity of
Deep Learning Optimizer using Hyperparameters Close to One [0.0]
We show that deep learning optimizers using small constant learning rates, hyperparameters close to one, and large batch sizes can find the model parameters of deep neural networks that minimize the loss functions.
Results indicate that Adam using a small constant learning rate, hyperparameters close to one, and the critical batch size minimizing SFO complexity converges faster than Momentum and gradient descent.
arXiv Detail & Related papers (2022-08-21T06:11:23Z) - Clipped Stochastic Methods for Variational Inequalities with
Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for stochastic methods that solve monotone and structured non-monotone variational inequality problems (VIPs).
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA (a minimal clipping sketch appears after this list).
arXiv Detail & Related papers (2022-06-02T15:21:55Z) - Stochastic Training is Not Necessary for Generalization [57.04880404584737]
It is widely believed that the implicit regularization of stochastic gradient descent (SGD) is fundamental to the impressive generalization behavior we observe in neural networks.
In this work, we demonstrate that non-stochastic full-batch training can achieve strong performance on CIFAR-10 that is on par with SGD.
arXiv Detail & Related papers (2021-09-29T00:50:00Z) - Critical Parameters for Scalable Distributed Learning with Large Batches
and Asynchronous Updates [67.19481956584465]
It has been experimentally observed that the efficiency of distributed training with stochastic gradient descent (SGD) depends decisively on the batch size and -- in asynchronous implementations -- on the gradient staleness.
We show that our results are tight and illustrate key findings in numerical experiments.
arXiv Detail & Related papers (2021-03-03T12:08:23Z) - Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections [73.95786440318369]
We focus on the so-called 'implicit effect' of GNIs, which is the effect of the injected noise on the dynamics of stochastic gradient descent (SGD).
We show that this effect induces an asymmetric heavy-tailed noise on gradient updates.
We then formally prove that GNIs induce an 'implicit bias', which varies depending on the heaviness of the tails and the level of asymmetry.
arXiv Detail & Related papers (2021-02-13T21:28:09Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
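The clipping referenced in the heavy-tailed noise entry above can be illustrated with a minimal sketch (an assumption-laden toy, not the SEG/SGDA methods from that paper): the stochastic gradient is rescaled whenever its norm exceeds a threshold, so rare heavy-tailed gradient samples cannot dominate a single update.

```python
# Hedged sketch of gradient-norm clipping on a toy quadratic problem with
# heavy-tailed gradient noise; the threshold, loss, and noise law are
# illustrative assumptions, not settings taken from the papers above.
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    """Rescale grad so that its Euclidean norm is at most max_norm."""
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)

rng = np.random.default_rng(1)
w = np.zeros(5)       # parameters of the toy loss 0.5 * ||w||^2
lr = 0.05
for _ in range(1000):
    # True gradient is w; add Student-t noise (df=2), which is heavy-tailed.
    noisy_grad = w + rng.standard_t(df=2.0, size=w.shape)
    w -= lr * clip_by_norm(noisy_grad, max_norm=1.0)

print("final parameters:", w)   # remain bounded near the minimum at zero
```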