Noise Is Not the Main Factor Behind the Gap Between SGD and Adam on
Transformers, but Sign Descent Might Be
- URL: http://arxiv.org/abs/2304.13960v1
- Date: Thu, 27 Apr 2023 05:41:13 GMT
- Title: Noise Is Not the Main Factor Behind the Gap Between SGD and Adam on
Transformers, but Sign Descent Might Be
- Authors: Frederik Kunstner, Jacques Chen, Jonathan Wilder Lavington, Mark
Schmidt
- Abstract summary: We show that the behavior of Adam with large batches is similar to sign descent with momentum.
We present evidence that stochasticity and heavy-tailed noise are not major factors in the performance gap between SGD and Adam.
- Score: 16.170888329408353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of the Adam optimizer on a wide array of architectures has made
it the default in settings where stochastic gradient descent (SGD) performs
poorly. However, our theoretical understanding of this discrepancy is lagging,
preventing the development of significant improvements on either algorithm.
Recent work advances the hypothesis that Adam and other heuristics like
gradient clipping outperform SGD on language tasks because the distribution of
the error induced by sampling has heavy tails. This suggests that Adam
outperforms SGD because it uses a more robust gradient estimate. We evaluate
this hypothesis by varying the batch size, up to the entire dataset, to control
for stochasticity. We present evidence that stochasticity and heavy-tailed
noise are not major factors in the performance gap between SGD and Adam.
Rather, Adam performs better as the batch size increases, while SGD is less
effective at taking advantage of the reduction in noise. This raises the
question as to why Adam outperforms SGD in the full-batch setting. Through
further investigation of simpler variants of SGD, we find that the behavior of
Adam with large batches is similar to sign descent with momentum.
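To make the last observation concrete, here is a minimal sketch of the two update rules being compared. This is illustrative NumPy pseudocode under standard definitions of the optimizers, not the authors' implementation; the function and hyperparameter names are assumptions.

import numpy as np

def sign_descent_momentum_step(w, grad, m, lr=1e-3, beta=0.9):
    """One step of sign descent with momentum: accumulate a momentum
    buffer, then move by the SIGN of it (its magnitude is discarded)."""
    m = beta * m + (1 - beta) * grad
    w = w - lr * np.sign(m)
    return w, m

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Standard Adam step (t >= 1), shown for comparison."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

Setting beta1 = beta2 = 0 and eps = 0 in adam_step reduces the update to w - lr * np.sign(grad), i.e. plain sign descent, which is why sign descent (with momentum) is a natural simple SGD variant to compare against Adam in the large-batch regime.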
Related papers
- Adam Exploits $\ell_\infty$-geometry of Loss Landscape via Coordinate-wise Adaptivity [6.270305440413688]
We argue that exploiting the $\ell_\infty$-geometry of the loss landscape through coordinate-wise adaptivity is a key advantage of Adam over SGD.
Our experiments confirm that Adam performs much worse when the favorable $\ell_\infty$-geometry is changed, while SGD provably remains unaffected.
arXiv Detail & Related papers (2024-10-10T17:58:53Z) - On Convergence of Adam for Stochastic Optimization under Relaxed
Assumptions [4.9495085874952895]
The Adaptive Momentum Estimation (Adam) algorithm is highly effective in various deep learning tasks.
We show that Adam can find a stationary point at a provable rate, with high probability, under this general noise model.
arXiv Detail & Related papers (2024-02-06T13:19:26Z) - Provable Adaptivity of Adam under Non-uniform Smoothness [79.25087082434975]
Adam is widely adopted in practical applications due to its fast convergence.
Existing convergence analyses for Adam rely on the bounded smoothness assumption.
This paper studies the convergence of randomly reshuffled Adam with diminishing learning rate.
arXiv Detail & Related papers (2022-08-21T14:57:47Z) - Understanding AdamW through Proximal Methods and Scale-Freeness [57.47324825501137]
To improve generalization, Adam is typically used together with a squared $\ell_2$ regularizer (referred to as Adam-$\ell_2$).
AdamW decouples the gradient of the regularizer from the update rule of Adam-$\ell_2$.
We show that the advantage of AdamW over Adam-$\ell_2$ correlates with the degree to which we expect the gradients of the network to exhibit multiple scales (a minimal sketch of this decoupling appears after this list).
arXiv Detail & Related papers (2022-01-31T21:00:55Z) - Why Does Multi-Epoch Training Help? [62.946840431501855]
Empirically, it has been observed that SGD taking more than one pass over the training data (multi-pass SGD) achieves significantly better excess risk than SGD taking only one pass over the training data (one-pass SGD).
In this paper, we provide theoretical evidence explaining why multiple passes over the training data can help improve performance under certain circumstances.
arXiv Detail & Related papers (2021-05-13T00:52:25Z) - Correcting Momentum with Second-order Information [50.992629498861724]
We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point using an optimal number of stochastic gradient and Hessian-vector product computations.
We validate our results on a variety of large-scale deep learning benchmarks and architectures.
arXiv Detail & Related papers (2021-03-04T19:01:20Z) - Adam$^+$: A Stochastic Method with Adaptive Variance Reduction [56.051001950733315]
Adam is a widely used optimization method for deep learning applications.
We propose a new method named Adam$^+$ (pronounced as Adam-plus).
Our empirical studies on various deep learning tasks, including image classification, language modeling, and automatic speech recognition, demonstrate that Adam$^+$ significantly outperforms Adam.
arXiv Detail & Related papers (2020-11-24T09:28:53Z) - AdaSGD: Bridging the gap between SGD and Adam [14.886598905466604]
We identify potential differences in performance between SGD and Adam.
We demonstrate how AdaSGD combines the benefits of both SGD and Adam.
arXiv Detail & Related papers (2020-06-30T05:44:19Z) - AdamP: Slowing Down the Slowdown for Momentum Optimizers on
Scale-invariant Weights [53.8489656709356]
Normalization techniques are a boon for modern deep learning.
It is often overlooked, however, that the additional introduction of momentum results in a rapid reduction in effective step sizes for scale-invariant weights.
In this paper, we verify that the widely-adopted combination of the two ingredients leads to the premature decay of effective step sizes and sub-optimal model performance.
arXiv Detail & Related papers (2020-06-15T08:35:15Z)
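As referenced in the AdamW entry above, here is a minimal sketch of the decoupling it studies: Adam-$\ell_2$ folds the squared $\ell_2$ penalty's gradient into Adam's adaptive update, while AdamW applies weight decay directly to the weights. This is illustrative NumPy pseudocode under the standard definitions of the two updates, not code from the paper; the function names and hyperparameter values are assumptions.

import numpy as np

def adam_l2_step(w, grad, m, v, t, lr=1e-3, wd=1e-2,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam-l2: the regularizer enters through the gradient."""
    g = grad + wd * w                  # l2 penalty mixed into the gradient
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # decay is adaptively rescaled
    return w, m, v

def adamw_step(w, grad, m, v, t, lr=1e-3, wd=1e-2,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """AdamW: weight decay is decoupled from the adaptive update."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)  # decay bypasses 1/sqrt(v)
    return w, m, v

Because the decay term in adamw_step bypasses the 1 / sqrt(v_hat) rescaling, multiplying all gradients by a constant leaves the AdamW update essentially unchanged (up to eps), whereas it changes the Adam-$\ell_2$ update; this is the scale-freeness property the paper connects to AdamW's advantage.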