Closing the Gap Between the Upper Bound and the Lower Bound of Adam's
Iteration Complexity
- URL: http://arxiv.org/abs/2310.17998v1
- Date: Fri, 27 Oct 2023 09:16:58 GMT
- Title: Closing the Gap Between the Upper Bound and the Lower Bound of Adam's
Iteration Complexity
- Authors: Bohan Wang, Jingwen Fu, Huishuai Zhang, Nanning Zheng, Wei Chen
- Abstract summary: We derive a new convergence guarantee of Adam, with only an $L$-smooth condition and a bounded noise variance assumption.
Our proof utilizes novel techniques to handle the entanglement between momentum and adaptive learning rate.
- Score: 51.96093077151991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Arjevani et al. [1] established a lower bound of iteration
complexity for the first-order optimization under an $L$-smooth condition and a
bounded noise variance assumption. However, a thorough review of existing
literature on Adam's convergence reveals a noticeable gap: none of them meet
the above lower bound. In this paper, we close the gap by deriving a new
convergence guarantee of Adam, with only an $L$-smooth condition and a bounded
noise variance assumption. Our results remain valid across a broad spectrum of
hyperparameters. Especially with properly chosen hyperparameters, we derive an
upper bound of the iteration complexity of Adam and show that it meets the
lower bound for first-order optimizers. To the best of our knowledge, this is
the first work to establish such a tight upper bound for Adam's convergence. Our
proof utilizes novel techniques to handle the entanglement between momentum and
adaptive learning rate and to convert the first-order term in the Descent Lemma
to the gradient norm, which may be of independent interest.
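For context, the two objects the abstract refers to can be stated in their standard textbook forms (generic notation, not this paper's specific formulation): Adam's update rule, and the $L$-smooth descent lemma whose first-order term the proof converts into a gradient-norm bound.

```latex
% Adam's update (standard form; beta_1, beta_2 are the first- and
% second-moment decay rates, eta_t the learning rate, g_t the stochastic
% gradient, and the division/square root are coordinate-wise):
\begin{align*}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^{\odot 2}, \\
x_{t+1} &= x_t - \eta_t \, \frac{m_t}{\sqrt{v_t} + \varepsilon}.
\end{align*}
% L-smooth descent lemma; the inner-product (first-order) term below couples
% the momentum m_t with the adaptive step 1/(sqrt(v_t)+eps) -- the
% "entanglement" the abstract mentions:
\[
f(x_{t+1}) \le f(x_t)
  + \big\langle \nabla f(x_t),\, x_{t+1} - x_t \big\rangle
  + \frac{L}{2}\, \lVert x_{t+1} - x_t \rVert^2 .
\]
```

Bounding the inner-product term by a quantity involving $\lVert \nabla f(x_t) \rVert$ is what lets a telescoping sum over the descent lemma yield an iteration-complexity bound.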
Related papers
- Convergence Guarantees for RMSProp and Adam in Generalized-smooth Non-convex Optimization with Affine Noise Variance [23.112775335244258]
We first analyze RMSProp, which is a special case of Adam with adaptive learning rates but without first-order momentum.
We develop a new upper bound on the first-order term in the descent lemma, which is also a function of the gradient norm.
Our results for both RMSProp and Adam match the lower bound on complexity established by Arjevani et al.
arXiv Detail & Related papers (2024-04-01T19:17:45Z) - On the Convergence of Adam under Non-uniform Smoothness: Separability from SGDM and Beyond [35.65852208995095]
We demonstrate that Adam achieves a faster convergence compared to SGDM under the condition of non-uniformly bounded smoothness.
Our findings reveal that: (1) in deterministic environments, Adam can attain the known lower bound on the convergence rate of deterministic first-order optimizers, whereas the convergence rate of Gradient Descent with Momentum (GDM) has higher-order dependence on the initial function value.
arXiv Detail & Related papers (2024-03-22T11:57:51Z) - High Probability Convergence of Adam Under Unbounded Gradients and
Affine Variance Noise [4.9495085874952895]
We show that Adam converges to a stationary point with high probability at a rate of $\mathcal{O}\left(\mathrm{poly}(\log T)/\sqrt{T}\right)$ under coordinate-wise "affine" noise variance.
It is also revealed that Adam's bounds, within an order of $\mathcal{O}\left(\mathrm{poly}(\log T)\right)$, are adaptive to the noise level.
arXiv Detail & Related papers (2023-11-03T15:55:53Z) - Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters
and Non-ergodic Case [0.0]
This paper focuses on exploring the convergence of vanilla Adam and the challenges of non-ergodic convergence.
These findings build a solid theoretical foundation for Adam to solve non-ergodic optimization problems.
arXiv Detail & Related papers (2023-07-20T12:02:17Z) - Convergence of Adam Under Relaxed Assumptions [72.24779199744954]
We show that Adam converges to $\epsilon$-stationary points with $O(\epsilon^{-4})$ gradient complexity under far more realistic conditions.
We also propose a variance-reduced version of Adam with an accelerated gradient complexity of $O(\epsilon^{-3})$.
arXiv Detail & Related papers (2023-04-27T06:27:37Z) - A Novel Convergence Analysis for Algorithms of the Adam Family [105.22760323075008]
We present a generic proof of convergence for a family of Adam-style methods including Adam, AMSGrad, Adabound, etc.
Our analysis is so simple and generic that it can be leveraged to establish the convergence for solving a broader family of non-convex compositional optimization problems.
arXiv Detail & Related papers (2021-12-07T02:47:58Z) - Sharp Bounds for Federated Averaging (Local SGD) and Continuous
Perspective [49.17352150219212]
Federated Averaging (FedAvg), also known as Local SGD, is one of the most popular algorithms in Federated Learning (FL).
We show how to analyze this quantity from the Stochastic Differential Equation (SDE) perspective.
arXiv Detail & Related papers (2021-11-05T22:16:11Z) - Adam$^+$: A Stochastic Method with Adaptive Variance Reduction [56.051001950733315]
Adam is a widely used optimization method for deep learning applications.
We propose a new method named Adam$^+$ (pronounced as Adam-plus).
Our empirical studies on various deep learning tasks, including image classification, language modeling, and automatic speech recognition, demonstrate that Adam$+$ significantly outperforms Adam.
arXiv Detail & Related papers (2020-11-24T09:28:53Z) - A Simple Convergence Proof of Adam and Adagrad [74.24716715922759]
We give a simple proof of convergence for both Adam and Adagrad, with a rate of $O(d\ln(N)/\sqrt{N})$.
Adam converges at the same $O(d\ln(N)/\sqrt{N})$ rate when used with the default parameters.
arXiv Detail & Related papers (2020-03-05T01:56:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.