An Adaptive Gradient Method with Energy and Momentum
- URL: http://arxiv.org/abs/2203.12191v1
- Date: Wed, 23 Mar 2022 04:48:38 GMT
- Title: An Adaptive Gradient Method with Energy and Momentum
- Authors: Hailiang Liu and Xuping Tian
- Abstract summary: We introduce a novel algorithm for gradient-based optimization of stochastic objective functions.
The method is simple to implement, computationally efficient, and well suited for large-scale machine learning problems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel algorithm for gradient-based optimization of stochastic
objective functions. The method may be seen as a variant of SGD with momentum
equipped with an adaptive learning rate automatically adjusted by an 'energy'
variable. The method is simple to implement, computationally efficient, and
well suited for large-scale machine learning problems. The method exhibits
unconditional energy stability for any size of the base learning rate. We
provide a regret bound on the convergence rate under the online convex
optimization framework. We also establish the energy-dependent convergence rate
of the algorithm to a stationary point in the stochastic non-convex setting. In
addition, a sufficient condition is provided to guarantee a positive lower
threshold for the energy variable. Our experiments demonstrate that the
algorithm converges fast while generalizing better than or as well as SGD with
momentum in training deep neural networks, and also compares favorably to Adam.
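
As a rough illustration of the update described in the abstract, the following is a minimal NumPy sketch of one plausible realization: exponential-moving-average momentum applied to a transformed gradient, with a per-coordinate energy variable r that is updated implicitly so it remains positive and non-increasing for any base learning rate. The specific update rules, the shift constant c, and the hyperparameter names (eta, beta) are assumptions for illustration only; the paper's exact algorithm may differ.

```python
import numpy as np

def agem_step(theta, grad, f_val, r, v, eta=0.1, beta=0.9, c=1.0):
    """One energy-adaptive SGD-with-momentum step (illustrative sketch,
    not necessarily the paper's exact algorithm).

    theta : parameter vector
    grad  : (stochastic) gradient of f at theta
    f_val : (stochastic) objective value at theta, needed for the energy
    r     : per-coordinate energy variable, initialized to sqrt(f_0 + c)
    v     : momentum buffer, initialized to zeros
    """
    # Gradient of sqrt(f + c) by the chain rule (assumes f + c > 0).
    m = grad / (2.0 * np.sqrt(f_val + c))
    # Exponential-moving-average momentum on the transformed gradient.
    v = beta * v + (1.0 - beta) * m
    # Implicit energy update: dividing by a factor >= 1 keeps r positive
    # and non-increasing for any eta > 0, which is how an AEGD-style
    # scheme obtains unconditional energy stability.
    r = r / (1.0 + 2.0 * eta * v * v)
    # Per-coordinate step scaled by the energy variable.
    theta = theta - 2.0 * eta * r * v
    return theta, r, v

# Usage sketch: minimize f(theta) = 0.5 * ||theta||^2.
f = lambda th: 0.5 * float(th @ th)
g = lambda th: th
theta = np.ones(3)
r = np.full_like(theta, np.sqrt(f(theta) + 1.0))  # energy init: sqrt(f_0 + c)
v = np.zeros_like(theta)
for _ in range(300):
    theta, r, v = agem_step(theta, g(theta), f(theta), r, v)
print(theta)  # should approach the minimizer at the origin
```

Note the design point this sketch tries to capture: because the energy r is updated by division rather than subtraction, it cannot change sign, so the effective step size 2 * eta * r stays well defined no matter how large the base learning rate eta is chosen.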