Curvature-Informed SGD via General Purpose Lie-Group Preconditioners
- URL: http://arxiv.org/abs/2402.04553v1
- Date: Wed, 7 Feb 2024 03:18:00 GMT
- Title: Curvature-Informed SGD via General Purpose Lie-Group Preconditioners
- Authors: Omead Pooladzandi and Xi-Lin Li
- Abstract summary: We present a novel approach to accelerate stochastic gradient descent (SGD) by utilizing curvature information.
Our approach involves two preconditioners: a matrix-free preconditioner and a low-rank approximation preconditioner.
We demonstrate that Preconditioned SGD (PSGD) outperforms SoTA on Vision, NLP, and RL tasks.
- Score: 6.760212042305871
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a novel approach to accelerate stochastic gradient descent (SGD)
by utilizing curvature information obtained from Hessian-vector products or
finite differences of parameters and gradients, similar to the BFGS algorithm.
Our approach involves two preconditioners: a matrix-free preconditioner and a
low-rank approximation preconditioner. We update both preconditioners online
using a criterion that is robust to stochastic gradient noise and does not
require line search or damping. To preserve the corresponding symmetry or
invariance, our preconditioners are constrained to certain connected Lie
groups. The Lie group's equivariance property simplifies the preconditioner
fitting process, while its invariance property eliminates the need for damping,
which is commonly required in second-order optimizers. As a result, the
learning rate for parameter updating and the step size for preconditioner
fitting are naturally normalized, and their default values work well in most
scenarios. Our proposed approach offers a promising direction for improving the
convergence of SGD with low computational overhead. We demonstrate that
Preconditioned SGD (PSGD) outperforms SoTA on Vision, NLP, and RL tasks across
multiple modern deep-learning architectures. We have provided code for
reproducing the toy and large-scale experiments in this paper.
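As a rough illustration of the matrix-free (diagonal) preconditioner described in the abstract, the sketch below fits a diagonal preconditioner P = diag(q**2) from a probe vector v and curvature information h (a Hessian-vector product or a finite difference of gradients), using a multiplicative update on the group of positive diagonal matrices so that no damping or line search is needed. This is not the authors' released code; the PyTorch formulation for a single flattened parameter vector, the function names, and the default step sizes are assumptions made for illustration.
```python
# Minimal sketch of a diagonal ("matrix-free") Lie-group preconditioner,
# written for a single flattened parameter vector. NOT the authors' released
# implementation; names and default step sizes are illustrative assumptions.
import torch

def update_diag_precond(q, v, h, step=0.01, eps=1e-12):
    """One online fitting step for a diagonal preconditioner P = diag(q**2), q > 0.

    v: random probe vector (a small parameter perturbation)
    h: Hessian-vector product H @ v, or a finite difference of gradients along v
    Descends the criterion E[h^T P h + v^T P^{-1} v] with a multiplicative
    (Lie-group) update, so q stays positive with no damping or line search.
    """
    rel_grad = q**2 * h**2 - v**2 / q**2   # relative (group) gradient of the criterion
    # Normalizing by the largest entry keeps the multiplicative factor near 1,
    # which is why a default step size tends to work across problems.
    return q * (1.0 - step * rel_grad / (rel_grad.abs().max() + eps))

def precondition(q, g):
    """Preconditioned gradient P @ g with P = diag(q**2)."""
    return q**2 * g
```
In a training loop one would draw a probe v (e.g. a small random perturbation of the flattened parameters), obtain h either as a Hessian-vector product or as the difference of gradients at the perturbed and unperturbed parameters, call update_diag_precond, and then take the step theta <- theta - lr * precondition(q, g). Per the abstract, the low-rank approximation preconditioner is fitted with the same criterion but is constrained to a different connected Lie group.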
Related papers
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates normalized SGD with clipping (NSGDC) and its variance-reduction variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
arXiv Detail & Related papers (2024-10-21T22:40:42Z)
- Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions [18.47705532817026]
We show that AdaGrad outperforms SGD by a factor of $d$ under certain conditions.
Motivated by this, we introduce assumptions on the smoothness structure of the objective and the gradient variance.
arXiv Detail & Related papers (2024-06-07T02:55:57Z)
- Transformers as Support Vector Machines [54.642793677472724]
We establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem.
We characterize the implicit bias of 1-layer transformers optimized with gradient descent.
We believe these findings inspire the interpretation of transformers as a hierarchy of SVMs that separates and selects optimal tokens.
arXiv Detail & Related papers (2023-08-31T17:57:50Z)
- Black Box Lie Group Preconditioners for SGD [13.30021794793606]
A matrix-free and a low-rank approximation preconditioner are proposed to accelerate the convergence of stochastic gradient descent.
The learning rate for parameter updating and step size for preconditioner fitting are naturally normalized, and their default values work well in most situations.
arXiv Detail & Related papers (2022-11-08T18:07:08Z)
- The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance [46.15915820243487]
We show that AdaGrad-Norm exhibits an order-optimal convergence rate of $\mathcal{O}(\mathrm{polylog}(T)/\sqrt{T})$.
arXiv Detail & Related papers (2022-02-11T17:37:54Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), in which the joint distribution is parameterized in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between the data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- An adaptive Hessian approximated stochastic gradient MCMC method [12.93317525451798]
We present an adaptive Hessian-approximated stochastic gradient MCMC method to incorporate local geometric information while sampling from the posterior.
We adopt a magnitude-based weight pruning method to enforce the sparsity of the network.
arXiv Detail & Related papers (2020-10-03T16:22:15Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step size is required for exact convergence against the fact that a constant step size learns faster, though only up to an error.
Rather than fixing the minibatch size and the step size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Bayesian Sparse learning with preconditioned stochastic gradient MCMC and its applications [5.660384137948734]
We show that the proposed algorithm converges to the correct distribution with a controllable bias under mild conditions.
arXiv Detail & Related papers (2020-06-29T20:57:20Z)
- MaxVA: Fast Adaptation of Step Sizes by Maximizing Observed Variance of Gradients [112.00379151834242]
We propose an adaptive learning rate principle in which the running mean of squared gradients in Adam is replaced by a weighted mean, with weights chosen to maximize the estimated variance of each coordinate.
This results in faster adaptation, which leads to more desirable empirical convergence behaviors.
arXiv Detail & Related papers (2020-06-21T21:47:43Z)
- When Does Preconditioning Help or Hurt Generalization? [74.25170084614098]
We show how the implicit bias of first- and second-order methods affects the comparison of their generalization properties.
We discuss several approaches to manage the bias-variance tradeoff, and the potential benefit of interpolating between GD and NGD.
arXiv Detail & Related papers (2020-06-18T17:57:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.