Adaptive scaling of the learning rate by second order automatic differentiation
- URL: http://arxiv.org/abs/2210.14520v1
- Date: Wed, 26 Oct 2022 07:14:56 GMT
- Title: Adaptive scaling of the learning rate by second order automatic differentiation
- Authors: Frédéric de Gournay (IMT, INSA Toulouse), Alban Gossard (IMT, UT3)
- Abstract summary: We propose to rescale the learning rate using a new technique of automatic differentiation.
The rescaling is adaptive: it depends on the data and on the direction of descent.
The numerical experiments highlight the different exploration/convergence regimes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of the optimization of Deep Neural Networks, we propose to
rescale the learning rate using a new technique of automatic differentiation.
This technique relies on the computation of the curvature, a second-order
quantity whose computational complexity lies between that of the gradient and
that of the Hessian-vector product. If (1C, 1M) denotes the computational time
and memory footprint of the gradient method, the new technique increases the
overall cost to either (1.5C, 2M) or (2C, 1M). This rescaling has a natural
interpretation: it allows the practitioner to choose between exploration of the
parameter set and convergence of the algorithm. The rescaling is adaptive: it
depends on the data and on the direction of descent. The numerical experiments
highlight the different exploration/convergence regimes.
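The abstract stops short of spelling out how the curvature is computed, so the following Python sketch only illustrates the general idea: estimate the curvature of the loss along the descent direction with a standard Hessian-vector product and use it to rescale the step length. The function name, the fallback rule for non-positive curvature, and the use of a full Hessian-vector product (which is more expensive than the (1.5C, 2M)/(2C, 1M) scheme described above) are assumptions of this sketch, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): rescale the learning rate by
# the curvature of the loss along the gradient direction, g^T H g, obtained
# here with a standard Hessian-vector product via automatic differentiation.
import torch

def curvature_scaled_step(params, loss_fn, base_lr=1.0, eps=1e-12):
    params = [p for p in params if p.requires_grad]
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Hessian-vector product H g through a second differentiation pass.
    hvps = torch.autograd.grad(grads, params,
                               grad_outputs=[g.detach() for g in grads])
    curvature = sum((g.detach() * h).sum() for g, h in zip(grads, hvps))  # g^T H g
    grad_norm2 = sum((g.detach() ** 2).sum() for g in grads)              # ||g||^2
    if curvature.item() > eps:
        # Newton-like step length along -g; base_lr trades exploration vs. convergence.
        lr = base_lr * (grad_norm2 / curvature).item()
    else:
        lr = base_lr  # non-positive curvature: fall back to the nominal step (assumption)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr * g
    return loss.item(), lr

# Toy usage: a single step on a quadratic loss reaches its minimizer when base_lr=1.
w = torch.randn(5, requires_grad=True)
loss_value, lr_used = curvature_scaled_step([w], lambda: (w ** 2).sum())
```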
Related papers
- Approximating Metric Magnitude of Point Sets [4.522729058300309]
Metric magnitude is a measure of the "size" of point clouds with many desirable geometric properties.
It has been adapted to various mathematical contexts and recent work suggests that it can enhance machine learning and optimization algorithms.
In this paper, we study the magnitude computation problem and show efficient ways of approximating it. We show that it can be cast as a convex optimization problem, but not as a submodular one.
The paper describes two new algorithms: an iterative approximation algorithm that converges fast and is accurate, and a subset selection method that makes the computation even faster.
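For context, when the similarity matrix is invertible the magnitude of a finite point cloud has a simple closed form, which the short sketch below evaluates directly; the fast approximation algorithms proposed in the paper are not reproduced here, and the Gaussian test cloud is only an illustrative input.

```python
# Brute-force magnitude of a finite point cloud: with Z_ij = exp(-t * d(x_i, x_j)),
# the magnitude at scale t is the sum of the weighting w solving Z w = 1
# (equivalently, the sum of the entries of Z^{-1}).
import numpy as np
from scipy.spatial.distance import cdist

def magnitude(points, scale=1.0):
    D = cdist(points, points)                       # pairwise Euclidean distances
    Z = np.exp(-scale * D)                          # similarity matrix
    w = np.linalg.solve(Z, np.ones(len(points)))    # weighting vector
    return w.sum()

pts = np.random.default_rng(0).normal(size=(50, 3))
print(magnitude(pts, scale=1.0))   # approaches 50 (the number of points) as scale grows
```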
arXiv Detail & Related papers (2024-09-06T17:15:28Z)
- Information Geometry and Beta Link for Optimizing Sparse Variational Student-t Processes [6.37512592611305]
Sparse variational Student-t processes have been proposed to enhance computational efficiency and flexibility for real-world datasets using gradient descent.
Traditional gradient descent methods like Adam may not fully exploit the parameter space geometry, potentially leading to slower convergence and suboptimal performance.
We adopt natural gradient methods from information geometry for variational parameter optimization of Student-t Processes.
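The summary does not state the update rule, but the natural-gradient step it refers to has the standard form θ ← θ − η F(θ)⁻¹ ∇L(θ), with F the Fisher information matrix. The sketch below implements that generic rule and assumes the caller supplies the gradient and a (possibly approximate) Fisher matrix; how these quantities are formed for sparse variational Student-t processes is specific to the paper and not reproduced here.

```python
# Generic natural-gradient step: precondition the ordinary gradient by the
# inverse Fisher information matrix (with damping to keep the solve well-posed).
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-6):
    F = fisher + damping * np.eye(len(theta))
    nat_grad = np.linalg.solve(F, grad)        # F^{-1} grad, without forming F^{-1}
    return theta - lr * nat_grad

# Toy usage with a diagonal Fisher approximation (illustrative values only).
theta = np.zeros(4)
grad = np.array([0.4, -1.2, 0.3, 0.7])
fisher = np.diag([2.0, 0.5, 1.0, 4.0])
theta = natural_gradient_step(theta, grad, fisher)
```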
arXiv Detail & Related papers (2024-08-13T07:53:39Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Forward Gradient-Based Frank-Wolfe Optimization for Memory Efficient Deep Neural Network Training [0.0]
This paper focuses on analyzing the performance of the well-known Frank-Wolfe algorithm.
We show the proposed algorithm does converge to the optimal solution with a sub-linear rate of convergence.
In contrast, the standard Frank-Wolfe algorithm, when provided with access to the Projected Forward Gradient, fails to converge to the optimal solution.
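As a point of reference for the two ingredients named above, the sketch below combines a forward-gradient estimate, (∇f(x)·v) v with a random direction v (an unbiased estimate of ∇f(x)), with the classical Frank-Wolfe update over an ℓ1-ball. The toy least-squares objective, the ℓ1 constraint, and the step-size rule are assumptions for illustration; the paper's projected variant and its convergence analysis are not reproduced.

```python
# Frank-Wolfe with a forward-gradient estimate on a toy least-squares problem
# over an l1-ball.  Forward-mode AD returns the directional derivative grad.v
# without forming the full gradient; the toy problem lets us compute it directly.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
b = rng.normal(size=20)
radius = 1.0                                   # constraint ||x||_1 <= radius

def directional_derivative(x, v):
    return (A.T @ (A @ x - b)) @ v             # grad f(x) . v, f(x) = 0.5*||Ax - b||^2

x = np.zeros(10)
for t in range(200):
    v = rng.normal(size=10)
    g_tilde = directional_derivative(x, v) * v     # forward gradient, E[g_tilde] = grad f(x)
    i = int(np.argmax(np.abs(g_tilde)))            # linear minimization oracle on the l1-ball
    s = np.zeros(10)
    s[i] = -radius * np.sign(g_tilde[i])
    gamma = 2.0 / (t + 2.0)                        # classical Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s
```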
arXiv Detail & Related papers (2024-03-19T07:25:36Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- ELRA: Exponential learning rate adaption gradient descent optimization method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyperparameter-free) gradient-based adaptation method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly in $n$.
arXiv Detail & Related papers (2023-09-12T14:36:13Z)
- Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information [37.70729542263343]
We present a novel adaptive optimization algorithm for large-scale machine learning problems.
Our method dynamically adapts the direction and step-size.
Our methodology does not require tedious learning-rate tuning.
arXiv Detail & Related papers (2021-09-11T06:39:50Z)
- Learning Linearized Assignment Flows for Image Labeling [70.540936204654]
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling, based on an exact formula for the parameter gradient.
We show how to efficiently evaluate this formula using a Krylov subspace and a low-rank approximation.
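The Krylov-subspace idea mentioned above is generic: the action of a matrix function on a vector can be approximated from a small Arnoldi factorization using only matrix-vector products. The sketch below does this for exp(A)v as a stand-in example; the specific linearized-assignment-flow operators of the paper are not reproduced.

```python
# Arnoldi/Krylov approximation of exp(A) v:  exp(A) v ~= ||v|| * V_m exp(H_m) e_1,
# where A V_m ~= V_m H_m is a small Arnoldi factorization built from matvecs only.
import numpy as np
from scipy.linalg import expm

def krylov_expm_v(matvec, v, m=20):
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = matvec(V[:, j])
        for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown: exact invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# Usage: compare against the dense computation on a small random matrix.
A = np.random.default_rng(1).normal(size=(100, 100)) / 10
v = np.ones(100)
print(np.linalg.norm(krylov_expm_v(lambda x: A @ x, v, m=30) - expm(A) @ v))
```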
arXiv Detail & Related papers (2021-08-02T13:38:09Z)
- Randomized Automatic Differentiation [22.95414996614006]
We develop a general framework and approach for randomized automatic differentiation (RAD).
RAD allows unbiased gradient estimates to be computed with reduced memory in exchange for increased variance.
We show that RAD converges in fewer iterations than using a small batch size for feedforward networks, and in a similar number for recurrent networks.
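To illustrate the memory/variance trade-off described above, the sketch below randomly sparsifies the activations saved for the backward pass of a single linear layer and rescales the kept entries so that the weight gradient remains unbiased. This is a simplified stand-in for the RAD framework; realizing actual memory savings would additionally require storing the sparsified tensor in a compressed format.

```python
# RAD-style trick on one linear layer: save a randomly sparsified, unbiasedly
# rescaled copy of the input for the backward pass.  Since E[x_sparse] = x,
# the weight gradient is unbiased but carries extra variance.
import torch

class RandMemLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, keep_prob):
        mask = torch.rand_like(x) < keep_prob
        x_sparse = torch.where(mask, x / keep_prob, torch.zeros_like(x))
        ctx.save_for_backward(x_sparse, weight)   # stored compressed in a real implementation
        return x @ weight

    @staticmethod
    def backward(ctx, grad_out):
        x_sparse, weight = ctx.saved_tensors
        grad_x = grad_out @ weight.t()            # exact: the weight is kept in full
        grad_w = x_sparse.t() @ grad_out          # unbiased estimate of x^T grad_out
        return grad_x, grad_w, None

# Usage: unbiased but noisier weight gradients with half of the activations kept.
x = torch.randn(32, 64)
W = torch.randn(64, 16, requires_grad=True)
RandMemLinear.apply(x, W, 0.5).sum().backward()
```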
arXiv Detail & Related papers (2020-07-20T19:03:44Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve for one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to problems in which one variable is subject to a sparsity constraint.
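The summary does not detail CoGD's coupling rule, so the sketch below only sets up the kind of problem it targets: a bilinear factorization in which one variable carries a sparsity constraint, solved here with plain synchronous gradient steps plus a soft-threshold (proximal) step on the sparse factor. The toy data, step sizes, and proximal update are assumptions for illustration and serve as a baseline, not as the CoGD update.

```python
# Toy bilinear problem min_{x,y} 0.5*||x y^T - A||_F^2 + lam*||x||_1, handled with
# synchronous gradient steps on both factors and a proximal (soft-threshold) step
# on the sparse factor x.  A generic baseline, not the CoGD coupling scheme.
import numpy as np

rng = np.random.default_rng(0)
A = np.outer(rng.normal(size=30), rng.normal(size=20)) + 0.01 * rng.normal(size=(30, 20))
x, y = rng.normal(size=30), rng.normal(size=20)
lr, lam = 1e-2, 1e-3

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

for _ in range(500):
    residual = np.outer(x, y) - A            # x y^T - A
    grad_x = residual @ y                    # gradient of the smooth term w.r.t. x
    grad_y = residual.T @ x                  # gradient of the smooth term w.r.t. y
    x, y = x - lr * grad_x, y - lr * grad_y  # synchronous (simultaneous) updates
    x = soft_threshold(x, lr * lam)          # proximal step for the l1 penalty on x
```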
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper, we analyze a variant of an optimistic adaptive gradient algorithm for nonconvex min-max problems.
Our experiments show that the empirical advantage of adaptive over non-adaptive gradient algorithms in GAN training can indeed be observed.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)