FOSI: Hybrid First and Second Order Optimization
- URL: http://arxiv.org/abs/2302.08484v4
- Date: Thu, 7 Mar 2024 15:37:44 GMT
- Title: FOSI: Hybrid First and Second Order Optimization
- Authors: Hadar Sivan, Moshe Gabel, Assaf Schuster
- Abstract summary: We present FOSI, a novel meta-algorithm that improves the performance of any base first-order optimizer by efficiently incorporating second-order information during the optimization process.
Our empirical evaluation demonstrates that FOSI improves the convergence rate and optimization time of first-order methods such as Heavy-Ball and Adam, and outperforms second-order methods (K-FAC and L-BFGS).
- Score: 11.447526245792154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Popular machine learning approaches forgo second-order information due to the
difficulty of computing curvature in high dimensions. We present FOSI, a novel
meta-algorithm that improves the performance of any base first-order optimizer
by efficiently incorporating second-order information during the optimization
process. In each iteration, FOSI implicitly splits the function into two
quadratic functions defined on orthogonal subspaces, then uses a second-order
method to minimize the first, and the base optimizer to minimize the other. We
formally analyze FOSI's convergence and the conditions under which it improves
a base optimizer. Our empirical evaluation demonstrates that FOSI improves the
convergence rate and optimization time of first-order methods such as
Heavy-Ball and Adam, and outperforms second-order methods (K-FAC and L-BFGS).
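The splitting described in the abstract can be illustrated with a small sketch. The following is a rough, assumption-laden illustration of the idea rather than FOSI's actual implementation (which incorporates curvature information efficiently without forming the full Hessian and wraps an arbitrary base optimizer): on a toy quadratic, each step takes a Newton-like step on the subspace spanned by the top-k Hessian eigenvectors and hands the orthogonal remainder of the gradient to a heavy-ball-style base optimizer. The exact eigendecomposition, subspace size, and hyperparameters below are illustrative stand-ins.

```python
# Minimal sketch of the hybrid first/second-order idea (not the authors' code).
# The exact Hessian eigendecomposition stands in for an efficient eigenpair
# estimator; heavy-ball momentum stands in for "any base first-order optimizer".
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 4                                 # problem dimension, subspace size (assumed)
A = rng.standard_normal((n, n))
H = A @ A.T + 0.1 * np.eye(n)                # SPD Hessian of the toy quadratic f(x) = 0.5 x^T H x
grad = lambda x: H @ x

eigvals, eigvecs = np.linalg.eigh(H)
lam, V = eigvals[-k:], eigvecs[:, -k:]       # top-k eigenpairs (stand-in for an efficient estimator)

x = rng.standard_normal(n)
momentum = np.zeros(n)
lr, beta = 0.01, 0.9                         # base-optimizer hyperparameters (assumed)

for step in range(201):
    g = grad(x)
    g_sub = V @ (V.T @ g)                    # gradient component in the top-k eigen-subspace
    g_rest = g - g_sub                       # component in the orthogonal complement
    d_newton = V @ ((V.T @ g) / lam)         # Newton step restricted to the subspace
    momentum = beta * momentum + g_rest      # heavy-ball step on the remaining component
    x = x - d_newton - lr * momentum
    if step % 50 == 0:
        print(step, 0.5 * x @ H @ x)
```

In this toy setting the subspace step removes the stiffest directions in one shot, so the base optimizer only has to handle the better-conditioned remainder, which is the intuition behind the improvements reported in the paper.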
Related papers
- Fast Two-Time-Scale Stochastic Gradient Method with Applications in Reinforcement Learning [5.325297567945828]
We propose a new method for two-time-scale optimization that achieves significantly faster convergence than prior art.
We characterize the proposed algorithm under various conditions and show how it specializes to online sample-based methods.
arXiv Detail & Related papers (2024-05-15T19:03:08Z) - SGD with Partial Hessian for Deep Neural Networks Optimization [18.78728272603732]
We propose a compound optimizer that combines a second-order method, using a precise partial Hessian matrix to update channel-wise parameters, with first-order stochastic gradient descent (SGD) to update the remaining parameters.
Compared with first-order optimizers, it adopts a certain amount of information from the Hessian matrix to assist optimization; compared with existing second-order optimizers, it keeps the good generalization performance of first-order methods.
arXiv Detail & Related papers (2024-03-05T06:10:21Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence [31.59453616577858]
We show that a distributed system can not only withstand noisier zeroth-order agents but can even benefit from integrating such agents into the optimization process.
Our results hold for both convex and non-convex objectives, showing that agents with access only to zeroth-order information can still contribute to joint optimization tasks.
arXiv Detail & Related papers (2022-10-14T10:54:11Z) - Adaptive First- and Second-Order Algorithms for Large-Scale Machine
Learning [3.0204520109309843]
We consider first- and second-order techniques to address continuous optimization problems in machine learning.
In the first-order case, we propose a framework for transitioning from semi-deterministic to stochastic quadratic regularization methods.
In the second-order case, we propose a novel second-order algorithm with adaptive sampling and adaptive step size.
arXiv Detail & Related papers (2021-11-29T18:10:00Z) - On Second-order Optimization Methods for Federated Learning [59.787198516188425]
We evaluate the performance of several second-order distributed methods with local steps in the federated learning setting.
We propose a novel variant that uses second-order local information for updates and a global line search to counteract the resulting local specificity.
arXiv Detail & Related papers (2021-09-06T12:04:08Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a powerful tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer, stocBiO, built around a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Primal-Dual Sequential Subspace Optimization for Saddle-point Problems [3.9582154141918964]
We introduce a new sequential subspace optimization method for large-scale saddle-point problems.
It solves auxiliary saddle-point problems in low-dimensional subspaces, spanned by directions derived from first-order information over the primal and dual variables.
Experimental results demonstrate significantly better convergence relative to popular first-order methods.
arXiv Detail & Related papers (2020-08-20T18:19:19Z) - A Primer on Zeroth-Order Optimization in Signal Processing and Machine
Learning [95.85269649177336]
ZO optimization iteratively performs three major steps: gradient estimation, descent direction computation, and solution update (a minimal sketch of this loop appears after this list).
We demonstrate promising applications of ZO optimization, such as evaluating and generating explanations from black-box deep learning models, and efficient online sensor management.
arXiv Detail & Related papers (2020-06-11T06:50:35Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces the algorithm goes through several low function value regions before reaching the optimum of the function.
One approach to subside this cold start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
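As referenced in the zeroth-order primer entry above, the three-step ZO loop can be sketched in a few lines. This is a generic two-point random-direction gradient estimator with a plain ZO gradient-descent update, offered only as an illustration of the loop structure; the smoothing radius, number of directions, and step size are assumed values rather than ones taken from the surveyed methods.

```python
# Generic zeroth-order (ZO) optimization loop: (1) gradient estimation from
# function values only, (2) descent direction, (3) solution update.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum((x - 1.0) ** 2)          # black-box objective (toy example)

def zo_gradient(f, x, mu=1e-3, num_dirs=10):
    """Two-point random-direction estimate of the gradient of f at x."""
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

x = rng.standard_normal(5)
lr = 0.05                                     # assumed step size
for step in range(301):
    g_hat = zo_gradient(f, x)                 # step 1: gradient estimation
    direction = -g_hat                        # step 2: descent direction
    x = x + lr * direction                    # step 3: solution update
    if step % 100 == 0:
        print(step, f(x))
```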
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.