Efficiently Escaping Saddle Points in Bilevel Optimization
- URL: http://arxiv.org/abs/2202.03684v2
- Date: Wed, 10 May 2023 05:03:53 GMT
- Title: Efficiently Escaping Saddle Points in Bilevel Optimization
- Authors: Minhui Huang, Xuxing Chen, Kaiyi Ji, Shiqian Ma and Lifeng Lai
- Abstract summary: Bilevel optimization is one of the fundamental problems in machine learning and optimization. Recent developments analyze algorithms that escape saddle points in nonconvex-strongly-convex bilevel optimization and give the first nonasymptotic analysis of a perturbed multi-step gradient descent ascent (GDmax) algorithm.
- Score: 48.925688192913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bilevel optimization is one of the fundamental problems in machine learning
and optimization. Recent theoretical developments in bilevel optimization focus
on finding the first-order stationary points for nonconvex-strongly-convex
cases. In this paper, we analyze algorithms that can escape saddle points in
nonconvex-strongly-convex bilevel optimization. Specifically, we show that the
perturbed approximate implicit differentiation (AID) with a warm start strategy
finds an $\epsilon$-approximate local minimum of bilevel optimization in
$\tilde{O}(\epsilon^{-2})$ iterations with high probability. Moreover, we
propose an inexact NEgative-curvature-Originated-from-Noise Algorithm (iNEON),
a pure first-order algorithm that can escape saddle points and find a local
minimum of stochastic bilevel optimization. As a by-product, we provide the
first nonasymptotic analysis of perturbed multi-step gradient descent ascent
(GDmax) algorithm that converges to a local minimax point for minimax problems.
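To make the main algorithmic idea concrete, here is a minimal sketch, not the authors' implementation, of perturbed hypergradient descent with approximate implicit differentiation (AID) and a warm-started inner solver; the toy scalar problem, step sizes, and perturbation radius are all illustrative assumptions.

import numpy as np

# A minimal sketch, NOT the authors' code: perturbed hypergradient descent
# with approximate implicit differentiation (AID) and a warm-started inner
# solver. The toy problem and all hyperparameters are assumptions.
rng = np.random.default_rng(0)

# Toy nonconvex-strongly-convex bilevel problem:
#   min_x f(x, y*(x))   s.t.   y*(x) = argmin_y g(x, y)
# with f(x, y) = cos(x) + 0.5*(x - y)**2  (nonconvex in x)
# and  g(x, y) = 0.5*(y - 2*x)**2         (strongly convex in y).
def grad_f_x(x, y):
    return -np.sin(x) + (x - y)

def grad_f_y(x, y):
    return -(x - y)

def grad_g_y(x, y):
    return y - 2.0 * x

def hypergradient(x, y):
    # AID formula: grad phi(x) = grad_x f - grad_xy g * (grad_yy g)^{-1} * grad_y f.
    # Here grad_yy g = 1 and grad_xy g = -2, so the linear system is trivial;
    # in general it is solved inexactly, e.g. by conjugate gradient.
    v = grad_f_y(x, y) / 1.0
    return grad_f_x(x, y) - (-2.0) * v

x, y = 3.0, 0.0
eta, tau, radius = 0.1, 0.5, 0.01
for _ in range(500):
    # Warm start: the inner loop resumes from the previous y, so a few
    # gradient steps suffice to track y*(x).
    for _ in range(10):
        y -= tau * grad_g_y(x, y)
    h = hypergradient(x, y)
    if abs(h) < 1e-3:
        # A small hypergradient may indicate a saddle point; an occasional
        # random perturbation lets gradient descent escape along directions
        # of negative curvature.
        h += rng.uniform(-radius, radius)
    x -= eta * h

print(f"x = {x:.4f}, |hypergradient| = {abs(hypergradient(x, y)):.2e}")

Loosely, the warm start means the inner solver only has to track the drifting lower-level solution between outer steps, which is the role the warm-start strategy plays in the abstract's $\tilde{O}(\epsilon^{-2})$ guarantee.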
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm has access only to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
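For context on this access model, a standard two-point zeroth-order gradient estimator, a generic construction and not necessarily the estimator analyzed in this paper, can be sketched as follows.

import numpy as np

# Generic two-point zeroth-order gradient estimator under noisy function
# evaluations; a textbook construction, not necessarily the paper's estimator.
rng = np.random.default_rng(1)

def noisy_eval(f, x, sigma=0.01):
    return f(x) + sigma * rng.standard_normal()

def zo_gradient(f, x, mu=0.01):
    # Finite difference along a random Gaussian direction u; in expectation
    # this approximates the gradient of a smoothed version of f.
    u = rng.standard_normal(x.shape)
    return (noisy_eval(f, x + mu * u) - noisy_eval(f, x - mu * u)) / (2.0 * mu) * u

f = lambda x: 0.5 * float(np.dot(x, x))   # strongly convex test function
x = np.ones(5)
for _ in range(2000):
    x -= 0.05 * zo_gradient(f, x)
print(np.linalg.norm(x))                  # should be small (noise-limited)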
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Adaptive Mirror Descent Bilevel Optimization [25.438298531555468]
We propose a class of efficient adaptive bilevel methods based on mirror descent for nonconvex bilevel optimization.
We provide a convergence analysis for these methods under suitable conditions, and prove that our methods enjoy fast iteration-complexity bounds.
arXiv Detail & Related papers (2023-11-08T08:17:09Z)
- On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis [18.08351275534193]
Bilevel optimization reveals the inner structure of otherwise oblique optimization problems.
A common goal in bilevel optimization is to minimize a hyper-objective that implicitly depends on the solution set of the lower-level function.
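Concretely, the hyper-objective in question is usually written as $\varphi(x) = f(x, y^*(x))$ with $y^*(x) \in \arg\min_y g(x, y)$, so $\varphi$ depends on $x$ both directly and through the lower-level solution.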
arXiv Detail & Related papers (2023-01-02T15:09:12Z)
- Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization [14.719077076351377]
We study the problem of finding a near-stationary point for smooth minimax optimization.
We show that a novel algorithm called Recursive Anchored IteratioN (RAIN) achieves near-optimal rates in both the convex-concave and the strongly-convex-strongly-concave cases.
arXiv Detail & Related papers (2022-08-11T16:55:26Z)
- A Fast and Convergent Proximal Algorithm for Regularized Nonconvex and Nonsmooth Bi-level Optimization [26.68351521813062]
Existing bi-level algorithms cannot handle nonconvex or nonsmooth regularizers.
In this paper, we show that an approximate implicit differentiation (AID) scheme, combined with a proximal step, can be used to handle such regularizers efficiently in machine learning applications.
arXiv Detail & Related papers (2022-03-30T18:53:04Z)
- A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- the Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA) -- for tackling stochastic bilevel optimization problems.
MSTSA allows us to control the error in the iterates caused by inexact solutions of the lower-level subproblem.
arXiv Detail & Related papers (2021-02-15T07:10:33Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization arising in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems that exhibit a two-level structure.
We propose a two-timescale stochastic approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
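As an illustration of the two-timescale pattern, not the authors' TTSA code, the sketch below runs the lower-level variable on a faster timescale (larger step size) than the upper-level variable; the toy problem and step sizes are assumptions.

import numpy as np

# Minimal sketch of the generic two-timescale pattern: y tracks the
# lower-level solution y*(x) with a large stepsize, while x moves slowly
# using a hypergradient surrogate evaluated at the current y.
rng = np.random.default_rng(2)

# Toy bilevel problem: g(x, y) = 0.5*(y - x)^2 (so y*(x) = x),
# f(x, y) = 0.5*(y - 1)^2, hence the hyper-objective is 0.5*(x - 1)^2.
x, y = 4.0, 0.0
alpha, beta = 0.01, 0.2     # slow (upper) vs. fast (lower) stepsizes
for _ in range(5000):
    gy = (y - x) + 0.05 * rng.standard_normal()     # noisy lower-level gradient
    y -= beta * gy
    hx = (y - 1.0) + 0.05 * rng.standard_normal()   # noisy hypergradient surrogate
    x -= alpha * hx
print(f"x = {x:.3f} (target 1.0), y = {y:.3f}")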
arXiv Detail & Related papers (2020-07-10T05:20:02Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive over non-adaptive gradient algorithms can be observed empirically in GAN training.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.