Fixed-Time Convergence for a Class of Nonconvex-Nonconcave Min-Max
Problems
- URL: http://arxiv.org/abs/2207.12845v1
- Date: Tue, 26 Jul 2022 12:25:05 GMT
- Title: Fixed-Time Convergence for a Class of Nonconvex-Nonconcave Min-Max
Problems
- Authors: Kunal Garg and Mayank Baranwal
- Abstract summary: We develop a fixed-time convergent saddle point dynamical system for solving min-max problems.
The proposed method achieves arbitrarily fast convergence compared to state-of-the-art methods.
- Score: 5.787117733071416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study develops a fixed-time convergent saddle point dynamical system for
solving min-max problems under a relaxation of standard convexity-concavity
assumption. In particular, it is shown that by leveraging the dynamical systems
viewpoint of an optimization algorithm, accelerated convergence to a saddle
point can be obtained. Instead of requiring the objective function to be
strongly-convex--strongly-concave (as necessitated for accelerated convergence
of several saddle-point algorithms), uniform fixed-time convergence is
guaranteed for functions satisfying only the two-sided Polyak-{\L}ojasiewicz
(PL) inequality. A large number of practical problems, including the robust
least squares estimation, are known to satisfy the two-sided PL inequality. The
proposed method achieves arbitrarily fast convergence compared to any other
state-of-the-art method with linear or even super-linear convergence, as also
corroborated in numerical case studies.
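For concreteness, the two-sided PL inequality on a saddle function $f(x,y)$ is commonly stated as follows; the constants $\mu_1, \mu_2 > 0$ are generic here and the paper's exact formulation may differ:
\[
\|\nabla_x f(x,y)\|^2 \geq 2\mu_1\bigl(f(x,y) - \min_{x'} f(x',y)\bigr), \qquad
\|\nabla_y f(x,y)\|^2 \geq 2\mu_2\bigl(\max_{y'} f(x,y') - f(x,y)\bigr).
\]
Strong convexity-strong concavity implies this condition but not conversely; the robust least squares estimation problem mentioned above is a standard example satisfying it.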
Related papers
- The inexact power augmented Lagrangian method for constrained nonconvex optimization [44.516958213972885]
This work introduces an unconventional augmented Lagrangian term, where the augmenting term is a Euclidean norm raised to a power.
We show that using lower powers for the augmenting term yields a faster rate, albeit with a slower decrease in the residual.
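As a hedged illustration of this construction: for an equality-constrained problem $\min_x f(x)$ subject to $c(x) = 0$, an augmented Lagrangian with a power augmenting term takes the generic form
\[
\mathcal{L}_{\rho,\nu}(x,y) = f(x) + \langle y, c(x)\rangle + \frac{\rho}{\nu}\,\|c(x)\|^{\nu},
\]
where $\nu = 2$ recovers the classical quadratic augmented Lagrangian and the "lower powers" referenced above correspond to $\nu < 2$; the paper's exact formulation and inexactness conditions may differ.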
arXiv Detail & Related papers (2024-10-26T11:31:56Z)
- High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z)
- Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points [8.452349885923507]
Gradient-based first-order convex optimization algorithms find widespread applicability in a variety of domains, including machine learning tasks.
Motivated by recent advances in the fixed-time stability theory of continuous-time dynamical systems, we introduce a framework for designing accelerated optimization algorithms.
For functions that admit non-degenerate saddle points, we show that the time required to evade these saddle points is uniformly bounded for all initial conditions.
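As a sketch of the kind of flow studied in this line of work, a representative fixed-time gradient flow rescales the gradient with two exponents; the constants $c_1, c_2 > 0$ and exponents $p_1 > 2$, $p_2 \in (1,2)$ are assumptions of this illustration, not necessarily the paper's design:
\[
\dot{x} = -c_1\,\frac{\nabla f(x)}{\|\nabla f(x)\|^{(p_1-2)/(p_1-1)}} \;-\; c_2\,\frac{\nabla f(x)}{\|\nabla f(x)\|^{(p_2-2)/(p_2-1)}}.
\]
Unlike the plain gradient flow $\dot{x} = -\nabla f(x)$, such rescaled flows admit a convergence-time bound that holds uniformly over all initial conditions.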
arXiv Detail & Related papers (2022-12-07T16:36:23Z)
- On the Complexity of a Practical Primal-Dual Coordinate Method [63.899427212054995]
We prove complexity bounds for the primal-dual algorithm with random extrapolation and coordinate descent (PURE-CD).
It has been shown to obtain good practical performance for solving convex-concave min-max problems with bilinear coupling.
arXiv Detail & Related papers (2022-01-19T16:14:27Z)
- Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for Solving Nonconvex Min-Max Problems [2.0305676256390934]
We characterize the finite-time performance of the continuous-time variant of the simultaneous gradient descent-ascent algorithm.
Our results on the behavior of the continuous-time algorithm may be used to enhance the convergence properties of its discrete-time counterpart.
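The continuous-time dynamics referenced above can be sketched as a two-time-scale ODE, with a small parameter $\tau > 0$ separating the timescales (notation assumed here for illustration):
\[
\dot{x}(t) = -\nabla_x f\bigl(x(t), y(t)\bigr), \qquad
\tau\,\dot{y}(t) = \nabla_y f\bigl(x(t), y(t)\bigr),
\]
so the ascent variable $y$ evolves on a faster timescale than the descent variable $x$.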
arXiv Detail & Related papers (2021-12-17T15:51:04Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new dual convex approach is proposed with a novel integration of three ingredients: a regularized policy, a dual regularizer, and Nesterov's accelerated gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the $\mathcal{O}(1/\epsilon)$ complexity lower bound established for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they use a constant step size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
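A minimal sketch of constant step-size SGDA in the setting described above; the sampling oracle stoch_grads and the step size eta are illustrative assumptions, not the paper's notation:

import numpy as np

def sgda(x0, y0, stoch_grads, eta=0.01, n_iters=1000):
    """Constant step-size stochastic gradient descent-ascent (SGDA).

    stoch_grads(x, y) must return an unbiased sample (g_x, g_y) of
    (grad_x f(x, y), grad_y f(x, y)).
    """
    x = np.asarray(x0, dtype=float)
    y = np.asarray(y0, dtype=float)
    for _ in range(n_iters):
        g_x, g_y = stoch_grads(x, y)
        x = x - eta * g_x  # descent step for the min player
        y = y + eta * g_y  # ascent step for the max player
    return x, y

The same loop accommodates minibatching by letting stoch_grads average several samples per call.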
arXiv Detail & Related papers (2021-06-30T18:32:46Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
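For reference, the classical extragradient update being generalized reads as follows, with the game operator $F(z) = (\nabla_x f(x,y),\, -\nabla_y f(x,y))$ for $z = (x,y)$ and step size $\eta > 0$ (generic form; the paper's variant and the spaces it handles differ):
\[
z_{k+1/2} = z_k - \eta\,F(z_k), \qquad z_{k+1} = z_k - \eta\,F(z_{k+1/2}).
\]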
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z)