Escaping limit cycles: Global convergence for constrained
nonconvex-nonconcave minimax problems
- URL: http://arxiv.org/abs/2302.09831v1
- Date: Mon, 20 Feb 2023 08:37:08 GMT
- Title: Escaping limit cycles: Global convergence for constrained
nonconvex-nonconcave minimax problems
- Authors: Thomas Pethick, Puya Latafat, Panagiotis Patrinos, Olivier Fercoq,
Volkan Cevher
- Abstract summary: This paper introduces a new extragradient-type algorithm for a class of nonconvex-nonconcave minimax problems.
The proposed algorithm is applicable to constrained and regularized problems.
- Score: 46.71914490521821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a new extragradient-type algorithm for a class of
nonconvex-nonconcave minimax problems. It is well-known that finding a local
solution for general minimax problems is computationally intractable. This
observation has recently motivated the study of structures sufficient for
convergence of first order methods in the more general setting of variational
inequalities when the so-called weak Minty variational inequality (MVI) holds.
This problem class captures non-trivial structures as we demonstrate with
examples, for which a large family of existing algorithms provably converge to
limit cycles. Our results require a less restrictive parameter range in the
weak MVI compared to what is previously known, thus extending the applicability
of our scheme. The proposed algorithm is applicable to constrained and
regularized problems, and involves an adaptive stepsize allowing for
potentially larger stepsizes. Our scheme also converges globally even in
settings where the underlying operator exhibits limit cycles.
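To make the setting concrete, below is a minimal sketch of a projected (constrained) extragradient update on a toy bilinear saddle problem. The box constraint, the objective f(x, y) = x * y, and the constant stepsize are illustrative assumptions; this is the textbook extragradient template, not the paper's specific adaptive scheme.

```python
import numpy as np

def project(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto a box (an illustrative constraint set)."""
    return np.clip(z, lo, hi)

def saddle_operator(z):
    """Saddle operator F(x, y) = (grad_x f, -grad_y f) for the toy
    objective f(x, y) = x * y (example only)."""
    x, y = z
    return np.array([y, -x])

def extragradient(z0, step=0.1, iters=2000):
    """Textbook projected extragradient iteration:
    z_bar = P(z - step * F(z));  z_next = P(z - step * F(z_bar))."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_bar = project(z - step * saddle_operator(z))
        z = project(z - step * saddle_operator(z_bar))
    return z

print(extragradient([0.8, -0.6]))  # approaches the saddle point (0, 0)
```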
Related papers
- Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis [2.1756081703276]
We analyze the behavior of a natural variance reduced proximal gradient (VRPG) algorithm for convex optimization under convex constraints.
Our main result is a non-asymptotic guarantee for the VRPG algorithm.
We show that our guarantee captures the complexity of the loss function, the variability of the noise, and the geometry of the constraint set.
arXiv Detail & Related papers (2024-03-24T14:45:11Z) - Decentralized gradient descent maximization method for composite
nonconvex strongly-concave minimax problems [7.5573375809946395]
We make the first attempt at solving NCSC minimax problems whose objectives include nonsmooth composite terms, targeting stationary points.
Our algorithm is designed based on a novel reformulation of the minimax problem.
arXiv Detail & Related papers (2023-04-05T13:54:43Z) - Decentralized Riemannian Algorithm for Nonconvex Minimax Problems [82.50374560598493]
Minimax algorithms for neural networks have been developed to solve many problems.
In this paper, we propose two types of decentralized Riemannian minimax algorithms.
For the stochastic setting, we propose DRSGDA and prove a gradient-complexity guarantee for our method.
arXiv Detail & Related papers (2023-02-08T01:42:45Z) - Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Stability and Generalization of Stochastic Gradient Methods for Minimax
Problems [71.60601421935844]
Many machine learning problems can be formulated as minimax problems, such as Generative Adversarial Networks (GANs).
We provide a comprehensive generalization analysis of stochastic gradient methods for training minimax problems.
arXiv Detail & Related papers (2021-05-08T22:38:00Z) - Cyclic Coordinate Dual Averaging with Extrapolation [22.234715500748074]
We introduce a new block coordinate method that applies to the general class of variational inequality (VI) problems with monotone operators.
The resulting convergence bounds match the optimal convergence bounds of full gradient methods.
For $m$ coordinate blocks, the resulting gradient Lipschitz constant in our bounds is never larger than a factor $\sqrt{m}$ compared to the traditional Euclidean Lipschitz constant.
arXiv Detail & Related papers (2021-02-26T00:28:58Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to general $p$-normed spaces, but also to general $p$-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - Global Convergence and Variance-Reduced Optimization for a Class of
Nonconvex-Nonconcave Minimax Problems [39.13924898972611]
Nonconvex-nonconcave minimax problems appear frequently in emerging machine learning applications such as generative adversarial networks and adversarial learning.
GDA algorithms with constant stepsize can potentially diverge even in the convex setting (illustrated in the sketch after this list).
The AGDA algorithm converges globally at a sublinear rate.
arXiv Detail & Related papers (2020-02-22T04:20:37Z)
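The divergence of constant-stepsize gradient descent-ascent mentioned in the last summary can be reproduced on the classic bilinear objective f(x, y) = x * y; the instance and stepsize below are illustrative choices, not taken from any of the papers above.

```python
import numpy as np

def gda_step(x, y, step):
    """One simultaneous gradient descent-ascent step on f(x, y) = x * y:
    descend in x, ascend in y."""
    return x - step * y, y + step * x

x, y = 1.0, 1.0
for _ in range(500):
    x, y = gda_step(x, y, step=0.1)

# Each step scales the distance to the saddle (0, 0) by sqrt(1 + step**2),
# so the iterates spiral outward instead of converging.
print(np.hypot(x, y))  # roughly (1 + 0.1**2) ** 250 * sqrt(2), i.e. about 17
```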