Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization
- URL: http://arxiv.org/abs/2011.00364v2
- Date: Sat, 27 Feb 2021 22:09:24 GMT
- Title: Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization
- Authors: Jelena Diakonikolas, Constantinos Daskalakis, and Michael I. Jordan
- Abstract summary: We propose a generalization of the extragradient algorithm which provably converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional real vector spaces.
- Score: 98.0595480384208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of min-max optimization in adversarial training of deep neural
network classifiers and training of generative adversarial networks has
motivated the study of nonconvex-nonconcave optimization objectives, which
frequently arise in these applications. Unfortunately, recent results have
established that even approximate first-order stationary points of such
objectives are intractable, even under smoothness conditions, motivating the
study of min-max objectives with additional structure. We introduce a new class
of structured nonconvex-nonconcave min-max optimization problems, proposing a
generalization of the extragradient algorithm which provably converges to a
stationary point. The algorithm applies not only to Euclidean spaces, but also
to general $\ell_p$-normed finite-dimensional real vector spaces. We also
discuss its stability under stochastic oracles and provide bounds on its sample
complexity. Our iteration complexity and sample complexity bounds either match
or improve the best known bounds for the same or less general
nonconvex-nonconcave settings, such as those that satisfy variational coherence
or in which a weak solution to the associated variational inequality problem is
assumed to exist.
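To ground the abstract, the following is a minimal sketch of the classical Euclidean extragradient method that the paper generalizes. The operator $F$, the step size, and the bilinear toy problem are illustrative assumptions; the paper's $\ell_p$-space variant and its step-size conditions are not reproduced here.

```python
import numpy as np

# Minimal sketch of the classical (Euclidean) extragradient method for
# min_x max_y f(x, y), written against the joint operator
#   F(x, y) = (grad_x f(x, y), -grad_y f(x, y)).
# Illustrative only: the paper's generalization to l_p-normed spaces and its
# step-size rules are not reproduced here.

def extragradient(F, z0, step=0.1, iters=1000, tol=1e-8):
    """Run the extragradient iteration from z0; F maps R^d -> R^d."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        g = F(z)
        if np.linalg.norm(g) < tol:   # approximate stationarity reached
            break
        z_half = z - step * g         # extrapolation (look-ahead) step
        z = z - step * F(z_half)      # update using the look-ahead operator
    return z

# Toy bilinear problem f(x, y) = x * y, so F(x, y) = (y, -x). Plain gradient
# descent-ascent cycles on this problem, while extragradient converges.
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, z0=[1.0, 1.0]))  # approaches the saddle point (0, 0)
```

The look-ahead step is the key design choice: evaluating $F$ at the extrapolated point damps the rotation that makes plain gradient descent-ascent diverge on such problems.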
Related papers
- Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization [34.628967272528044]
This paper investigates projection-free algorithms for constrained multi-level compositional optimization problems.
By using a stage-wise adaptation, we obtain complexity bounds for convex and strongly convex functions.
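For context on the projection-free primitive this entry builds on, here is a minimal sketch of a generic Frank-Wolfe (conditional gradient) step, which replaces projections with a linear minimization oracle. The objective, the $\ell_1$-ball constraint, and the step-size schedule are illustrative assumptions, not the paper's multi-level method.

```python
import numpy as np

# Generic Frank-Wolfe (conditional gradient) sketch: instead of projecting
# onto the feasible set, each iteration calls a linear minimization oracle
# (LMO) and moves toward its output. Illustrative assumptions throughout.

def lmo_l1_ball(grad, radius=1.0):
    # argmin_{||s||_1 <= radius} <grad, s>: a signed vertex of the l1 ball.
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def frank_wolfe(grad_f, x0, iters=200, radius=1.0):
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        s = lmo_l1_ball(grad_f(x), radius)
        gamma = 2.0 / (k + 2)            # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Toy problem: minimize 0.5 * ||x - b||^2 over the l1 ball of radius 1.
b = np.array([0.8, -0.3])
print(frank_wolfe(lambda x: x - b, x0=np.zeros(2)))
```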
arXiv Detail & Related papers (2024-06-06T06:56:56Z) - Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional settings in which the problem plausibly admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
arXiv Detail & Related papers (2024-02-14T10:48:00Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for
Solving Nonconvex Min-Max Problems [2.0305676256390934]
We characterize the finite-time performance of the continuous-time variant of the simultaneous gradient descent-ascent algorithm.
Our results on the behavior of the continuous-time algorithm may be used to enhance the convergence properties of its discrete-time counterpart.
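As a concrete reference point for this entry, here is a minimal Euler discretization of two-time-scale gradient descent-ascent dynamics. The toy objective, step size, and time-scale ratio are illustrative assumptions; the paper analyzes the continuous-time flow itself.

```python
# Sketch of two-time-scale gradient descent-ascent (GDA) as an Euler
# discretization of the continuous-time dynamics
#   x'(t) = -grad_x f(x, y),   y'(t) = (1 / eps) * grad_y f(x, y),
# where eps < 1 puts the ascent variable y on the faster time scale.
# The objective and all parameters below are illustrative assumptions.

def two_time_scale_gda(gx, gy, x0, y0, step=0.05, eps=0.1, iters=2000):
    x, y = float(x0), float(y0)
    for _ in range(iters):
        x -= step * gx(x, y)          # slow descent step on x
        y += (step / eps) * gy(x, y)  # fast ascent step on y
    return x, y

# Toy saddle problem f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle at (0, 0).
gx = lambda x, y: x + y   # grad_x f
gy = lambda x, y: x - y   # grad_y f
print(two_time_scale_gda(gx, gy, x0=1.0, y0=-1.0))  # approaches (0, 0)
```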
arXiv Detail & Related papers (2021-12-17T15:51:04Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
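To make the primal-dual idea concrete, here is a minimal generic sketch of Lagrangian-based primal ascent with dual updates for a constrained maximization problem. The reward, constraint, and step sizes are illustrative assumptions and do not reproduce the paper's regularizers or acceleration.

```python
# Generic primal-dual sketch for  max_theta r(theta)  s.t.  c(theta) <= 0,
# via the Lagrangian L(theta, lam) = r(theta) - lam * c(theta):
# gradient ascent on theta, projected gradient update on lam >= 0.
# r, c, and the step sizes are illustrative assumptions (not the paper's
# regularized policy / dual optimizers).

def primal_dual(grad_r, c, grad_c, theta0, step=0.05, iters=2000):
    theta, lam = float(theta0), 0.0
    for _ in range(iters):
        theta += step * (grad_r(theta) - lam * grad_c(theta))  # primal ascent
        lam = max(0.0, lam + step * c(theta))                  # dual update
    return theta, lam

# Toy problem: maximize r(theta) = -(theta - 2)^2 subject to theta <= 1.
grad_r = lambda t: -2.0 * (t - 2.0)
c = lambda t: t - 1.0          # c(theta) <= 0  <=>  theta <= 1
grad_c = lambda t: 1.0
print(primal_dual(grad_r, c, grad_c, theta0=0.0))  # theta -> 1, lam -> 2
```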
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained
Optimization [25.73397307080647]
We present a new model of general convex or nonconvex sparse machine learning objectives.
We propose a proximal point algorithm that solves a sequence of subproblems with gradually relaxed constraint levels.
We demonstrate the effectiveness of our method on large-scale numerical problems.
arXiv Detail & Related papers (2020-10-23T05:24:05Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to their applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
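To illustrate the stationarity measure used in the entry directly above, here is a minimal sketch of the gradient norm of the Moreau envelope. The $\ell_1$-norm test function (whose prox is coordinate-wise soft-thresholding) and the parameter $\lambda$ are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Sketch of the stationarity measure above: the norm of the gradient of the
# Moreau envelope  f_lam(x) = min_y f(y) + ||y - x||^2 / (2 * lam),
# which satisfies  grad f_lam(x) = (x - prox_{lam * f}(x)) / lam.
# Illustrative choice: f = l1 norm, whose prox is soft-thresholding.

def prox_l1(x, lam):
    # prox_{lam * ||.||_1}(x): coordinate-wise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_grad_norm(x, lam=0.5):
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x - prox_l1(x, lam)) / lam

# A small value certifies near-stationarity of x for f in the weakly convex
# sense; here the measure is nonzero away from the minimizer at the origin.
print(moreau_grad_norm([0.3, -2.0]))  # approximately 1.17
```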
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.