A Unified Single-loop Alternating Gradient Projection Algorithm for
Nonconvex-Concave and Convex-Nonconcave Minimax Problems
- URL: http://arxiv.org/abs/2006.02032v3
- Date: Tue, 19 Jul 2022 16:22:28 GMT
- Title: A Unified Single-loop Alternating Gradient Projection Algorithm for
Nonconvex-Concave and Convex-Nonconcave Minimax Problems
- Authors: Zi Xu, Huiling Zhang, Yang Xu and Guanghui Lan
- Abstract summary: We develop an efficient single-loop algorithm for solving minimax problems with theoretical convergence guarantees.
We show that the proposed algorithm can be used to solve both nonconvex-(strongly) concave and (strongly) convex-nonconcave minimax problems.
- Score: 8.797831153231664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much recent research effort has been directed to the development of efficient
algorithms for solving minimax problems with theoretical convergence guarantees
due to the relevance of these problems to a few emergent applications. In this
paper, we propose a unified single-loop alternating gradient projection (AGP)
algorithm for solving smooth nonconvex-(strongly) concave and (strongly)
convex-nonconcave minimax problems. AGP employs simple gradient projection
steps for updating the primal and dual variables alternately at each
iteration. We show that it can find an $\varepsilon$-stationary point of the
objective function in $\mathcal{O}\left( \varepsilon ^{-2} \right)$ (resp.
$\mathcal{O}\left( \varepsilon ^{-4} \right)$) iterations under
nonconvex-strongly concave (resp. nonconvex-concave) setting. Moreover, its
gradient complexity to obtain an $\varepsilon$-stationary point of the
objective function is bounded by $\mathcal{O}\left( \varepsilon ^{-2} \right)$
(resp., $\mathcal{O}\left( \varepsilon ^{-4} \right)$) under the strongly
convex-nonconcave (resp., convex-nonconcave) setting. To the best of our
knowledge, this is the first time that a simple and unified single-loop
algorithm is developed for solving both nonconvex-(strongly) concave and
(strongly) convex-nonconcave minimax problems. Moreover, the complexity results
for solving the latter (strongly) convex-nonconcave minimax problems have never
been obtained before in the literature. Numerical results show the efficiency
of the proposed AGP algorithm. Furthermore, we extend the AGP algorithm by
presenting a block alternating proximal gradient (BAPG) algorithm for solving
more general multi-block nonsmooth nonconvex-(strongly) concave and (strongly)
convex-nonconcave minimax problems. We can similarly establish the gradient
complexity of the proposed algorithm under these four different settings.
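The alternating primal-descent/dual-ascent structure described in the abstract can be sketched as follows. This is a minimal illustration of plain alternating gradient projection steps on a toy nonconvex-strongly-concave problem, not the exact AGP method of the paper (which adds regularization terms and specific step-size schedules); the objective, box constraints, and step sizes below are assumptions chosen for the demo.

```python
import numpy as np

def proj_box(v, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(v, lo, hi)

def agp(grad_x, grad_y, x0, y0, eta_x=0.05, eta_y=0.1, iters=2000):
    """Single-loop alternating gradient projection: a projected descent
    step in x, then a projected ascent step in y that already uses the
    fresh x (the 'alternating' order of the updates)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = proj_box(x - eta_x * grad_x(x, y))
        y = proj_box(y + eta_y * grad_y(x, y))
    return x, y

# Toy objective f(x, y) = sum(cos(x)) + 2*<x, y> - ||y||^2:
# nonconvex in x, strongly concave in y.
grad_x = lambda x, y: -np.sin(x) + 2.0 * y   # partial derivative in x
grad_y = lambda x, y: 2.0 * x - 2.0 * y      # partial derivative in y

x, y = agp(grad_x, grad_y, np.full(3, 0.5), np.zeros(3))
# The iterates approach the stationary point x = y = 0.
```

With the smaller primal step size, y tracks the inner maximizer while x slowly descends, which is the intuition behind the single-loop analysis.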
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) estimators for proximal gradient algorithms.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{dn^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization [77.3396841985172]
We provide a unified analysis of two-timescale gradient descent ascent (TTGDA) for solving structured nonconvex minimax optimization problems.
Our contribution is to design TTGDA algorithms that are effective beyond the convex-concave setting.
arXiv Detail & Related papers (2024-08-21T20:14:54Z) - Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions [41.43895948769255]
We study a class of non-smooth non-convex problems of the form $\min_x\left[\max_{y\in Y}\phi(x, y) - \max_{z\in Z}\psi(x, z)\right]$.
We propose a stochastic Moreau envelope approximate gradient method (SMAG), the first single-loop algorithm for solving these problems, and provide a state-of-the-art non-asymptotic convergence rate.
arXiv Detail & Related papers (2024-05-28T20:52:46Z) - Zeroth-Order primal-dual Alternating Projection Gradient Algorithms for
Nonconvex Minimax Problems with Coupled linear Constraints [3.898056433020865]
We propose two zeroth-order algorithms for nonconvex minimax problems with coupled linear constraints.
To the best of our knowledge, they are the first two zeroth-order algorithms with iteration complexity guarantees for this class of problems.
arXiv Detail & Related papers (2024-01-26T11:22:13Z) - Decentralized Riemannian Algorithm for Nonconvex Minimax Problems [82.50374560598493]
Minimax optimization algorithms have been developed to solve many machine learning problems.
In this paper, we propose two types of decentralized Riemannian minimax algorithms.
For the stochastic setting, we propose DRSGDA and prove that our method achieves a gradient complexity guarantee.
arXiv Detail & Related papers (2023-02-08T01:42:45Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to general conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Primal Dual Alternating Proximal Gradient Algorithms for Nonsmooth Nonconvex Minimax Problems with Coupled Linear Constraints [4.70696854954921]
Nonsmooth nonconvex minimax problems have attracted wide attention in machine learning, signal processing and many other fields in recent years.
We propose a primal-dual alternating proximal gradient algorithm for solving nonsmooth nonconvex-(strongly) concave minimax problems with coupled linear constraints.
arXiv Detail & Related papers (2022-12-09T05:32:30Z) - A first-order primal-dual method with adaptivity to local smoothness [64.62056765216386]
We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth.
We propose an adaptive version of the Condat-Vu algorithm, which alternates between primal gradient steps and dual steps.
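The alternation described above can be made concrete with a minimal, non-adaptive Condat-Vu-style iteration (fixed step sizes, not the adaptive scheme of the paper). The instance below, $f(x)=\frac{1}{2}\|x-b\|^2$, $g$ the $\ell_1$ norm so that $g^*$ is the indicator of the unit box, and $A=I$, is an assumption chosen for illustration; its solution is soft-thresholding of $b$ at level 1.

```python
import numpy as np

def condat_vu(grad_f, A, prox_gstar, x0, y0, tau=0.5, sigma=1.0, iters=2000):
    """Primal-dual iteration alternating a primal gradient step with a
    dual proximal step evaluated at the extrapolated point 2*x_new - x."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = x - tau * (grad_f(x) + A.T @ y)
        y = prox_gstar(y + sigma * (A @ (2.0 * x_new - x)))
        x = x_new
    return x, y

# Instance: min_x 0.5*||x - b||^2 + ||x||_1 with A = I.
b = np.array([2.0, -0.5, 1.5])
grad_f = lambda x: x - b                       # gradient of 0.5*||x - b||^2
prox_gstar = lambda y: np.clip(y, -1.0, 1.0)   # g* = indicator of [-1, 1]^d
A = np.eye(3)

x, y = condat_vu(grad_f, A, prox_gstar, np.zeros(3), np.zeros(3))
# x approaches soft-threshold(b, 1) = [1.0, 0.0, 0.5].
```

The fixed steps satisfy the usual condition $1/\tau - \sigma\|A\|^2 \ge L_f/2$ for this instance ($2 - 1 \ge 1/2$); the adaptive method in the paper instead adjusts the steps to the local smoothness of $f$.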
arXiv Detail & Related papers (2021-10-28T14:19:30Z) - Derivative-free Alternating Projection Algorithms for General
Nonconvex-Concave Minimax Problems [9.173866646584031]
In this paper, we propose derivative-free alternating projection algorithms for nonsmooth zeroth-order minimax problems.
We show that they can be applied to general nonconvex-concave minimax problems.
arXiv Detail & Related papers (2021-08-01T15:23:49Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we consider the nonconvex-strongly-concave setting of gradient-free minimax optimization problems.
We show that a novel zeroth-order variance-reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.