Zeroth-Order Primal-Dual Alternating Projection Gradient Algorithms for
Nonconvex Minimax Problems with Coupled Linear Constraints
- URL: http://arxiv.org/abs/2402.03352v1
- Date: Fri, 26 Jan 2024 11:22:13 GMT
- Title: Zeroth-Order Primal-Dual Alternating Projection Gradient Algorithms for
Nonconvex Minimax Problems with Coupled Linear Constraints
- Authors: Huiling Zhang, Zi Xu, Yuhong Dai
- Abstract summary: We propose two zeroth-order single-loop algorithms for nonconvex-(strongly) concave minimax problems with coupled linear constraints.
To the best of our knowledge, they are the first two zeroth-order algorithms with iteration complexity guarantees for this class of problems.
- Score: 3.898056433020865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study zeroth-order algorithms for nonconvex minimax
problems with coupled linear constraints under the deterministic and stochastic
settings, which have attracted wide attention in machine learning, signal
processing and many other fields in recent years, e.g., adversarial attacks in
resource allocation problems and network flow problems, etc. We propose two
single-loop algorithms, namely the zeroth-order primal-dual alternating projected
gradient (ZO-PDAPG) algorithm and the zeroth-order regularized momentum
primal-dual projected gradient algorithm (ZO-RMPDPG), for solving deterministic
and stochastic nonconvex-(strongly) concave minimax problems with coupled
linear constraints. The iteration complexity of the two proposed algorithms to
obtain an $\varepsilon$-stationary point is proved to be
$\mathcal{O}(\varepsilon ^{-2})$ (resp. $\mathcal{O}(\varepsilon ^{-4})$) for
solving nonconvex-strongly concave (resp. nonconvex-concave) minimax problems
with coupled linear constraints under deterministic settings and
$\tilde{\mathcal{O}}(\varepsilon ^{-3})$ (resp.
$\tilde{\mathcal{O}}(\varepsilon ^{-6.5})$) under stochastic settings
respectively. To the best of our knowledge, they are the first two zeroth-order
algorithms with iteration complexity guarantees for solving
nonconvex-(strongly) concave minimax problems with coupled linear constraints
under the deterministic and stochastic settings.
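To make the single-loop pattern concrete, the following Python sketch pairs a two-point Gaussian-smoothing gradient estimator with alternating projected gradient steps on a toy nonconvex-strongly-concave problem whose dual variable obeys a coupled linear constraint $a^\top x + b^\top y \le c$. The estimator, step sizes, smoothing radius `mu`, and halfspace projection are illustrative assumptions for this sketch, not the parameters or exact updates of ZO-PDAPG/ZO-RMPDPG.

```python
import numpy as np

def zo_grad(f, x, mu=1e-4, n_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x via Gaussian
    smoothing: average of (f(x+mu*u) - f(x-mu*u)) / (2*mu) * u."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_samples

def project_halfspace(y, b, rhs):
    """Euclidean projection of y onto the halfspace {y : b^T y <= rhs}."""
    viol = b @ y - rhs
    return y if viol <= 0 else y - viol * b / (b @ b)

rng = np.random.default_rng(0)
d = 5
a, b, c = rng.standard_normal(d), rng.standard_normal(d), 1.0

# Toy objective: nonconvex in x (cosines), strongly concave in y.
f = lambda x, y: np.sum(np.cos(x)) + x @ y - y @ y

x, y = np.zeros(d), np.zeros(d)
eta_x, eta_y = 0.05, 0.1  # illustrative step sizes, not tuned rates
for t in range(300):
    # Primal (minimizing) step using function values only; x is
    # unconstrained in this toy, so no primal projection is needed.
    x = x - eta_x * zo_grad(lambda z: f(z, y), x, rng=rng)
    # Dual (maximizing) step, projected onto the coupled constraint
    # {y : b^T y <= c - a^T x}, which depends on the current x.
    gy = zo_grad(lambda z: f(x, z), y, rng=rng)
    y = project_halfspace(y + eta_y * gy, b, c - a @ x)
print("f(x, y) after 300 iterations:", f(x, y))
```

Each iteration takes exactly one primal and one dual step, which is what makes the scheme single-loop; the paper's analysis supplies the step-size and smoothing choices behind the stated $\mathcal{O}(\varepsilon^{-2})$ and $\mathcal{O}(\varepsilon^{-4})$ deterministic rates.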
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two lightweight variance-reduced zeroth-order (ZO) estimators for nonconvex proximal gradient algorithms; a minimal sketch of the variance-reduction idea appears after this list.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{dn^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - An accelerated first-order regularized momentum descent ascent algorithm for stochastic nonconvex-concave minimax problems [0.4910937238451484]
We propose an accelerated first-order regularized momentum descent ascent algorithm (FORMDA) for solving stochastic nonconvex-concave minimax problems.
It achieves the best known complexity bound for single-loop algorithms to solve stochastic nonconvex-concave minimax problems under stationarity of the objective function.
arXiv Detail & Related papers (2023-10-24T01:45:11Z) - Decentralized Riemannian Algorithm for Nonconvex Minimax Problems [82.50374560598493]
Minimax algorithms over Riemannian manifolds have been developed to solve many machine learning problems, such as training neural networks.
In this paper, we propose two types of decentralized Riemannian minimax algorithms.
For the stochastic setting, we propose DRSGDA and prove a gradient-complexity guarantee for our method.
arXiv Detail & Related papers (2023-02-08T01:42:45Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to nonconvex conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Primal Dual Alternating Proximal Gradient Algorithms for Nonsmooth Nonconvex Minimax Problems with Coupled Linear Constraints [4.70696854954921]
Nonsmooth nonconvex minimax problems have attracted wide attention in machine learning, signal processing and many other fields in recent years.
We propose a primal-dual alternating proximal gradient algorithm for solving nonsmooth nonconvex-(strongly) concave minimax problems with coupled linear constraints.
arXiv Detail & Related papers (2022-12-09T05:32:30Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, given as the solution of an intricate nonconvex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - A Projection-free Algorithm for Constrained Stochastic Multi-level
Composition Optimization [12.096252285460814]
We propose a projection-free conditional gradient-type algorithm for constrained stochastic multi-level composition optimization.
We show that the number of calls to the stochastic first-order oracle and the linear-minimization oracle required by the proposed algorithm are of order $\mathcal{O}_T(\epsilon^{-2})$ and $\mathcal{O}_T(\epsilon^{-3})$, respectively.
arXiv Detail & Related papers (2022-02-09T06:05:38Z) - Derivative-free Alternating Projection Algorithms for General
Nonconvex-Concave Minimax Problems [9.173866646584031]
In this paper, we propose a derivative-free alternating projection algorithm for nonsmooth zeroth-order minimax problems.
We show that it can be applied to adversarial attack problems, which are instances of nonconvex-concave minimax problems.
arXiv Detail & Related papers (2021-08-01T15:23:49Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we study gradient-free minimax optimization in the nonconvex-strongly-concave setting.
We show that a novel zeroth-order variance-reduced descent ascent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - A Unified Single-loop Alternating Gradient Projection Algorithm for
Nonconvex-Concave and Convex-Nonconcave Minimax Problems [8.797831153231664]
We develop an efficient single-loop algorithm for solving minimax problems with theoretical guarantees under general convexity assumptions on the objective.
We show that the proposed algorithm can be used to solve both nonconvex-(strongly) concave and (strongly) convex-nonconcave minimax problems.
arXiv Detail & Related papers (2020-06-03T04:00:52Z) - Second-order Conditional Gradient Sliding [79.66739383117232]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
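As referenced in the first related-paper entry above, variance-reduced ZO estimators correct a cheap gradient estimate at the current point with estimates held at a snapshot point, in the spirit of SVRG. The sketch below is an illustrative Gaussian-smoothing variant under assumed names (`zo_two_point`, `zo_svrg_grad`), not the exact estimator of any paper listed here.

```python
import numpy as np

def zo_two_point(f, x, mu, u):
    """Two-point ZO estimate of the gradient component along u."""
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def zo_svrg_grad(f, x, x_snap, g_snap, mu=1e-4, n_samples=10, rng=None):
    """SVRG-style variance-reduced ZO gradient (illustrative sketch):
    g(x) = zo_hat(x) - zo_hat(x_snap) + g_snap, where g_snap is a
    high-accuracy estimate computed once at the snapshot x_snap."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        # Sharing the same direction u at x and x_snap correlates the
        # two estimates, so their noise largely cancels near x_snap.
        g += zo_two_point(f, x, mu, u) - zo_two_point(f, x_snap, mu, u)
    return g / n_samples + g_snap
```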