Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class
of Nonconvex-Nonconcave Minimax Problems
- URL: http://arxiv.org/abs/2211.13668v2
- Date: Sat, 27 May 2023 02:50:40 GMT
- Title: Zeroth-Order Alternating Gradient Descent Ascent Algorithms for a Class
of Nonconvex-Nonconcave Minimax Problems
- Authors: Zi Xu, Zi-Qi Wang, Jun-Lin Wang, Yu-Hong Dai
- Abstract summary: We propose a zeroth-order alternating gradient descent ascent (ZO-AGDA) algorithm for solving the NC-PL minimax problem.
To the best of our knowledge, they are the first two zeroth-order algorithms with a complexity guarantee for solving NC-PL minimax problems.
- Score: 9.792800635198935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider a class of nonconvex-nonconcave minimax problems,
i.e., NC-PL minimax problems, whose objective functions satisfy the
Polyak-Łojasiewicz (PL) condition with respect to the inner variable. We propose a
zeroth-order alternating gradient descent ascent (ZO-AGDA) algorithm and a
zeroth-order variance reduced alternating gradient descent ascent (ZO-VRAGDA)
algorithm for solving NC-PL minimax problems under the deterministic and the
stochastic setting, respectively. The total number of function value queries
needed by the ZO-AGDA and ZO-VRAGDA algorithms to obtain an
$\varepsilon$-stationary point of an NC-PL minimax problem is upper bounded by
$\mathcal{O}(\varepsilon^{-2})$ and $\mathcal{O}(\varepsilon^{-3})$,
respectively. To the best of our knowledge, these are the first two zeroth-order
algorithms with an iteration complexity guarantee for solving NC-PL minimax
problems.
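
To make the query model concrete, here is a minimal sketch of an alternating descent ascent loop driven purely by function-value queries, using a standard two-point Gaussian-smoothing gradient estimator. It is an illustration under assumed step sizes, smoothing radius, direction count, and a toy NC-PL objective, not the paper's exact ZO-AGDA algorithm or tuned constants.

```python
import numpy as np

def zo_grad(f, z, mu, rng, n_dirs=20):
    """Two-point Gaussian-smoothing gradient estimate of f at z,
    averaged over n_dirs random directions u:
        g_u = (f(z + mu*u) - f(z - mu*u)) / (2*mu) * u
    Only function values of f are queried."""
    g = np.zeros_like(z)
    for _ in range(n_dirs):
        u = rng.standard_normal(z.size)
        g += (f(z + mu * u) - f(z - mu * u)) / (2.0 * mu) * u
    return g / n_dirs

def zo_agda(f, x0, y0, eta_x=1e-3, eta_y=1e-1, mu=1e-4, iters=300, seed=0):
    """One zeroth-order descent step on x followed by one zeroth-order
    ascent step on y per iteration (alternating, not simultaneous)."""
    rng = np.random.default_rng(seed)
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    for _ in range(iters):
        x = x - eta_x * zo_grad(lambda xx: f(xx, y), x, mu, rng)  # min player
        y = y + eta_y * zo_grad(lambda yy: f(x, yy), y, mu, rng)  # max player
    return x, y

# Toy NC-PL instance: nonconvex in x; for each fixed x the inner function
# is strongly concave in y, hence satisfies the PL condition
#   ||grad_y f(x, y)||^2 >= 2*mu_pl * (max_y f(x, y) - f(x, y)).
f = lambda x, y: np.sin(x @ x) + 2.0 * (x @ y) - y @ y
x_out, y_out = zo_agda(f, np.ones(3), np.zeros(3))
print("approximate stationary point:", x_out)
```

The inner variable is stepped with a larger rate than the outer one, mirroring the usual two-timescale choice for descent ascent schemes; the PL condition on the inner problem is what lets an inexact inner maximizer still certify stationarity.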
Related papers
- Gradient Norm Regularization Second-Order Algorithms for Solving Nonconvex-Strongly Concave Minimax Problems [2.3721580767877257]
We propose an algorithm for solving nonconvex-strongly concave minimax problems.
We show that the proposed algorithm achieves an $\epsilon$-stationary point, and the gradient norm at the output is proved to be upper bounded.
arXiv Detail & Related papers (2024-11-24T09:46:36Z) - Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) estimators for nonconvex composite gradient problems.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{dn^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - Zeroth-Order primal-dual Alternating Projection Gradient Algorithms for
Nonconvex Minimax Problems with Coupled linear Constraints [3.898056433020865]
We propose two zeroth-order primal-dual alternating projection gradient algorithms for nonconvex minimax problems with coupled linear constraints.
To the best of our knowledge, they are the first two zeroth-order algorithms with iteration complexity guarantees for this class of problems.
arXiv Detail & Related papers (2024-01-26T11:22:13Z) - Solving a Class of Non-Convex Minimax Optimization in Federated Learning [84.98927714326908]
Minimax problems arise throughout machine learning applications, ranging from model training to large-scale distributed learning.
We propose a class of algorithms (i.e., FedSGDA+) for nonconvex minimax problems in federated learning that reduces the complexity to $O(\varepsilon^{-6})$.
We prove that FedSGDA-M has the best-known sample complexity of $O(\kappa^{3}\varepsilon^{-3})$ and the best-known communication complexity of $O(\kappa^{2}\varepsilon^{-2})$.
arXiv Detail & Related papers (2023-10-05T15:48:41Z) - Decentralized Riemannian Algorithm for Nonconvex Minimax Problems [82.50374560598493]
Minimax algorithms have been developed to solve many problems, such as training neural networks.
In this paper, we propose two types of decentralized Riemannian minimax algorithms.
For the stochastic setting, we propose DRSGDA and prove a gradient complexity bound for our method.
arXiv Detail & Related papers (2023-02-08T01:42:45Z) - Primal Dual Alternating Proximal Gradient Algorithms for Nonsmooth Nonconvex Minimax Problems with Coupled Linear Constraints [4.70696854954921]
Nonsmooth minimax problems have attracted wide attention in machine learning, signal processing, and many other fields in recent years.
We propose a primal-dual alternating proximal gradient algorithm for solving nonsmooth nonconvex-strongly concave minimax problems with coupled linear constraints.
arXiv Detail & Related papers (2022-12-09T05:32:30Z) - Adaptive Federated Minimax Optimization with Lower Complexities [82.51223883622552]
We propose an efficient adaptive federated minimax optimization algorithm (i.e., AdaFGDA) to solve these minimax problems.
It builds on momentum-based variance-reduction and local-SGD techniques, and it flexibly incorporates various adaptive learning rates.
arXiv Detail & Related papers (2022-11-14T12:32:18Z) - Faster Perturbed Stochastic Gradient Methods for Finding Local Minima [92.99933928528797]
We propose Pullback, a faster perturbed stochastic gradient framework for finding local minima.
We show that Pullback with stochastic gradient estimators such as SARAH/SPIDER and STORM can find $(\epsilon, \epsilon_{H})$-approximate local minima within $\tilde{O}(\epsilon^{-3} + \epsilon_{H}^{-6})$ stochastic gradient evaluations.
The core idea of our framework is a step-size "pullback" scheme to control the average movement of the gradient evaluations.
arXiv Detail & Related papers (2021-10-25T07:20:05Z) - Derivative-free Alternating Projection Algorithms for General
Nonconvex-Concave Minimax Problems [9.173866646584031]
In this paper, we propose a derivative-free alternating projection algorithm for nonsmooth zeroth-order minimax problems.
We show that it can be used to solve general nonconvex-concave minimax problems.
arXiv Detail & Related papers (2021-08-01T15:23:49Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we study the nonconvex-strongly-concave setting of gradient-free minimax optimization.
We show that a novel zeroth-order variance-reduced descent ascent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
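
Several of the stochastic methods above (ZO-VRAGDA in the main paper, the lightweight ZO proximal gradient estimators, and the gradient-free variance-reduced method) share one ingredient: a recursive, SPIDER-style variance-reduced zeroth-order estimator. The sketch below shows only the recursive correction step; the sampling oracle f_sample, the batch size, and the sample-index range are illustrative assumptions, not any single paper's specification.

```python
import numpy as np

def zo_two_point(f, z, mu, u):
    """Two-point zeroth-order estimate of grad f(z) along direction u."""
    return (f(z + mu * u) - f(z - mu * u)) / (2.0 * mu) * u

def zo_spider_step(f_sample, z_new, z_old, v_old, mu, batch, rng):
    """Recursive variance-reduced update
        v_new = v_old + mean_i [ g_i(z_new) - g_i(z_old) ],
    where each g_i is a two-point ZO estimate on one stochastic sample
    (hypothetical oracle f_sample(z, i)). The correction term is small
    when consecutive iterates are close, which is what lowers the
    variance compared with re-estimating the gradient from scratch;
    a full large-batch refresh is typically run every q iterations."""
    delta = np.zeros_like(z_new)
    for _ in range(batch):
        u = rng.standard_normal(z_new.size)
        i = rng.integers(1000)  # assumed range of stochastic sample indices
        g = lambda z, i=i: f_sample(z, i)
        delta += zo_two_point(g, z_new, mu, u) - zo_two_point(g, z_old, mu, u)
    return v_old + delta / batch
```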