Accelerated Single-Call Methods for Constrained Min-Max Optimization
- URL: http://arxiv.org/abs/2210.03096v2
- Date: Sun, 14 May 2023 20:11:20 GMT
- Title: Accelerated Single-Call Methods for Constrained Min-Max Optimization
- Authors: Yang Cai, Weiqiang Zheng
- Abstract summary: Existing methods either require two gradient calls or two projections in each iteration, which may be costly in some applications.
In this paper, we first show that a variant of the Optimistic Gradient (OG) method, a single-call single-projection algorithm, converges for a rich set of non-convex non-concave min-max problems, including those satisfying the weak Minty variational inequality and negative comonotonicity.
Our convergence rates hold for standard measures such as the tangent residual and the natural residual.
- Score: 5.266784779001398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study first-order methods for constrained min-max optimization. Existing
methods either require two gradient calls or two projections in each iteration,
which may be costly in some applications. In this paper, we first show that a
variant of the Optimistic Gradient (OG) method, a single-call single-projection
algorithm, has $O(\frac{1}{\sqrt{T}})$ best-iterate convergence rate for
inclusion problems with operators that satisfy the weak Minty variational
inequality (MVI). Our second result is the first single-call single-projection
algorithm -- the Accelerated Reflected Gradient (ARG) method that achieves the
optimal $O(\frac{1}{T})$ last-iterate convergence rate for inclusion problems
that satisfy negative comonotonicity. Both the weak MVI and negative
comonotonicity are well-studied assumptions and capture a rich set of
non-convex non-concave min-max optimization problems. Finally, we show that the
Reflected Gradient (RG) method, another single-call single-projection
algorithm, has $O(\frac{1}{\sqrt{T}})$ last-iterate convergence rate for
constrained convex-concave min-max optimization, answering an open problem of
[Hsieh et al., 2019]. Our convergence rates hold for standard measures such as
the tangent residual and the natural residual.
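To make the abstract concrete, the sketch below writes out the textbook single-call single-projection updates it refers to: the Optimistic Gradient (OG) update $z_{t+1} = \Pi_{\mathcal{Z}}(z_t - 2\eta F(z_t) + \eta F(z_{t-1}))$ and the Reflected Gradient (RG) update $z_{t+1} = \Pi_{\mathcal{Z}}(z_t - \eta F(2z_t - z_{t-1}))$, each spending one operator call and one projection per iteration, with the natural residual as the convergence measure. This is a minimal NumPy illustration on an assumed toy bilinear problem with a box constraint; the step size, helper names, and toy operator are illustrative choices, and the paper's accelerated ARG variant and the tangent residual are not reproduced here.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto a box (stand-in for a general projection oracle)."""
    return np.clip(z, lo, hi)

def F(z):
    """Operator of a toy constrained bilinear min-max problem min_x max_y x*y:
    F(x, y) = (y, -x). Used only to illustrate the updates."""
    x, y = z
    return np.array([y, -x])

def optimistic_gradient(z0, eta=0.1, T=1000):
    """Optimistic Gradient (OG): one operator call and one projection per iteration."""
    z_prev, z = z0.copy(), z0.copy()
    g_prev = F(z_prev)
    for _ in range(T):
        g = F(z)                                               # single operator call
        z_next = project_box(z - 2 * eta * g + eta * g_prev)   # single projection
        z_prev, z, g_prev = z, z_next, g
    return z

def reflected_gradient(z0, eta=0.1, T=1000):
    """Reflected Gradient (RG): evaluates F at the reflected point 2*z_t - z_{t-1}."""
    z_prev, z = z0.copy(), z0.copy()
    for _ in range(T):
        z_next = project_box(z - eta * F(2 * z - z_prev))      # one call, one projection
        z_prev, z = z, z_next
    return z

def natural_residual(z, eta=0.1):
    """Natural residual ||z - Proj(z - eta * F(z))|| / eta; zero exactly at a solution."""
    return np.linalg.norm(z - project_box(z - eta * F(z))) / eta

if __name__ == "__main__":
    z0 = np.array([0.8, -0.6])
    for name, method in [("OG", optimistic_gradient), ("RG", reflected_gradient)]:
        z = method(z0)
        print(f"{name}: iterate={z}, natural residual={natural_residual(z):.2e}")
```

On this monotone toy instance both methods drive the natural residual toward zero; the paper's rates concern the harder weak MVI and negative comonotone regimes.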
Related papers
- Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Strictly Low Rank Constraint Optimization -- An Asymptotically
$\mathcal{O}(\frac{1}{t^2})$ Method [5.770309971945476]
We propose a class of non-convex and non-smooth problems with rank regularization to promote sparsity in the optimal solution.
We show that our algorithms are able to achieve a convergence rate of $O(\frac{1}{t^2})$, which is exactly the same as Nesterov's optimal convergence rate for first-order methods on smooth convex problems.
arXiv Detail & Related papers (2023-07-04T16:55:41Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [116.89941263390769]
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}\; F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
We present an accelerated gradient-extragradient (AG-EG) descent-ascent algorithm that combines extragradient steps with acceleration (a basic extragradient sketch is given after this list).
arXiv Detail & Related papers (2022-06-17T06:10:20Z) - Accelerated Algorithms for Constrained Nonconvex-Nonconcave Min-Max Optimization and Comonotone Inclusion [9.551565016483833]
We study comonotone min-max optimization, a structured class of nonconvex-nonconcave min-max optimization problems.
In our first contribution, we extend the Extra Anchored Gradient (EAG) algorithm to constrained comonotone min-max optimization.
In our second contribution, we extend the Fast Extra Gradient (FEG) algorithm to constrained comonotone min-max optimization.
arXiv Detail & Related papers (2022-06-10T17:44:06Z) - Accelerated SGD for Non-Strongly-Convex Least Squares [14.010916616909743]
We consider stochastic approximation for the least squares regression problem in the non-strongly convex setting.
We present the first practical algorithm that achieves the optimal prediction error rates in terms of dependence on the noise of the problem.
arXiv Detail & Related papers (2022-03-03T14:39:33Z) - Stochastic regularized majorization-minimization with weakly convex and
multi-convex surrogates [0.0]
We show that the first-order optimality gap of the proposed algorithm decays at an explicit rate in terms of the expected loss under a possibly dependent data setting.
We obtain the first convergence rate bounds for various methods under this general dependent data setting.
arXiv Detail & Related papers (2022-01-05T15:17:35Z) - A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm
for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- the Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA) method -- for tackling bilevel optimization problems.
MSTSA allows us to control the error in the iterations caused by inexact solutions to the lower-level subproblem.
arXiv Detail & Related papers (2021-02-15T07:10:33Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we consider gradient-free (zeroth-order) minimax optimization problems.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
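For contrast with the single-call methods in the main abstract, here is a minimal sketch of the classical projected extragradient update that the extragradient-based entries above (AG-EG, EAG, FEG) build on: it spends two operator calls and two projections per iteration, which is exactly the per-iteration cost the single-call single-projection methods avoid. The toy operator, step size, and box constraint are the same illustrative assumptions as before; the anchoring and acceleration ingredients of EAG, FEG, and AG-EG are not shown.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto a box (illustrative constraint set)."""
    return np.clip(z, lo, hi)

def F(z):
    """Toy monotone operator for min_x max_y x*y: F(x, y) = (y, -x)."""
    x, y = z
    return np.array([y, -x])

def extragradient(z0, eta=0.1, T=1000):
    """Classical projected extragradient: two operator calls and two projections
    per iteration, unlike the single-call OG/RG updates sketched earlier."""
    z = z0.copy()
    for _ in range(T):
        z_half = project_box(z - eta * F(z))       # extrapolation: call 1, projection 1
        z = project_box(z - eta * F(z_half))       # update:        call 2, projection 2
    return z

if __name__ == "__main__":
    print(extragradient(np.array([0.8, -0.6])))    # approaches the saddle point (0, 0)
```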