Tight Last-Iterate Convergence of the Extragradient Method for
Constrained Monotone Variational Inequalities
- URL: http://arxiv.org/abs/2204.09228v1
- Date: Wed, 20 Apr 2022 05:12:11 GMT
- Title: Tight Last-Iterate Convergence of the Extragradient Method for
Constrained Monotone Variational Inequalities
- Authors: Yang Cai, Argyris Oikonomou, Weiqiang Zheng
- Abstract summary: We show a tight $O\left(\frac{1}{\sqrt{T}}\right)$ last-iterate convergence rate for the extragradient method on monotone and Lipschitz variational inequalities with constraints.
We develop a new approach that combines the power of sum-of-squares programming with the low dimensionality of the update rule of the extragradient method.
- Score: 4.6193503399184275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The monotone variational inequality is a central problem in mathematical
programming that unifies and generalizes many important settings such as smooth
convex optimization, two-player zero-sum games, convex-concave saddle point
problems, etc. The extragradient method by Korpelevich [1976] is one of the
most popular methods for solving monotone variational inequalities. Despite its
long history and intensive attention from the optimization and machine learning
community, the following major problem remains open. What is the last-iterate
convergence rate of the extragradient method for monotone and Lipschitz
variational inequalities with constraints? We resolve this open problem by
showing a tight $O\left(\frac{1}{\sqrt{T}}\right)$ last-iterate convergence
rate for arbitrary convex feasible sets, which matches the lower bound by
Golowich et al. [2020]. Our rate is measured in terms of the standard gap
function. The technical core of our result is the monotonicity of a new
performance measure -- the tangent residual, which can be viewed as an
adaptation of the norm of the operator that takes the local constraints into
account. To establish the monotonicity, we develop a new approach that combines
the power of the sum-of-squares programming with the low dimensionality of the
update rule of the extragradient method. We believe our approach has many
additional applications in the analysis of iterative methods.
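For concreteness, the update rule whose last iterate is analyzed here is the projected extragradient method: an extrapolation step followed by an update step, each projected back onto the feasible set. Below is a minimal NumPy sketch of that update on a toy constrained problem; the operator, the box-shaped feasible set, the step size, and the iteration count are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n, standing in for a general convex feasible set."""
    return np.clip(z, lo, hi)

def extragradient(F, z0, eta, T):
    """Projected extragradient (Korpelevich): extrapolate with F(z_t), then update with F(z_{t+1/2})."""
    z = z0.copy()
    for _ in range(T):
        z_half = project_box(z - eta * F(z))      # extrapolation step
        z = project_box(z - eta * F(z_half))      # update step
    return z                                      # last iterate

# Toy monotone example (an assumption, not from the paper): the saddle-point problem
# min_x max_y x * y has VI operator F(x, y) = (y, -x), which is monotone and 1-Lipschitz.
F = lambda z: np.array([z[1], -z[0]])
z_T = extragradient(F, z0=np.array([0.8, -0.5]), eta=0.2, T=200)
print(z_T)  # should be close to the unique solution (0, 0)
```

The paper's tight $O\left(\frac{1}{\sqrt{T}}\right)$ guarantee concerns exactly this last iterate $z_T$, measured by the standard gap function.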
Related papers
- The inexact power augmented Lagrangian method for constrained nonconvex optimization [44.516958213972885]
This work introduces an unconventional augmented Lagrangian method in which the augmenting term is a Euclidean norm raised to a power.
We show that using lower powers for the augmenting term leads to a faster rate, albeit with a slower decrease in the residual.
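As a hedged illustration of the construction (the precise formulation in the paper may differ), a power-augmented Lagrangian for the equality-constrained problem $\min_x f(x)$ subject to $c(x) = 0$ can be written as

$$\mathcal{L}_{\rho}^{\nu}(x, y) \,=\, f(x) + \langle y, c(x) \rangle + \frac{\rho}{\nu}\, \lVert c(x) \rVert_2^{\nu},$$

where $\nu = 2$ recovers the classical quadratic augmenting term and lower powers of $\nu$ correspond to the regime discussed above.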
arXiv Detail & Related papers (2024-10-26T11:31:56Z) - Strictly Low Rank Constraint Optimization -- An Asymptotically
$\mathcal{O}(\frac{1}{t^2})$ Method [5.770309971945476]
We consider a class of non-convex and non-smooth problems with rank regularization to promote sparsity in the optimal solution.
We show that our algorithms achieve a convergence rate of $\mathcal{O}(\frac{1}{t^2})$, which matches Nesterov's optimal convergence rate for first-order methods on smooth convex problems.
arXiv Detail & Related papers (2023-07-04T16:55:41Z) - First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order gradient methods for stochastic optimization and variational inequalities.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O\left(\frac{1}{\sqrt{K}}\right)$ when the operator is $L$-Lipschitz and monotone.
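For reference, the standard gap function used to state such last-iterate rates (here and in the main paper above) is commonly defined, for a monotone operator $F$ and a convex feasible set $\mathcal{X}$, as

$$\mathrm{Gap}_{\mathcal{X}}(z) \,=\, \max_{x \in \mathcal{X}} \, \langle F(z),\, z - x \rangle,$$

which is nonnegative for every feasible $z$ and vanishes exactly at solutions of the variational inequality.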
arXiv Detail & Related papers (2022-10-27T17:59:09Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with
Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, and propose variations of the method that yield favorable convergence.
We prove that, when augmented with iteration averaging, SEG converges to the Nash equilibrium, and that this convergence is provably accelerated by incorporating a scheduled restarting procedure.
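As a hedged sketch of the ingredients named above (constant step size, iterate averaging, and scheduled restarts), and not the authors' exact algorithm, the snippet below runs stochastic extragradient on a toy bilinear game; the game matrix, noise model, step size, and restart schedule are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bilinear game min_x max_y x^T A y; its VI operator is F(x, y) = (A y, -A^T x).
A = np.array([[1.0, 0.5], [-0.3, 1.2]])

def sample_F(z, sigma=0.1):
    """One unbiased, noisy evaluation of the bilinear operator (additive Gaussian noise)."""
    x, y = z[:2], z[2:]
    g = np.concatenate([A @ y, -A.T @ x])
    return g + sigma * rng.standard_normal(g.shape)

def seg_restarted_averaging(z0, eta=0.1, T=2000, restart_every=200):
    """Constant-step stochastic extragradient with iterate averaging and scheduled restarts."""
    z = z0.copy()
    z_sum, count = np.zeros_like(z), 0
    for t in range(1, T + 1):
        z_half = z - eta * sample_F(z)      # extrapolation step with a fresh sample
        z = z - eta * sample_F(z_half)      # update step with another fresh sample
        z_sum, count = z_sum + z, count + 1
        if t % restart_every == 0:          # scheduled restart: jump to the running average
            z = z_sum / count
            z_sum, count = np.zeros_like(z), 0
    return z

print(seg_restarted_averaging(np.array([1.0, -1.0, 0.5, 0.8])))  # should land near the equilibrium at the origin
```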
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - Variance-Reduced Splitting Schemes for Monotone Stochastic Generalized
Equations [0.0]
We consider monotone inclusion problems where the operators may be expectation-valued.
A direct application of splitting schemes is complicated by the need to resolve problems with expectation-valued maps at each step.
We propose an avenue for addressing uncertainty in the mapping: a variance-reduced modified forward-backward splitting scheme.
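For orientation only, a generic modified forward-backward (Tseng-style) iteration with mini-batch averaging as the variance-reduction device might look as follows; the toy operator, feasible set, batch schedule, and step size are assumptions, not the authors' scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_ball(z, radius=1.0):
    """Projection onto the Euclidean ball: the resolvent of the ball's normal-cone operator."""
    n = np.linalg.norm(z)
    return z if n <= radius else z * (radius / n)

def sample_map(z, sigma=0.2):
    """One noisy sample of a monotone, expectation-valued map (toy: a rotation plus 0.1 * identity)."""
    x, y = z
    g = np.array([y + 0.1 * x, -x + 0.1 * y])
    return g + sigma * rng.standard_normal(2)

def vr_modified_fb(z0, gamma=0.2, T=200, batch0=4):
    """Modified forward-backward splitting with growing mini-batches as variance reduction."""
    z = z0.copy()
    for t in range(1, T + 1):
        batch = batch0 * t                                           # larger batches shrink the sample variance
        Fz = np.mean([sample_map(z) for _ in range(batch)], axis=0)
        y = proj_ball(z - gamma * Fz)                                # forward step, then backward (projection) step
        Fy = np.mean([sample_map(y) for _ in range(batch)], axis=0)
        z = y - gamma * (Fy - Fz)                                    # correction step of the modified scheme
    return z

print(vr_modified_fb(np.array([0.9, -0.7])))  # should approach the solution at the origin
```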
arXiv Detail & Related papers (2020-08-26T02:33:27Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
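As a generic illustration of a conditional gradient (Frank-Wolfe) step built on a linear minimization oracle, and not the authors' full method (which additionally uses variance reduction and smoothing), consider minimizing a smooth convex function over the probability simplex; the objective and oracle below are illustrative assumptions.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear minimization oracle over the probability simplex: returns the minimizing vertex."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def frank_wolfe(grad_f, x0, T=200):
    """Basic conditional gradient method: move toward the oracle's vertex with step 2 / (t + 2)."""
    x = x0.copy()
    for t in range(T):
        s = lmo_simplex(grad_f(x))      # projection-free: only a linear oracle over the constraints is needed
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy problem (an assumption): min over the simplex of 0.5 * ||x - b||^2 for a target b in the simplex.
b = np.array([0.7, 0.2, 0.1])
x_T = frank_wolfe(lambda x: x - b, x0=np.ones(3) / 3)
print(x_T)  # should be close to b, which already lies in the simplex
```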
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.