Extragradient Method: $O(1/K)$ Last-Iterate Convergence for Monotone
Variational Inequalities and Connections With Cocoercivity
- URL: http://arxiv.org/abs/2110.04261v1
- Date: Fri, 8 Oct 2021 17:16:12 GMT
- Title: Extragradient Method: $O(1/K)$ Last-Iterate Convergence for Monotone
Variational Inequalities and Connections With Cocoercivity
- Authors: Eduard Gorbunov, Nicolas Loizou, Gauthier Gidel
- Abstract summary: The extragradient method (EG) of Korpelevich [1976] is one of the most popular methods for solving saddle point and variational inequality problems (VIPs).
Despite its long history and significant attention in the optimization community, important open questions about the convergence of EG remain.
- Score: 21.22292220954549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The extragradient method (EG) of Korpelevich [1976] is one of the most
popular methods for solving saddle point and variational inequality problems (VIPs).
Despite its long history and significant attention in the optimization
community, there remain important open questions about convergence of EG. In
this paper, we resolve one such question and derive the first last-iterate
$O(1/K)$ convergence rate for EG for monotone and Lipschitz VIPs without any
additional assumptions on the operator. The rate is given in terms of reducing
the squared norm of the operator. Moreover, we establish several results on the
(non-)cocoercivity of the update operators of EG, Optimistic Gradient Method,
and Hamiltonian Gradient Method, when the original operator is monotone and
Lipschitz.
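To make the update concrete, here is a minimal sketch of the extragradient step for an unconstrained monotone VIP (find $x^*$ with $F(x^*) = 0$), tracking the squared operator norm $\|F(x_K)\|^2$ in which the last-iterate rate is stated. The operator, step size, and iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def extragradient(F, x0, gamma, K):
    """Extragradient (EG) iterations for an unconstrained monotone VIP:
    find x* with F(x*) = 0. Returns the last iterate x_K and ||F(x_K)||^2,
    the quantity in which the O(1/K) last-iterate rate is stated."""
    x = np.asarray(x0, dtype=float)
    for _ in range(K):
        x_half = x - gamma * F(x)      # extrapolation step
        x = x - gamma * F(x_half)      # update step, using the extrapolated point
    return x, float(np.dot(F(x), F(x)))

# Illustrative example (assumption): bilinear saddle point min_u max_v u*v,
# whose operator F(u, v) = (v, -u) is monotone and 1-Lipschitz.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])          # F(x) = A x
x_K, sq_norm = extragradient(lambda x: A @ x,
                             x0=np.array([1.0, 1.0]), gamma=0.3, K=1000)
print(x_K, sq_norm)   # the squared operator norm shrinks as K grows
```

On this bilinear example, plain gradient descent-ascent (dropping the extrapolation step) diverges, which is what the extra operator evaluation in EG is there to fix.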
Related papers
- Extragradient Method for $(L_0, L_1)$-Lipschitz Root-finding Problems [15.649189456012811]
The extragradient method (EG) has become a cornerstone technique for solving min-max optimization, root-finding problems, and variational inequalities (VIs). We propose a novel step size strategy for EG to solve root-finding problems and establish sublinear convergence rates for monotone operators and linear convergence rates for strongly monotone operators.
arXiv Detail & Related papers (2025-10-25T19:43:01Z) - Convergence Rate Analysis of LION [54.28350823319057]
We show that LION converges at a rate of $\mathcal{O}(\sqrt{d}\,K^{-1/4})$ measured by the gradient Karush-Kuhn-Tucker (KKT) residual.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
arXiv Detail & Related papers (2024-11-12T11:30:53Z) - Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - First-order methods for Stochastic Variational Inequality problems with Function Constraints [3.922842284927803]
This paper presents novel first-order methods for the function-constrained Variational Inequality (FCVI) problem in smooth or nonsmooth settings.
All our algorithms easily extend to saddle point problems with function constraints that couple the primal and dual variables while maintaining the same complexity results.
arXiv Detail & Related papers (2023-04-10T17:07:55Z) - Extragradient-Type Methods with $\mathcal{O} (1/k)$ Last-Iterate
Convergence Rates for Co-Hypomonotone Inclusions [8.0153031008486]
We develop two "Nesterov's accelerated" variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion.
Our results can be viewed as alternatives of a recent class of Halpern-type methods for root-finding problems.
arXiv Detail & Related papers (2023-02-08T14:47:34Z) - Min-Max Optimization Made Simple: Approximating the Proximal Point
Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated up to any prescribed accuracy by a contraction map (a minimal fixed-point sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-01-10T12:18:47Z) - A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O(1/\sqrt{K})$ when the operator is $L$-Lipschitz and monotone.
arXiv Detail & Related papers (2022-10-27T17:59:09Z) - Gradient-Free Methods for Deterministic and Stochastic Nonsmooth
Nonconvex Optimization [94.19177623349947]
Nonsmooth nonconvex optimization problems emerge in machine learning and business decision making.
Two core challenges impede the development of efficient methods with finite-time convergence guarantees.
Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results.
arXiv Detail & Related papers (2022-09-12T06:53:24Z) - Solving Constrained Variational Inequalities via an Interior Point
Method [88.39091990656107]
We develop an interior-point approach to solve constrained variational inequality (cVI) problems.
We provide convergence guarantees for ACVI in two general classes of problems.
Unlike previous work in this setting, ACVI provides a means to solve cVIs when the constraints are nontrivial.
arXiv Detail & Related papers (2022-06-21T17:55:13Z) - Tight Last-Iterate Convergence of the Extragradient Method for
Constrained Monotone Variational Inequalities [4.6193503399184275]
We show the last-iterate convergence rate of the extragradient method for monotone and Lipschitz variational inequalities with constraints.
We develop a new approach that combines the power of the sum-of-squares programming with the low dimensionality of the update rule of the extragradient method.
arXiv Detail & Related papers (2022-04-20T05:12:11Z) - Gradient-Free Methods for Saddle-Point Problem [125.99533416395765]
We generalize the approach of Gasnikov et al. (2017), which allows one to solve (stochastic) convex optimization problems with an inexact gradient-free oracle.
Our approach reduces the required number of oracle calls by a factor of $\frac{n}{\log n}$.
In the second part of the paper, we analyze the case when such an assumption cannot be made and propose a general approach for modifying the method to solve this problem.
arXiv Detail & Related papers (2020-05-12T16:44:27Z)