Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on
Classical and Recent Developments
- URL: http://arxiv.org/abs/2303.17192v1
- Date: Thu, 30 Mar 2023 07:04:22 GMT
- Title: Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on
Classical and Recent Developments
- Authors: Quoc Tran-Dinh
- Abstract summary: The extragradient (EG) is a well-known method to approximate solutions of saddle-point problems.
Recently, these methods have gained popularity due to new applications in machine learning and robust optimization.
We provide a unified convergence analysis for different classes of algorithms, with an emphasis on sublinear best-iterate and last-iterate convergence rates.
- Score: 12.995632804090198
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The extragradient (EG), introduced by G. M. Korpelevich in 1976, is a
well-known method to approximate solutions of saddle-point problems and their
extensions such as variational inequalities and monotone inclusions. Over the
years, numerous variants of EG have been proposed and studied in the
literature. Recently, these methods have gained popularity due to new
applications in machine learning and robust optimization. In this work, we
survey the latest developments in the EG method and its variants for
approximating solutions of nonlinear equations and inclusions, with a focus on
the monotonicity and co-hypomonotonicity settings. We provide a unified
convergence analysis for different classes of algorithms, with an emphasis on
sublinear best-iterate and last-iterate convergence rates. We also discuss
recent accelerated variants of EG based on both Halpern fixed-point iteration
and Nesterov's accelerated techniques. Our approach uses simple arguments and
basic mathematical tools to make the proofs as elementary as possible, while
maintaining generality to cover a broad range of problems.
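For reference (a standard formulation, not quoted verbatim from the survey), the classical extragradient update for a single-valued monotone operator F with step size \eta > 0, together with the Halpern-type anchoring that underlies the accelerated variants mentioned above, can be written as

  x^{k+1/2} = x^k - \eta F(x^k),
  x^{k+1}   = x^k - \eta F(x^{k+1/2}),            (extragradient step)

  x^{k+1} = \beta_k x^0 + (1 - \beta_k) T(x^k),   (Halpern fixed-point iteration)

where T denotes the underlying fixed-point map (for instance, the extragradient or forward-backward operator), x^0 is the anchor point, and \beta_k = 1/(k+2) is a common choice of anchoring weight; the survey gives the precise parameter conditions under which such schemes attain sublinear best-iterate and last-iterate rates.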
Related papers
- Revisiting Extragradient-Type Methods -- Part 1: Generalizations and Sublinear Convergence Rates [6.78476672849813]
This paper presents a comprehensive analysis of the well-known extragradient (EG) method for solving both equations and inclusions.
We analyze both sublinear "best-iterate" and "last-iterate" convergence rates for the entire class of algorithms.
We extend our EG framework above to "monotone" inclusions, introducing a new class of algorithms and its corresponding convergence results.
arXiv Detail & Related papers (2024-09-25T12:14:05Z) - Multiple Greedy Quasi-Newton Methods for Saddle Point Problems [0.0]
We introduce the Multiple Greedy Quasi-Newton SR1 (MGSR1-SP) method to solve saddle point problems.
We show that our method significantly improves both stability and efficiency.
Results affirm the performance of MGSR1-SP across a broad spectrum of machine learning applications.
arXiv Detail & Related papers (2024-08-01T02:40:37Z) - High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise [96.80184504268593]
Gradient clipping is one of the key algorithmic ingredients used to derive good high-probability guarantees.
However, clipping can spoil the convergence of popular methods for composite and distributed optimization.
arXiv Detail & Related papers (2023-10-03T07:49:17Z) - SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum
Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems, linear convergence to a solution can be achieved using the proposed SARAH-based variance-reduced method.
arXiv Detail & Related papers (2022-10-12T08:04:48Z) - An Adaptive Incremental Gradient Method With Support for Non-Euclidean
Norms [19.41328109094503]
We propose and analyze several novel adaptive variants of the popular SAGA algorithm.
We establish convergence guarantees for these variants under general settings.
We improve the analysis of SAGA to support non-Euclidean norms.
arXiv Detail & Related papers (2022-04-28T09:43:07Z) - Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient
Methods [73.35353358543507]
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms for solving min-max optimization and variational inequality problems (VIPs).
In this paper, we propose a unified convergence analysis that covers a large variety of descent-ascent methods.
We develop several new variants of SGDA, such as a new variance-reduced method (L-SVRGDA), new distributed methods with compression (QSGDA, DIANA-SGDA, VR-DIANA-SGDA), and a new method with coordinate randomization (SEGA-SGDA); a generic descent-ascent step is sketched after this list.
arXiv Detail & Related papers (2022-02-15T09:17:39Z) - Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution
Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
arXiv Detail & Related papers (2021-12-27T18:31:34Z) - A Discrete Variational Derivation of Accelerated Methods in Optimization [68.8204255655161]
We introduce variational integrators which allow us to derive different methods for optimization.
We derive two families of optimization methods in one-to-one correspondence.
The preservation of symplecticity of autonomous systems occurs here solely on the fibers.
arXiv Detail & Related papers (2021-06-04T20:21:53Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
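As a generic point of reference for the stochastic gradient descent-ascent entry above (a minimal sketch of the basic scheme, not of the specific variants L-SVRGDA, QSGDA, DIANA-SGDA, VR-DIANA-SGDA, or SEGA-SGDA), one simultaneous SGDA step for a min-max problem \min_x \max_y \mathbb{E}_{\xi}[f(x, y; \xi)] with step size \gamma and a freshly sampled \xi^k reads

  x^{k+1} = x^k - \gamma \nabla_x f(x^k, y^k; \xi^k),
  y^{k+1} = y^k + \gamma \nabla_y f(x^k, y^k; \xi^k).

Roughly speaking, the variants listed above keep this descent-ascent structure and change how the stochastic gradient estimator is built (variance reduction, compression, or coordinate randomization).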