Smooth Monotone Stochastic Variational Inequalities and Saddle Point
Problems: A Survey
- URL: http://arxiv.org/abs/2208.13592v3
- Date: Sun, 2 Apr 2023 12:35:02 GMT
- Title: Smooth Monotone Stochastic Variational Inequalities and Saddle Point
Problems: A Survey
- Authors: Aleksandr Beznosikov, Boris Polyak, Eduard Gorbunov, Dmitry Kovalev,
Alexander Gasnikov
- Abstract summary: This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities.
To begin with, we give the deterministic foundation from which the stochastic methods eventually evolved.
Then we review methods for the general stochastic formulation, and look at the finite sum setup.
- Score: 119.11852898082967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is a survey of methods for solving smooth (strongly) monotone
stochastic variational inequalities. To begin with, we give the deterministic
foundation from which the stochastic methods eventually evolved. Then we review
methods for the general stochastic formulation, and look at the finite sum
setup. The last parts of the paper are devoted to various recent (not
necessarily stochastic) advances in algorithms for variational inequalities.
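For reference, the problem class covered by the survey can be written down compactly. The formulation and the extragradient step below are standard textbook material in our own notation, not excerpts from the paper:

```latex
% Smooth (strongly) monotone variational inequality: find z* in a closed convex set Z with
\langle F(z^{\star}),\, z - z^{\star} \rangle \ge 0 \quad \text{for all } z \in \mathcal{Z},
% where F is L-Lipschitz (smoothness) and mu-monotone,
\langle F(u) - F(v),\, u - v \rangle \ge \mu \|u - v\|^{2},
% with mu = 0 giving plain monotonicity and mu > 0 strong monotonicity.
% Convex-concave saddle-point problems min_x max_y f(x, y) fit this template via
F(x, y) = \bigl( \nabla_x f(x, y),\, -\nabla_y f(x, y) \bigr).
% The deterministic foundation mentioned in the abstract includes, e.g.,
% Korpelevich's extragradient method:
z^{k+1/2} = \operatorname{proj}_{\mathcal{Z}}\!\bigl( z^{k} - \gamma F(z^{k}) \bigr), \qquad
z^{k+1} = \operatorname{proj}_{\mathcal{Z}}\!\bigl( z^{k} - \gamma F(z^{k+1/2}) \bigr).
```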
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
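To make the update concrete, here is a minimal sketch of a stochastic proximal point step on a least-squares model, where the proximal subproblem has a closed form. The model, step size, and sampling below are illustrative assumptions, not the paper's algorithm or implementation.

```python
import numpy as np

def sppm_least_squares(A, b, gamma=0.5, iters=2000, seed=0):
    """Stochastic proximal point sketch for f_i(x) = 0.5 * (a_i @ x - b_i)**2.
    Each step solves x_{k+1} = argmin_x f_i(x) + ||x - x_k||^2 / (2*gamma)
    exactly, which is possible here because each f_i is a 1-D quadratic."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)                              # sample one component
        a, b_i = A[i], b[i]
        resid = (a @ x - b_i) / (1.0 + gamma * (a @ a))  # closed-form prox
        x = x - gamma * resid * a
    return x

# Tiny usage example on a synthetic consistent system.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
print(np.linalg.norm(sppm_least_squares(A, A @ x_true) - x_true))
```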
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods for stochastic optimization and variational inequalities under Markovian noise, as arises, e.g., in Markov chain Monte Carlo sampling.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z)
- Asymptotic normality and optimality in nonsmooth stochastic approximation [8.805688232946471]
In their seminal work, Polyak and Juditsky showed that stochastic approximation methods for solving smooth equations enjoy a central limit theorem.
A long-standing open question in this line of work is whether similar guarantees hold for important non-smooth problems.
arXiv Detail & Related papers (2023-01-16T23:17:47Z)
- SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems it is possible to achieve linear convergence to a solution using this method.
arXiv Detail & Related papers (2022-10-12T08:04:48Z)
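As a rough illustration of the variance-reduction idea behind this entry, the sketch below applies a SARAH-style recursive estimator to a finite-sum operator and takes plain gradient-type steps with it. The operator, step size, and epoch length are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def sarah_vi_sketch(F_list, z0, gamma=0.1, epochs=20, seed=0):
    """SARAH-style recursion for a finite-sum operator F = mean_i F_i:
    restart with a full pass each epoch, then update the estimator as
    v <- F_i(z_new) - F_i(z_old) + v and step z <- z - gamma * v."""
    rng = np.random.default_rng(seed)
    n = len(F_list)
    z = z0.copy()
    for _ in range(epochs):
        v = sum(F(z) for F in F_list) / n        # full operator at the anchor
        z_prev, z = z, z - gamma * v
        for _ in range(n):
            i = rng.integers(n)                  # sample one component
            v = F_list[i](z) - F_list[i](z_prev) + v
            z_prev, z = z, z - gamma * v
    return z

# Usage on a strongly monotone finite-sum example F_i(z) = A_i @ (z - z_star).
rng = np.random.default_rng(1)
d, n = 4, 50
z_star = rng.normal(size=d)
Fs = [(lambda z, A=np.eye(d) + 0.1 * rng.normal(size=(d, d)): A @ (z - z_star))
      for _ in range(n)]
print(np.linalg.norm(sarah_vi_sketch(Fs, np.zeros(d)) - z_star))
```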
- Stochastic Variance Reduction for Variational Inequality Methods [19.061953585686986]
We propose variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions.
Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods, in both the Euclidean and Bregman setups.
arXiv Detail & Related papers (2021-02-16T18:39:16Z)
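To unpack one of the base templates named in the entry above: in the unconstrained Euclidean case, the forward-reflected-backward step of Malitsky and Tam reduces to the update below, written in our notation as a standard form rather than a quote from the paper (the extragradient step was already sketched after the abstract). Variance-reduced variants replace the exact operator F with a cheap stochastic estimator.

```latex
% Forward-reflected-backward for a monotone operator F (unconstrained case;
% with constraints or a maximal monotone part, the step is followed by a
% projection or resolvent):
z^{k+1} = z^{k} - \gamma \bigl( 2 F(z^{k}) - F(z^{k-1}) \bigr),
% i.e. one new operator evaluation per iteration, with the previous
% evaluation reused ("reflected").
```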
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)