Stochastic Variance Reduction for Variational Inequality Methods
- URL: http://arxiv.org/abs/2102.08352v1
- Date: Tue, 16 Feb 2021 18:39:16 GMT
- Title: Stochastic Variance Reduction for Variational Inequality Methods
- Authors: Ahmet Alacaoglu, Yura Malitsky
- Abstract summary: We propose variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions.
Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods in both Euclidean and Bregman setups.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose stochastic variance reduced algorithms for solving convex-concave
saddle point problems, monotone variational inequalities, and monotone
inclusions. Our framework applies to extragradient, forward-backward-forward,
and forward-reflected-backward methods both in Euclidean and Bregman setups.
All proposed methods converge in exactly the same setting as their
deterministic counterparts and they either match or improve the best-known
complexities for solving structured min-max problems. Our results reinforce the
correspondence between variance reduction in variational inequalities and
minimization. We also illustrate the improvements of our approach with
numerical evaluations on matrix games.
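The variance-reduction idea described in the abstract can be illustrated with a minimal sketch (not the paper's exact algorithm): an SVRG-style estimator plugged into the extragradient update for an unconstrained bilinear game, with the finite-sum operator built from randomly generated matrices. All data, dimensions, and step sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 5
# Finite-sum bilinear game: min_x max_y x^T A y with A = (1/n) sum_i A_i.
A_parts = rng.standard_normal((n, d, d))
A = A_parts.mean(axis=0)

def F(z, M):
    """Monotone operator of the bilinear game for matrix M."""
    x, y = z[:d], z[d:]
    return np.concatenate([M @ y, -M.T @ x])

def vr_extragradient(z0, steps=2000, epoch=10, eta=0.05):
    z = z0.copy()
    for k in range(steps):
        if k % epoch == 0:
            w, Fw = z.copy(), F(z, A)   # snapshot: one full-operator pass
        i = rng.integers(n)
        g = F(z, A_parts[i]) - F(w, A_parts[i]) + Fw          # variance-reduced estimate
        z_half = z - eta * g                                  # extrapolation step
        j = rng.integers(n)
        g_half = F(z_half, A_parts[j]) - F(w, A_parts[j]) + Fw
        z = z - eta * g_half                                  # main step
    return z

z0 = np.ones(2 * d)
res_init = np.linalg.norm(F(z0, A))
res_final = np.linalg.norm(F(vr_extragradient(z0), A))
```

The snapshot term makes the stochastic estimate exact at the snapshot point, which is the mechanism that lets such methods keep the convergence conditions of their deterministic counterparts.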
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variations of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z) - An Inexact Halpern Iteration with Application to Distributionally Robust
Optimization [9.529117276663431]
We investigate the inexact variants of the scheme in both deterministic and stochastic convergence settings.
We show that by choosing the inexactness appropriately, the inexact schemes admit an $O(k^{-1})$ convergence rate in terms of the (expected) residual norm.
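For context, the Halpern scheme anchors each update to the starting point with a vanishing weight, which is what yields the residual rate mentioned above. A minimal exact-oracle sketch for a generic nonexpansive map (a plane rotation, chosen only for illustration; the paper's inexactness mechanism is not modeled):

```python
import numpy as np

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda z: R @ z        # nonexpansive map; unique fixed point at the origin

def halpern(z0, iters=2000):
    z, anchor = z0.copy(), z0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)                 # vanishing anchoring weight
        z = beta * anchor + (1.0 - beta) * T(z)
    return z

z = halpern(np.array([1.0, 0.0]))
res = np.linalg.norm(z - T(z))               # fixed-point residual, O(1/k)
```

Plain fixed-point iteration z ← T(z) would only circle around the fixed point here; the anchoring term is what drives convergence.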
arXiv Detail & Related papers (2024-02-08T20:12:47Z) - First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods with Markovian noise.
Our approach covers settings ranging from accelerated optimization to variational inequality problems.
We provide bounds that match the known oracle complexity in the strongly convex case.
arXiv Detail & Related papers (2023-05-25T11:11:31Z) - SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum
Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems it is possible to achieve linear convergence to a solution using this method.
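The SARAH estimator referenced above replaces the plain stochastic operator sample with a recursive correction, refreshed by a full pass at each epoch. A minimal sketch on a synthetic strongly monotone finite-sum operator (the diagonal components, dimensions, and step size are illustrative assumptions, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 3
Ds = rng.uniform(0.5, 1.5, size=(n, d))    # per-component diagonal curvatures
bs = rng.standard_normal((n, d))
star = bs.mean(axis=0) / Ds.mean(axis=0)   # root of the averaged operator

def Fi(z, i):                              # component operator F_i(z) = D_i z - b_i
    return Ds[i] * z - bs[i]

def sarah(z0, epochs=30, inner=50, eta=0.3):
    z = z0.copy()
    for _ in range(epochs):
        v = Ds.mean(axis=0) * z - bs.mean(axis=0)   # full operator at epoch start
        z_prev, z = z, z - eta * v
        for _ in range(inner):
            i = rng.integers(n)
            v = Fi(z, i) - Fi(z_prev, i) + v        # SARAH recursive estimate
            z_prev, z = z, z - eta * v
    return z

z = sarah(5.0 * np.ones(d))
err = np.linalg.norm(z - star)
```

Unlike SVRG, the estimate is updated recursively from the previous iterate rather than from a fixed snapshot, which is the distinguishing feature of SARAH-type schemes.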
arXiv Detail & Related papers (2022-10-12T08:04:48Z) - Smooth Monotone Stochastic Variational Inequalities and Saddle Point
Problems: A Survey [119.11852898082967]
This paper is a survey of methods for solving smooth (strongly) monotone variational inequalities.
To begin with, we give the foundation from which the methods eventually evolved.
Then we review methods for the general formulation, and look at the finite sum setup.
arXiv Detail & Related papers (2022-08-29T13:39:30Z) - A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction [2.048226951354646]
We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, composite optimization problems.
The proposed algorithm incorporates a variance reduction mechanism and the resulting updates are solved using an inexact semismooth Newton framework.
arXiv Detail & Related papers (2022-04-01T13:08:49Z) - A unified algorithm framework for mean-variance optimization in
discounted Markov decision processes [7.510742715895749]
This paper studies risk-averse mean-variance optimization in infinite-horizon discounted Markov decision processes (MDPs).
We introduce a pseudo mean to transform the intractable MDP into a standard one with a redefined reward function.
We propose a unified algorithm framework with a bilevel optimization structure for the discounted mean-variance optimization.
arXiv Detail & Related papers (2022-01-15T02:19:56Z) - Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution
Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
arXiv Detail & Related papers (2021-12-27T18:31:34Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Variance-Reduced Splitting Schemes for Monotone Stochastic Generalized
Equations [0.0]
We consider monotone inclusion problems where the operators may be expectation-valued.
A direct application of splitting schemes is complicated by the need to resolve problems with expectation-valued maps at each step.
We propose an avenue for addressing uncertainty in the mapping: a variance-reduced modified forward-backward splitting scheme.
arXiv Detail & Related papers (2020-08-26T02:33:27Z)
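The forward-backward template underlying such splitting schemes alternates an operator (forward) step with a resolvent/prox (backward) step. A minimal deterministic sketch for the inclusion 0 ∈ ∇f(z) + ∂g(z) with a least-squares f and an l1 penalty g (illustrative problem data; the expectation-valued and variance-reduced aspects of the paper are not modeled):

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, lam = 30, 10, 0.1
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

def soft_threshold(z, t):                  # resolvent (prox) of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step below 1/L for the forward part
z = np.zeros(d)
for _ in range(1000):
    grad = A.T @ (A @ z - b)               # forward (operator) step
    z = soft_threshold(z - eta * grad, eta * lam)   # backward (prox) step

# Fixed-point residual: zero exactly at a solution of the inclusion.
residual = np.linalg.norm(
    z - soft_threshold(z - eta * (A.T @ (A @ z - b)), eta * lam))
```

Replacing the exact gradient with a variance-reduced stochastic estimate, as in the paper above, avoids resolving expectation-valued maps at every step.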
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.