SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum
Cocoercive Variational Inequalities
- URL: http://arxiv.org/abs/2210.05994v1
- Date: Wed, 12 Oct 2022 08:04:48 GMT
- Title: SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum
Cocoercive Variational Inequalities
- Authors: Aleksandr Beznosikov, Alexander Gasnikov
- Abstract summary: We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems it is possible to achieve linear convergence to a solution using this method.
- Score: 137.6408511310322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational inequalities are a broad formalism that encompasses a vast number
of applications. Motivated by applications in machine learning and beyond,
stochastic methods are of great importance. In this paper we consider the
problem of stochastic finite-sum cocoercive variational inequalities. For this
class of problems, we investigate the convergence of the method based on the
SARAH variance reduction technique. We show that for strongly monotone problems
it is possible to achieve linear convergence to a solution using this method.
Experiments confirm the importance and practical applicability of our approach.
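A minimal sketch of the idea described above, assuming a SARAH-style recursion applied to the finite-sum operator F(z) = (1/n) * sum_i F_i(z); the step size, epoch length, and the quadratic toy problem are illustrative assumptions, not the paper's exact algorithm or tuning:

```python
import numpy as np

def sarah_vi(ops, z0, gamma=0.1, epochs=50, inner=None, rng=None):
    """SARAH-style variance-reduced iteration for a finite-sum operator
    F(z) = (1/n) * sum_i F_i(z).  A sketch, not the paper's exact method."""
    rng = rng or np.random.default_rng(0)
    n = len(ops)
    m = inner or n                        # epoch length (an assumption)
    z = z0.copy()
    for _ in range(epochs):
        z_prev = z.copy()
        v = sum(op(z) for op in ops) / n  # full operator pass starts each epoch
        z = z - gamma * v
        for _ in range(m):
            i = rng.integers(n)
            v = ops[i](z) - ops[i](z_prev) + v  # SARAH recursion on operators
            z_prev = z.copy()
            z = z - gamma * v
    return z

# Toy strongly monotone example: F_i(z) = A_i z - b_i with A_i symmetric
# positive definite, so the averaged operator is cocoercive.
rng = np.random.default_rng(1)
d, n = 5, 20
mats, rhs = [], []
for _ in range(n):
    M = rng.standard_normal((d, d))
    mats.append(np.eye(d) + 0.1 * M @ M.T)
    rhs.append(rng.standard_normal(d))
ops = [lambda z, A=A, b=b: A @ z - b for A, b in zip(mats, rhs)]
z_star = sarah_vi(ops, np.zeros(d))
```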
Related papers
- Maximum likelihood inference for high-dimensional problems with multiaffine variable relations [2.4578723416255754]
In this paper, we consider inference problems where the variables are related by multiaffine expressions.
We propose a novel Alternating and Iteratively-Reweighted Least Squares (AIRLS) algorithm, and prove its convergence for problems with Generalized Normal Distributions.
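For orientation, a minimal sketch of the iteratively-reweighted least squares idea only; the paper's AIRLS additionally alternates over blocks of multiaffine variables, which this generic robust-regression version omits:

```python
import numpy as np

def irls(X, y, iters=30, eps=1e-6):
    """Generic iteratively reweighted least squares (robust L1-style fit).
    Only the reweighting idea is shown; the paper's AIRLS additionally
    alternates over blocks of multiaffine variables (omitted here)."""
    w = np.ones(len(y))
    for _ in range(iters):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted LS step
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)   # reweight residuals
    return beta

# Tiny usage: a line fit with a couple of outliers.
X = np.column_stack([np.ones(20), np.arange(20.0)])
y = X @ np.array([1.0, 2.0]); y[3] += 50.0; y[15] -= 40.0
beta = irls(X, y)  # close to (1, 2) despite the outliers
```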
arXiv Detail & Related papers (2024-09-05T13:07:31Z)
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variations of the stochastic proximal point method (SPPM).
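A minimal SPPM sketch, assuming scalar least-squares components so each proximal step has a closed form (the paper's setting is broader and does not require smoothness):

```python
import numpy as np

def sppm(A, b, gamma=1.0, iters=2000, rng=None):
    """Stochastic proximal point sketch for f(z) = (1/2n) sum_i (a_i.z - b_i)^2.
    Each step applies the exact prox of one sampled component, which is what
    gives SPPM its tolerance to large step sizes."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    z = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)
        a, r = A[i], A[i] @ z - b[i]
        # Closed-form prox of 0.5*(a.z - b)^2 at z with step gamma;
        # a decaying gamma would be needed for exact convergence.
        z = z - gamma * a * r / (1.0 + gamma * a @ a)
    return z

rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
z_hat = sppm(A, b)
```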
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
- Primal Methods for Variational Inequality Problems with Functional Constraints [25.261426717550293]
We propose a primal method, termed Constrained Gradient Method (CGM), for addressing functional constrained variational inequality problems.
We establish a non-asymptotic convergence analysis of the algorithm for variational inequality problems with monotone operators under smooth constraints.
Our algorithms match the complexity of projection-based methods in terms of operator queries for both monotone and strongly monotone settings.
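As a reference point only, a sketch of the classical projection-based baseline that the complexity comparison above refers to (not the proposed CGM itself), under the assumption of a set with an easy projection:

```python
import numpy as np

def projected_method(F, project, z0, gamma=0.1, iters=500):
    """Projection-based baseline for VI(F, C): z <- P_C(z - gamma * F(z)).
    Shown only as the baseline whose operator-query complexity the
    entry's CGM is said to match; it is not the proposed method."""
    z = z0.copy()
    for _ in range(iters):
        z = project(z - gamma * F(z))   # forward step, then projection
    return z

# Example: strongly monotone F(z) = z - c over the unit Euclidean ball.
c = np.array([2.0, 1.0])
sol = projected_method(lambda z: z - c,
                       lambda z: z / max(1.0, np.linalg.norm(z)),
                       np.zeros(2))    # converges to c / ||c||
```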
arXiv Detail & Related papers (2024-03-19T16:03:03Z)
- Variational Annealing on Graphs for Combinatorial Optimization [7.378582040635655]
We show that an autoregressive approach which captures statistical dependencies among solution variables yields superior performance on many popular CO problems.
We introduce subgraph tokenization in which the configuration of a set of solution variables is represented by a single token.
arXiv Detail & Related papers (2023-11-23T18:56:51Z)
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods under Markovian noise.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- Distributed Stochastic Optimization under a General Variance Condition [13.911633636387059]
Distributed optimization has drawn great attention recently due to its effectiveness in solving large-scale machine learning problems.
We revisit the classical Federated Averaging (FedAvg) algorithm and establish its convergence under only a mild variance condition on the stochastic gradients for smooth non-convex objective functions.
Almost sure convergence to a stationary point is also established under the same condition.
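A minimal FedAvg sketch for context; full client participation and a common learning rate are simplifying assumptions:

```python
import numpy as np

def fedavg(client_grads, x0, rounds=100, local_steps=5, lr=0.1):
    """Minimal FedAvg sketch: each round, every client runs a few local
    gradient steps from the shared model, then the server averages.
    Full participation and a common learning rate are simplifications."""
    x = x0.copy()
    for _ in range(rounds):
        updates = []
        for grad in client_grads:        # one gradient oracle per client
            xi = x.copy()
            for _ in range(local_steps):
                xi = xi - lr * grad(xi)  # local (stochastic) gradient step
            updates.append(xi)
        x = np.mean(updates, axis=0)     # server-side averaging
    return x

# Two toy clients with quadratic losses centered at 0 and 2.
clients = [lambda x: x - 0.0, lambda x: x - 2.0]
x_avg = fedavg(clients, np.array([5.0]))  # settles near the average, 1.0
```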
arXiv Detail & Related papers (2023-01-30T05:48:09Z)
- Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses the apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z)
- Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems: A Survey [119.11852898082967]
This paper is a survey of methods for solving smooth (strongly) monotone variational inequalities.
To begin with, we give the foundation from which the methods eventually evolved.
Then we review methods for the general formulation, and look at the finite sum setup.
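As context for the "foundation" mentioned above, a sketch of the classical (Korpelevich) extragradient step for a monotone operator; the bilinear example is an illustrative assumption:

```python
import numpy as np

def extragradient(F, z0, gamma=0.1, iters=1000):
    """Korpelevich extragradient step for a monotone operator F:
    an extrapolation half-step followed by the actual update."""
    z = z0.copy()
    for _ in range(iters):
        z_half = z - gamma * F(z)       # look-ahead (extrapolation) step
        z = z - gamma * F(z_half)       # update with the look-ahead operator
    return z

# Bilinear saddle point min_x max_y x*y, i.e. F(x, y) = (y, -x);
# plain gradient steps diverge here, extragradient converges to (0, 0).
sol = extragradient(lambda z: np.array([z[1], -z[0]]), np.array([1.0, 1.0]))
```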
arXiv Detail & Related papers (2022-08-29T13:39:30Z)
- Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities [137.6408511310322]
In this paper we consider a combination of two popular approaches: compression and data similarity.
We show that this synergy can be more effective than each of the approaches separately in solving distributed smooth strongly monotone variational inequalities.
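A sketch of one standard contractive compressor (top-k); whether the paper uses this exact operator is an assumption made here for illustration:

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsifier: keep the k largest-magnitude coordinates.
    A standard contractive compressor; whether the paper uses this exact
    operator is an assumption made here for illustration."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Workers would transmit top_k(F_i(z), k) instead of the full vector,
# trading per-round accuracy for communication.
```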
arXiv Detail & Related papers (2022-06-19T16:38:56Z)
- Stochastic Variance Reduction for Variational Inequality Methods [19.061953585686986]
We propose variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions.
Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods, in both the Euclidean and Bregman setups.
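A sketch of the forward-reflected-backward template in the unconstrained case (identity backward step); the paper's variance-reduced versions add component sampling, which this sketch omits:

```python
import numpy as np

def forward_reflected_backward(F, z0, gamma=0.2, iters=2000):
    """Forward-reflected-backward sketch, unconstrained case (the
    backward/proximal part is the identity): one fresh operator
    evaluation per iteration, with the previous value reused."""
    z = z0.copy()
    Fz_prev = F(z)
    for _ in range(iters):
        Fz = F(z)
        z = z - gamma * (2.0 * Fz - Fz_prev)  # reflected forward step
        Fz_prev = Fz
    return z

# Strongly monotone linear example F(z) = [[1, 1], [-1, 1]] z.
sol = forward_reflected_backward(lambda z: np.array([z[0] + z[1], z[1] - z[0]]),
                                 np.array([3.0, -2.0]))  # tends to (0, 0)
```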
arXiv Detail & Related papers (2021-02-16T18:39:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.