Optimal Algorithms for Decentralized Stochastic Variational Inequalities
- URL: http://arxiv.org/abs/2202.02771v2
- Date: Sun, 2 Apr 2023 11:46:07 GMT
- Title: Optimal Algorithms for Decentralized Stochastic Variational Inequalities
- Authors: Dmitry Kovalev, Aleksandr Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtárik, Alexander Gasnikov
- Abstract summary: This work concentrates on the decentralized setting, which is increasingly important but not well understood.
We present lower bounds for both communication and local iterations and construct optimal algorithms that match these lower bounds.
Our algorithms are the best among the available literature not only in the decentralized stochastic case, but also in the decentralized deterministic and non-distributed stochastic cases.
- Score: 113.43047601775453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational inequalities are a formalism that includes games, minimization,
saddle point, and equilibrium problems as special cases. Methods for
variational inequalities are therefore universal approaches for many applied
tasks, including machine learning problems. This work concentrates on the
decentralized setting, which is increasingly important but not well understood.
In particular, we consider decentralized stochastic (sum-type) variational
inequalities over fixed and time-varying networks. We present lower complexity
bounds for both communication and local iterations and construct optimal
algorithms that match these lower bounds. Our algorithms are the best among the
available literature not only in the decentralized stochastic case, but also in
the decentralized deterministic and non-distributed stochastic cases.
Experimental results confirm the effectiveness of the presented algorithms.
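The abstract treats saddle-point problems as a special case of variational inequalities, which is easy to make concrete. The sketch below (a toy of ours, not the authors' optimal algorithm) runs same-sample stochastic extragradient on the VI operator of a bilinear saddle-point problem; the matrix, step size, and noise level are illustrative assumptions.

```python
# Minimal sketch, assuming a bilinear toy problem min_x max_y x^T A y.
# VI formalism: find z* with <F(z*), z - z*> >= 0, where F(z) = (grad_x f, -grad_y f).
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

def operator(z):
    """VI operator of f(x, y) = x^T A y."""
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

z = rng.standard_normal(2 * n)
gamma = 0.1      # step size, kept below 1/L for the Lipschitz constant L of F
sigma = 0.1      # noise level of the stochastic oracle
for _ in range(5000):
    xi = sigma * rng.standard_normal(2 * n)    # one stochastic sample per iteration,
    z_half = z - gamma * (operator(z) + xi)    # reused in both steps (same-sample SEG)
    z = z - gamma * (operator(z_half) + xi)

# The solution here is z* = 0; the iterates settle in a noise-dominated neighborhood.
print("distance to z* = 0:", np.linalg.norm(z))
```

Plain simultaneous gradient steps cycle or diverge on this bilinear problem; the extrapolation step is what makes the method converge, which is why extragradient-type updates are the basic building block of VI methods, including the decentralized ones studied here.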
Related papers
- Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
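As a hedged illustration of this setting (non-smooth convex local functions, communication over a time-varying graph), here is a toy decentralized subgradient method; the alternating mixing matrices, local objectives, and step schedule are our own assumptions, and the paper's optimal algorithm is more sophisticated than this baseline.

```python
# Sketch: decentralized subgradient descent over a time-varying network.
# Each node i holds f_i(x) = ||x - b_i||_1; nodes gossip, then take local steps.
import numpy as np

rng = np.random.default_rng(1)
num_nodes, dim = 4, 3
targets = rng.standard_normal((num_nodes, dim))

def subgrad(i, x):
    return np.sign(x - targets[i])            # a subgradient of f_i

def ring_mixing(shift):
    """A symmetric, doubly stochastic mixing matrix for a ring-like graph."""
    W = np.eye(num_nodes) / 2
    for i in range(num_nodes):
        W[i, (i + shift) % num_nodes] += 0.5
    return (W + W.T) / 2

mixings = [ring_mixing(1), ring_mixing(2)]    # alternate to mimic a time-varying graph
X = np.zeros((num_nodes, dim))                # one row of decision variables per node
for t in range(1, 2001):
    X = mixings[t % 2] @ X                    # communication (gossip) round
    step = 1.0 / np.sqrt(t)                   # diminishing step for non-smooth objectives
    for i in range(num_nodes):
        X[i] -= step * subgrad(i, X[i])       # local (sub)gradient computation

opt = np.median(targets, axis=0)              # a minimizer of sum_i ||x - b_i||_1
print("max node error:", np.max(np.linalg.norm(X - opt, axis=1)))
```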
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different topologies on the generalization of D-SGDA.
We also balance the optimization error with the generalization to obtain the optimal population risk in the convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z) - Communication-Efficient Gradient Descent-Ascent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates [28.700663352789395]
We provide a unified convergence analysis of communication-efficient local training methods for distributed variational inequality problems (VIPs).
Our approach is based on a general key assumption on the estimates that allows us to propose and analyze several novel local training algorithms.
We present the first local descent-ascent algorithms with provable improved communication complexity for solving distributed variational inequalities on heterogeneous data, as sketched below.
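To make the local-update idea concrete, the hedged sketch below runs local gradient descent-ascent with periodic averaging on a strongly-convex-strongly-concave toy problem of our own choosing; one communication (averaging) round pays for K local gradient computations. It illustrates the template, not the specific algorithms analyzed in the paper.

```python
# Sketch: local GDA with periodic averaging on heterogeneous data.
# Worker i holds f_i(x, y) = 0.5||x - b_i||^2 + x^T A_i y - 0.5||y||^2.
import numpy as np

rng = np.random.default_rng(2)
M, K, dim = 4, 10, 3
A_locals = [rng.standard_normal((dim, dim)) for _ in range(M)]
b_locals = rng.standard_normal((M, dim))          # source of heterogeneity

def grads(A, b, x, y):
    return (x - b) + A @ y, A.T @ x - y           # (grad_x f_i, grad_y f_i)

x, y = np.zeros(dim), np.zeros(dim)
lr = 0.05
for _ in range(300):                              # communication rounds
    xs, ys = [], []
    for A, b in zip(A_locals, b_locals):          # workers start from the shared point
        xi, yi = x.copy(), y.copy()
        for _ in range(K):                        # K local steps, no communication
            gx, gy = grads(A, b, xi, yi)
            xi -= lr * gx                         # descent in x
            yi += lr * gy                         # ascent in y
        xs.append(xi); ys.append(yi)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)   # one averaging round

# Saddle of the average objective: x* = (I + A_bar A_bar^T)^{-1} b_bar, y* = A_bar^T x*.
A_bar, b_bar = np.mean(A_locals, axis=0), b_locals.mean(axis=0)
x_star = np.linalg.solve(np.eye(dim) + A_bar @ A_bar.T, b_bar)
# A residual gap remains: local steps on heterogeneous data cause client drift.
print("distance to the global saddle in x:", np.linalg.norm(x - x_star))
```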
arXiv Detail & Related papers (2023-06-08T10:58:46Z) - Can Decentralized Stochastic Minimax Optimization Algorithms Converge Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its applications in a wide range of machine learning problems.
This paper develops two novel decentralized minimax optimization algorithms for the finite-sum nonconvex-nonconcave problem.
arXiv Detail & Related papers (2023-04-24T02:19:39Z) - Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities [91.12425544503395]
Variational inequalities are used in various applications ranging from equilibrium search to adversarial learning.
Most distributed approaches have a bottleneck: the cost of communications.
The three main techniques to reduce the total number of communication rounds and the cost of one such round are the similarity of local functions, compression of transmitted information, and local updates.
The methods presented in this paper have the best theoretical guarantees of communication complexity and are significantly ahead of other methods for distributed variational inequalities.
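Of the three pillars, compression is the easiest to isolate in code. Below is a hedged sketch of a Top-K sparsifier applied to a vector a node would transmit; the error-feedback comment describes standard practice for biased compressors in general, not this paper's specific methods.

```python
# Sketch: Top-K compression of a transmitted vector.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest (biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(3)
g = rng.standard_normal(1000)
msg = top_k(g, 50)      # send 50 (index, value) pairs instead of 1000 floats
residual = g - msg      # with error feedback, this residual is added to the next
                        # message so the compression error does not accumulate
print("fraction of l2 mass kept:", np.linalg.norm(msg) / np.linalg.norm(g))
```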
arXiv Detail & Related papers (2023-02-15T12:11:27Z) - Distributed Stochastic Optimization under a General Variance Condition [13.911633636387059]
Distributed optimization has drawn great attention recently due to its effectiveness in solving large-scale machine learning problems.
We revisit the classical Federated Averaging (FedAvg) algorithm and establish convergence results under only a mild variance condition for smooth nonconvex objective functions.
Almost sure convergence to a stationary point is also established under this condition.
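For reference, here is a minimal FedAvg loop on a toy quadratic problem; the local objectives, noise model, and hyperparameters are our own assumptions. The paper analyzes the convergence of this template under a weak variance condition rather than providing code.

```python
# Sketch: Federated Averaging (FedAvg) with K local SGD steps per round.
import numpy as np

rng = np.random.default_rng(4)
M, K, dim = 5, 20, 3
b_locals = rng.standard_normal((M, dim))   # client i holds f_i(x) = 0.5||x - b_i||^2

x = np.zeros(dim)
lr = 0.1
for _ in range(50):                        # communication rounds
    updates = []
    for b in b_locals:
        xi = x.copy()
        for _ in range(K):                 # K local SGD steps
            noise = 0.1 * rng.standard_normal(dim)   # stochastic gradient noise
            xi -= lr * ((xi - b) + noise)
        updates.append(xi)
    x = np.mean(updates, axis=0)           # server averages the client models

print("distance to the optimum (mean of b_i):", np.linalg.norm(x - b_locals.mean(axis=0)))
```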
arXiv Detail & Related papers (2023-01-30T05:48:09Z) - A Penalty-Based Method for Communication-Efficient Decentralized Bilevel Programming [14.35928967799696]
This paper introduces a penalty function-based decentralized algorithm for solving bilevel programming problems over a decentralized network.
A key feature of the proposed algorithm is the estimation of the hyper-gradient of the penalty function.
Our theoretical framework ensures non-asymptotic convergence to the optimal solution of the original problem under various convexity conditions.
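A hedged toy sketch of the penalty idea, on a single machine with scalars (the paper works over a decentralized network and must estimate the hyper-gradient): replace the bilevel problem with a single-level penalized objective and run plain gradient descent on it.

```python
# Bilevel toy: min_x f(x, y*(x))  s.t.  y*(x) = argmin_y g(x, y),
# with f(x, y) = 0.5 (x - 1)^2 + 0.5 y^2 and g(x, y) = 0.5 (y - x)^2, so y*(x) = x.
# Penalized single-level surrogate: f(x, y) + (lam / 2) * (grad_y g(x, y))^2.
lam = 100.0     # penalty weight; larger lam tracks the bilevel solution more closely
lr = 5e-3       # for this quadratic, stability needs lr < 2 / (1 + 2 * lam)
x, y = 0.0, 0.0
for _ in range(20_000):
    r = y - x                     # grad_y g(x, y)
    dx = (x - 1.0) - lam * r      # gradient of the surrogate in x
    dy = y + lam * r              # gradient of the surrogate in y
    x -= lr * dx
    y -= lr * dy

# True bilevel solution: min_x 0.5 (x - 1)^2 + 0.5 x^2 gives x* = 0.5; the penalized
# optimum is (1 + lam) / (1 + 2 * lam), which approaches 0.5 as lam grows.
print("x =", x)
```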
arXiv Detail & Related papers (2022-11-08T08:39:30Z) - Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains where the problem data is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
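Composing the extragradient template from the first sketch above with gossip averaging gives a minimal decentralized extra-gradient loop. The fully connected mixing matrix and the strongly monotone local operators below are simplifying assumptions of ours, not the paper's general non-IID, arbitrarily networked setting.

```python
# Sketch: decentralized extragradient, one gossip round per iteration.
import numpy as np

rng = np.random.default_rng(5)
M, n = 4, 3
As = [rng.standard_normal((n, n)) for _ in range(M)]   # heterogeneous local data

def F(i, z):
    """Local operator of f_i(x,y) = 0.5||x||^2 + x^T A_i y - 0.5||y||^2 (strongly monotone)."""
    x, y = z[:n], z[n:]
    return np.concatenate([x + As[i] @ y, -As[i].T @ x + y])

W = np.full((M, M), 1.0 / M)         # full-averaging gossip matrix, for brevity
Z = rng.standard_normal((M, 2 * n))  # one iterate per device (rows)
gamma = 0.05
for _ in range(1000):
    Z = W @ Z                                                      # communication round
    Z_half = np.stack([Z[i] - gamma * F(i, Z[i]) for i in range(M)])
    Z = np.stack([Z[i] - gamma * F(i, Z_half[i]) for i in range(M)])

# The averaged VI has solution z* = 0; devices agree and approach a neighborhood of it.
print("consensus gap:", np.abs(Z - Z.mean(axis=0)).max(),
      "| distance to z* = 0:", np.linalg.norm(Z.mean(axis=0)))
```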
arXiv Detail & Related papers (2021-06-15T17:45:51Z)