Distributed Extra-gradient with Optimal Complexity and Communication
Guarantees
- URL: http://arxiv.org/abs/2308.09187v1
- Date: Thu, 17 Aug 2023 21:15:04 GMT
- Title: Distributed Extra-gradient with Optimal Complexity and Communication
Guarantees
- Authors: Ali Ramezani-Kebrya and Kimon Antonakopoulos and Igor Krawczuk and
Justin Deschenaux and Volkan Cevher
- Abstract summary: We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, which is a de facto algorithm for monotone VI problems, has not been designed to be communication-efficient.
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
- Score: 60.571030754252824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider monotone variational inequality (VI) problems in multi-GPU
settings where multiple processors/workers/clients have access to local
stochastic dual vectors. This setting includes a broad range of important
problems from distributed convex minimization to min-max and games.
Extra-gradient, which is a de facto algorithm for monotone VI problems, has not
been designed to be communication-efficient. To this end, we propose a
quantized generalized extra-gradient (Q-GenX), which is an unbiased and
adaptive compression method tailored to solve VIs. We provide an adaptive
step-size rule, which adapts to the respective noise profiles at hand and
achieves a fast rate of ${\mathcal O}(1/T)$ under relative noise and an
order-optimal ${\mathcal O}(1/\sqrt{T})$ under absolute noise, and we show that
distributed training accelerates convergence. Finally, we validate our
theoretical results by providing real-world experiments and training generative
adversarial networks on multiple GPUs.
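To make the setting concrete, the sketch below runs distributed extra-gradient with an unbiased stochastic quantizer on a simple bilinear saddle-point problem, which is a monotone VI. It is a minimal illustration under stated assumptions only: the QSGD-style quantizer, the worker averaging step, and the $1/\sqrt{t}$ step size are placeholders and do not reproduce the paper's Q-GenX compressor or its adaptive step-size rule.

```python
# Minimal sketch (not the authors' Q-GenX implementation) of distributed
# extra-gradient with an unbiased random quantizer, on the bilinear
# saddle-point problem min_x max_y x^T A y (a monotone VI with solution z* = 0).
import numpy as np

rng = np.random.default_rng(0)
d, n_workers, T = 10, 4, 2000
A = rng.standard_normal((d, d))
A /= np.linalg.norm(A, 2)  # normalize so the operator has unit Lipschitz constant

def operator(z, noise=0.1):
    """Stochastic monotone operator F(x, y) = (A y, -A^T x) plus absolute noise."""
    x, y = z[:d], z[d:]
    g = np.concatenate([A @ y, -A.T @ x])
    return g + noise * rng.standard_normal(2 * d)

def quantize(v, levels=4):
    """Unbiased stochastic quantization (norm plus randomly rounded coordinates)."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * levels
    rounded = np.floor(scaled) + (rng.random(v.shape) < scaled - np.floor(scaled))
    return np.sign(v) * rounded * norm / levels   # E[quantize(v)] = v

z = np.zeros(2 * d)
for t in range(T):
    eta = 1.0 / np.sqrt(t + 1)   # placeholder step size, not the adaptive rule
    # Extrapolation step: each worker sends a quantized operator estimate, the server averages.
    g_half = np.mean([quantize(operator(z)) for _ in range(n_workers)], axis=0)
    z_half = z - eta * g_half
    # Update step, using quantized operator estimates at the extrapolated point.
    g_full = np.mean([quantize(operator(z_half)) for _ in range(n_workers)], axis=0)
    z = z - eta * g_full

print("||z|| after training:", np.linalg.norm(z))  # should drift toward the solution z* = 0
```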
Related papers
- Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Variance-reduced accelerated methods for decentralized stochastic
double-regularized nonconvex strongly-concave minimax problems [7.5573375809946395]
We consider a network of $m$ computing agents that collaborate via peer-to-peer communications.
Our algorithmic framework introduces a Lagrangian multiplier to eliminate the consensus constraint on the dual variable.
To the best of our knowledge, this is the first work that provides convergence guarantees for NCSC minimax problems with general nonsmooth regularizers applied to both the primal and dual variables.
arXiv Detail & Related papers (2023-07-14T01:32:16Z) - Global Nash Equilibrium in Non-convex Multi-player Game: Theory and
Algorithms [66.8634598612777]
We show that a Nash equilibrium (NE), from which no player can benefit by deviating unilaterally, is acceptable to all players in a multi-player game.
arXiv Detail & Related papers (2023-01-19T11:36:50Z) - Private optimization in the interpolation regime: faster rates and
hardness results [9.405458160620533]
In non-private convex optimization, gradient methods converge much faster on interpolation problems -- problems where there exists a solution that simultaneously minimizes all of the sample losses -- than on non-interpolating ones.
We show (near) exponential improvements in the private sample complexity.
arXiv Detail & Related papers (2022-10-31T05:18:27Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Decentralized Gossip-Based Stochastic Bilevel Optimization over
Communication Networks [42.76623191830371]
We propose a gossip-based distributed bilevel optimization algorithm.
Agents can solve both the networked inner problem and the outer problem in a single timescale.
Our algorithm achieves the state-of-the-art efficiency and test accuracy.
arXiv Detail & Related papers (2022-06-22T06:38:54Z) - Distributed stochastic proximal algorithm with random reshuffling for
non-smooth finite-sum optimization [28.862321453597918]
Non-smooth finite-sum minimization is a fundamental problem in machine learning.
This paper develops a distributed proximal-gradient algorithm with random reshuffling to solve the problem.
arXiv Detail & Related papers (2021-11-06T07:29:55Z) - Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Minimax
Problems [80.46370778277186]
Large scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and training of generative adversarial networks.
We develop a communication-efficient distributed extra-gradient algorithm, LocalAdaSient, with an adaptive learning rate suitable for solving convex-concave minimax problems in the Parameter-Server model.
We demonstrate its efficacy through several experiments in both the homogeneous and heterogeneous settings.
arXiv Detail & Related papers (2021-06-18T09:42:05Z) - Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z) - A conditional one-output likelihood formulation for multitask Gaussian
processes [0.0]
Multitask Gaussian processes (MTGP) are the Gaussian process framework's solution for multioutput regression problems.
Here we introduce a novel approach that simplifies the multitask learning.
We show that it is computationally competitive with state-of-the-art options.
arXiv Detail & Related papers (2020-06-05T14:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.