Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust
Algorithms
- URL: http://arxiv.org/abs/2010.13112v8
- Date: Fri, 8 Jul 2022 08:37:04 GMT
- Title: Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust
Algorithms
- Authors: Aleksandr Beznosikov, Valentin Samokhin, Alexander Gasnikov
- Abstract summary: This paper focuses on the distributed optimization of saddle point problems.
In the experimental part of the paper, we show the effectiveness of our distributed method in practice.
- Score: 125.99533416395765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on the distributed optimization of stochastic saddle point
problems. The first part of the paper is devoted to lower bounds for the
centralized and decentralized distributed methods for smooth (strongly)
convex-(strongly) concave saddle-point problems as well as the near-optimal
algorithms by which these bounds are achieved. Next, we present a new federated
algorithm for centralized distributed saddle point problems - Extra Step Local
SGD. Theoretical analysis of the new method is carried out for strongly
convex-strongly concave and non-convex-non-concave problems. In the
experimental part of the paper, we show the effectiveness of our method in
practice. In particular, we train GANs in a distributed manner.
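The abstract names Extra Step Local SGD, which combines extragradient-type ("extra step") updates with local computation and periodic averaging across workers. As a rough illustration of that idea only, the Python sketch below runs local extra-step updates on a toy bilinear saddle-point objective min_x max_y x^T A_m y; the objective, the per-worker matrices A_m, and all constants are illustrative assumptions, not the paper's actual algorithm or analysis setting.

import numpy as np

# Minimal sketch (not the paper's exact Extra Step Local SGD): each worker runs
# extragradient ("extra step") updates on its own bilinear objective
# min_x max_y x^T A_m y, and a server periodically averages the iterates.
rng = np.random.default_rng(0)
M, d, local_steps, lr = 4, 5, 10, 0.3
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))                  # shared coupling matrix
A_m = [A + 0.05 * rng.standard_normal((d, d)) for _ in range(M)]   # per-worker data (assumed)

def extra_step(x, y, Am):
    gx, gy = Am @ y, Am.T @ x              # gradients of x^T Am y in x and y
    xh, yh = x - lr * gx, y + lr * gy      # lookahead ("extra") step
    gxh, gyh = Am @ yh, Am.T @ xh          # gradients at the lookahead point
    return x - lr * gxh, y + lr * gyh

x, y = np.ones(d), np.ones(d)
for _ in range(50):                        # communication rounds
    xs, ys = [], []
    for m in range(M):                     # each worker starts from the averaged iterate
        xm, ym = x.copy(), y.copy()
        for _ in range(local_steps):       # local extra-step updates
            xm, ym = extra_step(xm, ym, A_m[m])
        xs.append(xm)
        ys.append(ym)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)   # server averaging
print(np.linalg.norm(x), np.linalg.norm(y))           # both approach the saddle at 0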
Related papers
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods under Markovian noise.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z) - Can Decentralized Stochastic Minimax Optimization Algorithms Converge
Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its applications in a wide range of machine learning problems.
This paper develops two novel decentralized stochastic minimax optimization algorithms for the finite-sum nonconvex-nonconcave problem.
arXiv Detail & Related papers (2023-04-24T02:19:39Z) - Decentralized Inexact Proximal Gradient Method With Network-Independent
Stepsizes for Convex Composite Optimization [39.352542703876104]
This paper considers decentralized convex composite optimization over undirected and connected networks.
A novel CTA (Combine-Then-Adapt)-based decentralized algorithm is proposed under uncoordinated network-independent constant stepsizes.
arXiv Detail & Related papers (2023-02-07T03:50:38Z) - First-Order Algorithms for Min-Max Optimization in Geodesic Metric
Spaces [93.35384756718868]
Min-max algorithms have mainly been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
arXiv Detail & Related papers (2022-06-04T18:53:44Z) - A Stochastic Bregman Primal-Dual Splitting Algorithm for Composite
Optimization [2.9112649816695204]
We study a first-order primal-dual method for solving convex-concave saddle point problems over real Banach spaces.
Our framework is general and does not need strong convexity of the entropies inducing the Bregman divergences in the algorithm.
Numerical applications are considered including entropically regularized Wasserstein barycenter problems and regularized inverse problems on the simplex.
arXiv Detail & Related papers (2021-12-22T14:47:44Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z) - On Distributed Non-convex Optimization: Projected Subgradient Method For
Weakly Convex Problems in Networks [13.385373310554327]
Using the Moreau envelope, the subgradient method is shown to converge linearly on sharp weakly convex problems arising in machine learning.
A distributed implementation of the subgradient method with a theoretical guarantee is proposed.
arXiv Detail & Related papers (2020-04-28T01:01:49Z) - Second-Order Guarantees in Centralized, Federated and Decentralized
Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for this behavior in the study of nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points; a minimal perturbed-descent sketch follows after this list.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
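As a rough illustration of the role of perturbations mentioned in the last entry, the Python sketch below compares plain and perturbed gradient descent started at a strict saddle point of f(u, v) = u^2 - v^2; the test function, noise scale, and step size are illustrative assumptions, not the setting analyzed in that paper.

import numpy as np

# f(u, v) = u^2 - v^2 has a strict saddle point at (0, 0): plain gradient
# descent started exactly there never moves, while a small random perturbation
# lets descent escape along the negative-curvature direction v.
rng = np.random.default_rng(0)

def grad(z):
    u, v = z
    return np.array([2.0 * u, -2.0 * v])

def descend(z0, lr=0.1, steps=100, noise=0.0):
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - lr * grad(z) + noise * rng.standard_normal(2)
    return z

print(descend([0.0, 0.0]))              # stays at the saddle: [0. 0.]
print(descend([0.0, 0.0], noise=1e-3))  # |v| grows: the iterate escapes the saddle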
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.