Revisiting EXTRA for Smooth Distributed Optimization
- URL: http://arxiv.org/abs/2002.10110v2
- Date: Thu, 18 Jun 2020 04:38:13 GMT
- Title: Revisiting EXTRA for Smooth Distributed Optimization
- Authors: Huan Li and Zhouchen Lin
- Abstract summary: We give a sharp complexity analysis for EXTRA with the improved $O\left(\left(\frac{L}{\mu}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$ communication and computation complexities.
Our communication complexities of the accelerated EXTRA are only worse by the factors of $\left(\log\frac{L}{\mu(1-\sigma_2(W))}\right)$ and $\left(\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$ than the lower complexity bounds.
- Score: 70.65867695317633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: EXTRA is a popular method for decentralized distributed optimization and has
broad applications. This paper revisits EXTRA. First, we give a sharp
complexity analysis for EXTRA with the improved
$O\left(\left(\frac{L}{\mu}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$
communication and computation complexities for $\mu$-strongly convex and
$L$-smooth problems, where $\sigma_2(W)$ is the second largest singular value
of the weight matrix $W$. When strong convexity is absent, we prove the
$O\left(\left(\frac{L}{\epsilon}+\frac{1}{1-\sigma_2(W)}\right)\log\frac{1}{1-\sigma_2(W)}\right)$
complexities. Then, we use the Catalyst framework to accelerate EXTRA and
obtain the $O\left(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log\frac{L}{\mu(1-\sigma_2(W))}\log\frac{1}{\epsilon}\right)$ communication and
computation complexities for strongly convex and smooth problems and the
$O\left(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$
complexities for non-strongly convex ones. Our communication complexities of
the accelerated EXTRA are only worse by the factors of
$\left(\log\frac{L}{\mu(1-\sigma_2(W))}\right)$ and
$\left(\log\frac{1}{\epsilon(1-\sigma_2(W))}\right)$ from the lower complexity
bounds for strongly convex and non-strongly convex problems, respectively.
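For intuition, the following is a minimal numerical sketch of the EXTRA iteration (Shi et al., 2015) that the paper revisits; the quadratic toy problem, the step size $\alpha$, the mixing matrix, and the choice $\tilde{W}=(I+W)/2$ are illustrative assumptions, not details taken from this abstract.

```python
import numpy as np

def extra(grad, X0, W, alpha, num_iters):
    """Sketch of the EXTRA iteration (Shi et al., 2015).

    grad(X) returns the stacked local gradients: row i is grad f_i(x_i).
    X0 is the (n_agents, dim) matrix of initial local iterates.
    W is a symmetric doubly stochastic mixing matrix; W_tilde = (I + W) / 2.
    """
    n = W.shape[0]
    W_tilde = (np.eye(n) + W) / 2.0
    X_prev = X0
    # First step is a plain decentralized gradient step.
    X = W @ X0 - alpha * grad(X0)
    for _ in range(num_iters):
        # x^{k+2} = (I + W) x^{k+1} - W_tilde x^k - alpha * (grad(x^{k+1}) - grad(x^k))
        X_next = (np.eye(n) + W) @ X - W_tilde @ X_prev \
                 - alpha * (grad(X) - grad(X_prev))
        X_prev, X = X, X_next
    return X

# Toy usage: 4 agents minimize the average of local quadratics
# f_i(x) = 0.5 * ||x - b_i||^2, whose minimizer is the mean of the b_i.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.normal(size=(4, 3))                    # hypothetical local data
    grad = lambda X: X - b                         # stacked local gradients
    W = np.full((4, 4), 0.2) + 0.2 * np.eye(4)     # sigma_2(W) = 0.2 here
    X = extra(grad, X0=np.zeros((4, 3)), W=W, alpha=0.5, num_iters=200)
    print(np.allclose(X, b.mean(axis=0), atol=1e-3))  # agents reach the consensus minimizer
```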
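The abstract's second step accelerates EXTRA through the Catalyst framework. Below is a hedged sketch of a generic Catalyst outer loop (Lin, Mairal, Harchaoui, 2015) wrapped around an abstract inner solver; the helper `inner_solve`, the closed-form toy problem, and the parameter choices are illustrative assumptions, not the specific accelerated EXTRA construction analyzed in the paper.

```python
import numpy as np

def catalyst(inner_solve, x0, mu, kappa, n_outer):
    """Generic Catalyst outer loop for a mu-strongly convex objective F.

    inner_solve(y) approximately minimizes F(x) + (kappa/2) * ||x - y||^2,
    e.g. by running an inner method such as EXTRA on the regularized problem.
    """
    q = mu / (mu + kappa)
    beta = (1.0 - np.sqrt(q)) / (1.0 + np.sqrt(q))  # extrapolation weight
    x_prev, y = x0, x0
    for _ in range(n_outer):
        x = inner_solve(y)              # (approximately) solve the proximal subproblem
        y = x + beta * (x - x_prev)     # Nesterov-style extrapolation
        x_prev = x
    return x_prev

# Toy usage on F(x) = 0.5 * ||x - b||^2 (mu = 1), with the subproblem solved
# in closed form: argmin_x F(x) + (kappa/2)||x - y||^2 = (b + kappa * y) / (1 + kappa).
if __name__ == "__main__":
    b = np.array([1.0, -2.0, 3.0])      # hypothetical target
    kappa = 4.0
    solve = lambda y: (b + kappa * y) / (1.0 + kappa)
    x = catalyst(solve, x0=np.zeros(3), mu=1.0, kappa=kappa, n_outer=50)
    print(np.allclose(x, b, atol=1e-6))
```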
Related papers
- Tight Lower Bounds under Asymmetric High-Order Hölder Smoothness and Uniform Convexity [6.972653925522813]
We provide tight lower bounds for the oracle complexity of minimizing high-order Hölder smooth and uniformly convex functions.
Our analysis generalizes previous lower bounds for functions under first- and second-order smoothness as well as those for uniformly convex functions.
arXiv Detail & Related papers (2024-09-16T23:17:33Z) - Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity [22.615156512223763]
We propose the stochastic variance-reduced optimistic gradient sliding (SVOGS) method, which takes advantage of the finite-sum structure in the objective.
We prove communication rounds of $\mathcal{O}(\delta D^2/\varepsilon)$, communication complexity of $\mathcal{O}(n+\sqrt{n}\delta D^2/\varepsilon)$, and local calls of $\tilde{\mathcal{O}}((n+\sqrt{n}\delta+L)D^2/\varepsilon)$.
arXiv Detail & Related papers (2024-05-25T08:34:49Z) - Optimal Query Complexities for Dynamic Trace Estimation [59.032228008383484]
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
arXiv Detail & Related papers (2022-09-30T04:15:44Z) - Optimal Gradient Sliding and its Application to Distributed Optimization
Under Similarity [121.83085611327654]
We study structured convex optimization problems with additive objective $r := p + q$, where $r$ is $\mu$-strongly convex, under a similarity condition.
We propose a method to solve such problems, balancing master-to-agents communication and the agents' local calls.
The proposed method achieves a much sharper rate than the $\mathcal{O}(\sqrt{L_q/\mu})$ method.
arXiv Detail & Related papers (2022-05-30T14:28:02Z) - Sharper Rates for Separable Minimax and Finite Sum Optimization via
Primal-Dual Extragradient Methods [39.87865197769047]
We study separable minimax optimization problems $\min_x \max_y f(x) - g(y) + h(x, y)$, where $f$ and $g$ have smoothness and strong convexity parameters $(L^x, \mu^x)$, $(L^y, \mu^y)$, and $h$ is convex-concave with blockwise smoothness parameters $(\Lambda^{xx}, \Lambda^{xy}, \Lambda^{yy})$.
arXiv Detail & Related papers (2022-02-09T18:57:47Z) - Lifted Primal-Dual Method for Bilinearly Coupled Smooth Minimax
Optimization [47.27237492375659]
We study the bilinearly coupled minimax problem: $\min_x \max_y f(x) + y^\top A x - h(y)$, where $f$ and $h$ are both strongly convex smooth functions.
No known first-order algorithms have hitherto achieved the lower complexity bound of $\Omega\left(\left(\sqrt{\frac{L_x}{\mu_x}} + \frac{\|A\|}{\sqrt{\mu_x \mu_y}}\right)\log\frac{1}{\varepsilon}\right)$.
arXiv Detail & Related papers (2022-01-19T05:56:19Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Improved Algorithms for Convex-Concave Minimax Optimization [10.28639483137346]
This paper studies minimax optimization problems $\min_x \max_y f(x,y)$, where $f(x,y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$ and $(L_x, L_{xy}, L_y)$-smooth.
arXiv Detail & Related papers (2020-06-11T12:21:13Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)