Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free
Optimization
- URL: http://arxiv.org/abs/2109.15292v1
- Date: Thu, 30 Sep 2021 17:36:32 GMT
- Title: Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free
Optimization
- Authors: Kaiwen Zhou, Anthony Man-Cho So, James Cheng
- Abstract summary: We prove that our new accelerated method requires the same linear speed-up condition as the existing non-accelerated methods.
Our core algorithmic discovery is a new accelerated SVRG variant with sparse updates.
- Score: 47.03235286622727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that stochastic acceleration can be achieved under the perturbed
iterate framework (Mania et al., 2017) in asynchronous lock-free optimization,
which leads to the optimal incremental gradient complexity for finite-sum
objectives. We prove that our new accelerated method requires the same linear
speed-up condition as the existing non-accelerated methods. Our core
algorithmic discovery is a new accelerated SVRG variant with sparse updates.
Empirical results are presented to verify our theoretical findings.
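For intuition, here is a minimal sketch of an SVRG-style step with sparse (support-restricted) updates, the kind of building block this line of asynchronous lock-free work builds on. It is not the paper's accelerated variant; the function names and the coordinate reweighting by `probs` are illustrative assumptions.
```python
# Minimal sketch of an SVRG-style step with sparse (support-restricted) updates.
# NOT the paper's accelerated variant: the names below and the reweighting by
# `probs` are illustrative assumptions.
import numpy as np

def sparse_svrg_step(x, snapshot, full_grad, grad_fn, support_fn, probs, i, lr):
    """One inner step of sparse SVRG on component f_i.

    x          : current iterate (modified in place, only on the support of f_i)
    snapshot   : reference point from the last full-gradient pass
    full_grad  : average gradient of all components at `snapshot`
    grad_fn    : grad_fn(j, y) -> full-dimensional gradient of f_j at y
                 (zero outside the support of f_j)
    support_fn : support_fn(j) -> index array of coordinates f_j depends on
    probs      : probs[k] = fraction of components whose support contains k
                 (reweighting keeps the sparse correction unbiased)
    """
    S = support_fn(i)                              # coordinates touched by f_i
    diff = grad_fn(i, x) - grad_fn(i, snapshot)    # variance-reduced difference
    # Apply the dense full gradient only on S, scaled by 1/probs so that its
    # expectation over a random i equals full_grad (standard sparse-SVRG trick).
    x[S] -= lr * (diff[S] + full_grad[S] / probs[S])
    return x
```
In a lock-free implementation, several workers would run such steps concurrently on a shared iterate without locks; the perturbed iterate framework is what makes the resulting inconsistent reads analyzable.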
Related papers
- Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation [51.248784084461334]
We prove new convergence rates for a generalized version of stochastic Nesterov acceleration under interpolation conditions.
Our analysis reduces the dependence on the strong growth constant from $\rho$ to $\sqrt{\rho}$ as compared to prior work.
arXiv Detail & Related papers (2024-04-03T00:41:19Z) - Reweighted Interacting Langevin Diffusions: an Accelerated Sampling
Method for Optimization [28.25662317591378]
We propose a new technique to accelerate sampling methods for solving difficult optimization problems.
Our method investigates the connection between posterior distribution sampling and optimization with Langevin dynamics.
arXiv Detail & Related papers (2023-01-30T03:48:20Z) - Asynchronous Iterations in Optimization: New Sequence Results and
Sharper Algorithmic Guarantees [10.984101749941471]
We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms.
Results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates.
arXiv Detail & Related papers (2021-09-09T19:08:56Z) - Acceleration Methods [57.202881673406324]
We first use quadratic optimization problems to introduce two key families of acceleration methods.
We discuss momentum methods in detail, starting with the seminal work of Nesterov.
We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates.
arXiv Detail & Related papers (2021-01-23T17:58:25Z) - On The Verification of Neural ODEs with Stochastic Guarantees [14.490826225393096]
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems.
We introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube.
arXiv Detail & Related papers (2020-12-16T11:04:34Z) - An Accelerated DFO Algorithm for Finite-sum Convex Functions [0.0]
Derivative-free optimization (DFO) has recently gained a lot of momentum in machine learning.
In this paper, we exploit the finite-sum structure of the objective functions in order to design accessible DFO algorithms.
arXiv Detail & Related papers (2020-07-07T09:57:31Z) - Hessian-Free High-Resolution Nesterov Acceleration for Sampling [55.498092486970364]
Nesterov's Accelerated Gradient (NAG) for optimization has better performance than its continuous time limit (noiseless kinetic Langevin) when a finite step-size is employed.
This work explores the sampling counterpart of this phenomenon and proposes a diffusion process, whose discretizations can yield accelerated gradient-based MCMC methods.
arXiv Detail & Related papers (2020-06-16T15:07:37Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z) - Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient
Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without the light-tails assumption on the noise. A minimal sketch of the clipping step appears below.
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
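The clipped-SSTM entry above relies on norm clipping of stochastic gradients; the following is a minimal sketch of that step inside a plain SGD loop. The names `clip_gradient` and `clipped_sgd` are illustrative assumptions, and the accelerated similar-triangles schedule of clipped-SSTM is not reproduced here.
```python
import numpy as np

def clip_gradient(g, lam):
    """Norm clipping: return g scaled so that its norm is at most lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g

def clipped_sgd(x0, stochastic_grad, lr, lam, n_steps):
    """Plain SGD with clipped stochastic gradients (illustration only;
    clipped-SSTM adds an accelerated update on top of the same clipping)."""
    x = x0.copy()
    for _ in range(n_steps):
        x -= lr * clip_gradient(stochastic_grad(x), lam)
    return x
```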
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.