Linear Convergence of Distributed Mirror Descent with Integral Feedback
for Strongly Convex Problems
- URL: http://arxiv.org/abs/2011.12233v1
- Date: Tue, 24 Nov 2020 17:27:27 GMT
- Title: Linear Convergence of Distributed Mirror Descent with Integral Feedback
for Strongly Convex Problems
- Authors: Youbang Sun, Shahin Shahrampour
- Abstract summary: We study a continuous-time decentralized mirror descent that uses purely local gradient information to converge to the global optimal solution.
We prove that the algorithm indeed achieves (local) exponential convergence.
- Score: 11.498089180181365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed optimization often requires finding the minimum of a global
objective function written as a sum of local functions, where a group of agents
works collectively to minimize the global function. We study a continuous-time
decentralized mirror descent algorithm that uses purely local gradient
information to converge to the global optimal solution. The algorithm enforces
consensus among agents using the idea of integral feedback. Recently, Sun and
Shahrampour (2020) studied the asymptotic convergence of this algorithm in the
case where the global function is strongly convex but the local functions are only convex.
Using control theory tools, in this work, we prove that the algorithm indeed
achieves (local) exponential convergence. We also provide a numerical
experiment on a real dataset that validates the convergence speed of the algorithm.
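To make the dynamics concrete, below is a minimal sketch: an Euler discretization of a gradient flow with proportional-integral (PI) consensus feedback, using the quadratic mirror map (so dual and primal variables coincide) and toy quadratic local losses on a ring graph. The losses, gains, and graph are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

# Euler-discretized sketch of continuous-time distributed mirror descent
# with integral (PI) consensus feedback. Local losses f_i(x) = 0.5*(x - b_i)^2
# and the quadratic mirror map psi(x) = 0.5*x^2 (so x_i = z_i) are
# illustrative assumptions, not the paper's exact setup.

np.random.seed(0)
n = 5                                  # number of agents
b = np.random.randn(n)                 # local minimizers; global optimum is b.mean()

# Graph Laplacian of a ring network
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A

z = np.zeros(n)                        # dual (mirror) variables
w = np.zeros(n)                        # integral feedback states
alpha, beta, dt = 1.0, 1.0, 1e-2       # proportional gain, integral gain, step size

for _ in range(20000):
    x = z                              # x_i = grad psi*(z_i); identity for quadratic psi
    grad = x - b                       # local gradients of f_i
    z_dot = -grad - alpha * (L @ z) - beta * w   # gradient flow + PI consensus
    w_dot = alpha * (L @ z)                      # integrate the consensus error
    z, w = z + dt * z_dot, w + dt * w_dot

print("agent estimates:", np.round(z, 4))
print("global optimum :", round(float(b.mean()), 4))
```

The integral state w is what removes the steady-state disagreement a purely proportional consensus term would leave: since the w-dynamics conserve their sum, any equilibrium must have all agents in consensus at the minimizer of the summed objective. With a non-quadratic mirror map, x = ∇ψ*(z) would be a genuine mirror step.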
Related papers
- Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold [59.73080197971106]
This paper presents a first-order conjugate optimization method that converges faster than the steepest descent method.
It aims to achieve global convergence over the Stiefel manifold.
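For orientation only, here is a hedged sketch of the two primitives any Riemannian first-order method on the Stiefel manifold relies on, tangent-space projection and a QR retraction, applied to a toy leading-eigenspace problem with plain gradient steps; it is not this paper's decentralized conjugate gradient method, and the problem and step size are assumptions.

```python
import numpy as np

def proj_tangent(X, G):
    """Project an ambient gradient G onto the tangent space of the
    Stiefel manifold {X : X^T X = I} at X (embedded-metric projection)."""
    sym = 0.5 * (X.T @ G + G.T @ X)
    return G - X @ sym

def qr_retract(X, xi):
    """QR-based retraction: map the step X + xi back onto the manifold."""
    Q, R = np.linalg.qr(X + xi)
    return Q * np.sign(np.diag(R))    # fix column signs (assumes nonzero diagonal)

# Toy problem: minimize f(X) = -trace(X^T M X), i.e., find the leading
# 2-dimensional eigenspace of a symmetric matrix M.
np.random.seed(1)
M = np.random.randn(8, 8)
M = M + M.T
X, _ = np.linalg.qr(np.random.randn(8, 2))
for _ in range(500):
    G = -2.0 * (M @ X)                             # Euclidean gradient of f
    X = qr_retract(X, -0.05 * proj_tangent(X, G))  # projected gradient step
print("feasibility ||X^T X - I|| =", np.linalg.norm(X.T @ X - np.eye(2)))
```

A conjugate gradient variant would additionally transport the previous search direction to the new tangent space before combining it with the projected gradient.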
arXiv Detail & Related papers (2023-08-21T08:02:16Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- An Algebraically Converging Stochastic Gradient Descent Algorithm for Global Optimization [14.336473214524663]
A key component of the algorithm is injected randomness whose magnitude depends on the value of the objective function.
We prove convergence of the algorithm at an algebraic rate under appropriate tuning in the parameter space.
We present several numerical examples to demonstrate the efficiency and robustness of the algorithm.
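Below is a sketch of one plausible reading of that component: the injected noise is scaled by the current objective value, so the iterate explores aggressively far from the optimum and settles near it. The paper's exact noise schedule and tuning may differ; the test function and constants are assumptions.

```python
import numpy as np

def f(x):
    # Toy multimodal objective with global minimum f = 0 at x = 0
    return x ** 2 + 2.0 * np.sin(3.0 * x) ** 2

def grad_f(x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2.0 * h)   # finite-difference gradient

# Gradient descent plus noise whose amplitude shrinks with the objective
# value: large noise far from the minimum helps escape local minima, and
# the noise vanishes as f(x) -> 0, letting the iterate settle.
np.random.seed(2)
x, lr, sigma = 3.0, 0.01, 0.8
for _ in range(50000):
    noise = sigma * np.sqrt(max(f(x), 0.0)) * np.random.randn()
    x = x - lr * grad_f(x) + np.sqrt(lr) * noise

print("final x = %.4f, f(x) = %.6f" % (x, f(x)))
```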
arXiv Detail & Related papers (2022-04-12T16:27:49Z)
- Distributed stochastic proximal algorithm with random reshuffling for non-smooth finite-sum optimization [28.862321453597918]
Non-smooth finite-sum minimization is a fundamental problem in machine learning.
This paper develops a distributed proximal-gradient algorithm with random reshuffling to solve the problem.
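A hedged, single-machine sketch of the proximal-gradient-with-random-reshuffling template follows, on an L1-regularized least-squares finite sum; the distributed/consensus machinery that is this paper's actual contribution is omitted, and the problem sizes and step size are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Finite sum: min_x (1/m) * sum_i 0.5*(a_i^T x - y_i)^2 + lam * ||x||_1.
# Random reshuffling: each epoch visits every component function exactly
# once, in a fresh random order, with a prox step after each gradient step.
np.random.seed(3)
m, d, lam = 200, 10, 0.05
A = np.random.randn(m, d)
x_true = np.zeros(d)
x_true[:3] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * np.random.randn(m)

x, lr = np.zeros(d), 0.01
for epoch in range(100):
    for i in np.random.permutation(m):            # random reshuffling
        g = (A[i] @ x - y[i]) * A[i]              # gradient of component i
        x = soft_threshold(x - lr * g, lr * lam)  # proximal-gradient step

print("recovered x:", np.round(x, 3))
```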
arXiv Detail & Related papers (2021-11-06T07:29:55Z)
- A Granular Sieving Algorithm for Deterministic Global Optimization [6.01919376499018]
A gradient-free deterministic method is developed to solve global optimization problems for Lipschitz continuous functions.
The method can be regarded as granular sieving with synchronous analysis in both the domain and range of the objective function.
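As a simplified, hedged rendering of the idea, the sketch below performs deterministic, gradient-free global minimization of a Lipschitz function by best-first interval splitting: an interval is sieved out once its Lipschitz lower bound shows it cannot contain a better point than the best value found. The one-dimensional setup and tolerance are assumptions; the paper's granular sieving analyzes domain and range jointly.

```python
import heapq
import math

def lipschitz_sieve(f, lo, hi, lip, tol=1e-6):
    """Gradient-free deterministic global minimization of a function with
    Lipschitz constant lip on [lo, hi] via best-first interval sieving."""
    best_x, best_f = lo, f(lo)
    mid = 0.5 * (lo + hi)
    # heap entries: (Lipschitz lower bound, left, right, f(midpoint))
    heap = [(f(mid) - lip * 0.5 * (hi - lo), lo, hi, f(mid))]
    while heap:
        lb, a, b, fm = heapq.heappop(heap)
        m = 0.5 * (a + b)
        if fm < best_f:
            best_x, best_f = m, fm
        if lb >= best_f - tol:            # sieve: no better point can be inside
            continue
        for a2, b2 in ((a, m), (m, b)):   # split the interval and re-bound
            m2 = 0.5 * (a2 + b2)
            f2 = f(m2)
            heapq.heappush(heap, (f2 - lip * 0.5 * (b2 - a2), a2, b2, f2))
    return best_x, best_f

# |d/dx (sin x + sin 3x)| <= 4, so lip = 4 is a valid Lipschitz constant.
x_star, f_star = lipschitz_sieve(lambda x: math.sin(x) + math.sin(3.0 * x),
                                 0.0, 6.0, lip=4.0)
print("x* ~ %.4f, f* ~ %.4f" % (x_star, f_star))
```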
arXiv Detail & Related papers (2021-07-14T10:03:03Z)
- Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z)
- Distributed Mirror Descent with Integral Feedback: Asymptotic Convergence Analysis of Continuous-time Dynamics [11.498089180181365]
This work addresses distributed optimization, where a network of agents wants to minimize a global strongly convex objective function.
We propose a continuous-time distributed mirror descent that uses purely local information to converge to the global optimum.
arXiv Detail & Related papers (2020-09-14T21:11:42Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate substantial gains over solving the full problem, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z)
- A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features [52.856801164425086]
We study multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) coupling function.
arXiv Detail & Related papers (2020-06-15T19:40:24Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
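For a concrete instance of the constant-step-size updates being analyzed, below is a minimal tabular Q-learning loop on a toy chain MDP; it only runs the update rule itself and does not reproduce the paper's distributional analysis. The MDP and hyperparameters are assumptions.

```python
import numpy as np

# Tabular Q-learning with a constant step size on a 5-state chain:
# action 0 moves left, action 1 moves right, reward 1 on reaching the
# right end (which resets the episode).
np.random.seed(4)
n_states, gamma, alpha, eps = 5, 0.9, 0.1, 0.2
Q = np.zeros((n_states, 2))

s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    a = np.random.randint(2) if np.random.rand() < eps else int(Q[s].argmax())
    s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    # constant-step-size Q-learning update
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = 0 if s2 == n_states - 1 else s2   # reset the episode at the goal

print(np.round(Q, 2))
```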
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.