Majorization Minimization Methods for Distributed Pose Graph
Optimization with Convergence Guarantees
- URL: http://arxiv.org/abs/2003.05353v3
- Date: Tue, 4 May 2021 14:18:39 GMT
- Title: Majorization Minimization Methods for Distributed Pose Graph
Optimization with Convergence Guarantees
- Authors: Taosha Fan and Todd Murphey
- Abstract summary: We show that our proposed methods are guaranteed to converge to first-order critical points under mild conditions.
Since our proposed methods rely on a proximal operator of distributed PGO, the convergence rate can be significantly accelerated.
The efficacy of this work is validated through applications on a number of 2D and 3D SLAM datasets.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of distributed pose graph optimization
(PGO) that has extensive applications in multi-robot simultaneous localization
and mapping (SLAM). We propose majorization minimization methods to distributed
PGO and show that our proposed methods are guaranteed to converge to
first-order critical points under mild conditions. Furthermore, since our
proposed methods rely on a proximal operator of distributed PGO, the convergence
rate can be significantly accelerated with Nesterov's method, and more
importantly, the acceleration induces no compromise of theoretical guarantees.
In addition, we also present accelerated majorization minimization methods to
the distributed chordal initialization that have quadratic convergence, which
can be used to compute an initial guess for distributed PGO. The efficacy of
this work is validated through applications on a number of 2D and 3D SLAM
datasets and comparisons with existing state-of-the-art methods, which
indicates that our proposed methods have faster convergence and result in
better solutions to distributed PGO.
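For context, the generic majorization minimization template that the abstract builds on can be stated as follows (a standard sketch with illustrative notation; the paper defines its own surrogate for distributed PGO): at iterate $x^{(k)}$, choose a surrogate $G$ satisfying
\[
G(x \mid x^{(k)}) \geq F(x) \ \text{for all } x, \qquad G(x^{(k)} \mid x^{(k)}) = F(x^{(k)}),
\]
and set $x^{(k+1)} = \arg\min_x G(x \mid x^{(k)})$, which guarantees monotone descent:
\[
F(x^{(k+1)}) \leq G(x^{(k+1)} \mid x^{(k)}) \leq G(x^{(k)} \mid x^{(k)}) = F(x^{(k)}).
\]
In the distributed setting the surrogate is typically chosen so that its minimization decouples across robots, which is what makes the scheme distributable.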
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Decentralized Entropic Optimal Transport for Distributed Distribution Comparison [28.122302344024387]
We propose a novel decentralized entropic optimal transport (DEOT) method, which provides a communication-efficient and privacy-preserving solution to this problem.
In particular, we design a mini-batch randomized block-coordinate descent scheme to optimize the DEOT distance in its dual form.
We show that the proposed MRBCD scheme and kernel approximation method also apply to entropic Gromov-Wasserstein distance.
arXiv Detail & Related papers (2023-01-28T02:47:42Z)
- Distributed and Stochastic Optimization Methods with Gradient Compression and Local Steps [0.0]
We propose theoretical frameworks for the analysis of distributed methods with error compensation and local updates.
We develop more than 20 new optimization methods, including the first linearly converging Error-Compensated and the first distributed Local-SGD methods.
arXiv Detail & Related papers (2021-12-20T16:12:54Z)
- Acceleration Methods [57.202881673406324]
We first use quadratic optimization problems to introduce two key families of acceleration methods.
We discuss momentum methods in detail, starting with the seminal work of Nesterov.
We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates.
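As a concrete reference for the momentum methods surveyed here, the standard Nesterov recursion for an $L$-smooth convex objective $f$ (a textbook sketch, not quoted from the monograph itself) reads
\[
x_{k+1} = y_k - \tfrac{1}{L}\nabla f(y_k), \qquad
y_{k+1} = x_{k+1} + \frac{t_k - 1}{t_{k+1}}\,(x_{k+1} - x_k), \qquad
t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}, \ t_1 = 1,
\]
which attains the accelerated $O(1/k^2)$ rate versus $O(1/k)$ for plain gradient descent; restart schemes periodically reset the momentum sequence to recover fast rates under strong convexity.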
arXiv Detail & Related papers (2021-01-23T17:58:25Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
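For reference, the classical (non-accelerated) augmented Lagrangian for a linearly constrained problem $\min_x f(x)$ subject to $Ax = b$ is
\[
\mathcal{L}_\rho(x, \lambda) = f(x) + \langle \lambda, Ax - b \rangle + \frac{\rho}{2}\|Ax - b\|^2,
\]
where each outer iteration approximately minimizes $\mathcal{L}_\rho$ over $x$ and then performs the dual ascent step $\lambda \leftarrow \lambda + \rho (Ax - b)$; this is a textbook sketch for orientation, not the decentralized accelerated formulation analyzed in the paper.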
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- FedSplit: An algorithmic framework for fast federated optimization [40.42352500741025]
We introduce FedSplit, a class of algorithms for solving distributed convex minimization with additive structure.
Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities.
arXiv Detail & Related papers (2020-05-11T16:30:09Z)
- On Distributed Non-convex Optimization: Projected Subgradient Method For Weakly Convex Problems in Networks [13.385373310554327]
The Moreau subgradient method converges linearly for sharp problems in machine learning.
A distributed implementation of the subgradient method with a theoretical guarantee is proposed.
arXiv Detail & Related papers (2020-04-28T01:01:49Z)
- Detached Error Feedback for Distributed SGD with Random Sparsification [98.98236187442258]
Communication bottleneck has been a critical problem in large-scale deep learning.
We propose a new distributed error feedback (DEF) algorithm, which shows better convergence than error feedback for non-convex distributed problems.
We also propose DEFA to accelerate the generalization of DEF, which shows better bounds than DEF.
arXiv Detail & Related papers (2020-04-11T03:50:59Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
arXiv Detail & Related papers (2020-03-27T05:13:29Z) - Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.