BO-ICP: Initialization of Iterative Closest Point Based on Bayesian
Optimization
- URL: http://arxiv.org/abs/2304.13114v1
- Date: Tue, 25 Apr 2023 19:38:53 GMT
- Title: BO-ICP: Initialization of Iterative Closest Point Based on Bayesian
Optimization
- Authors: Harel Biggie, Andrew Beathard, Christoffer Heckman
- Abstract summary: We present a new method based on Bayesian optimization for finding the critical initial ICP transform.
We show that our approach outperforms state-of-the-art methods when given similar computation time.
It is compatible with other improvements to ICP, as it focuses solely on the selection of an initial transform.
- Score: 3.248584983235657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Typical algorithms for point cloud registration such as Iterative Closest
Point (ICP) require a favorable initial transform estimate between two point
clouds in order to perform a successful registration. State-of-the-art methods
for choosing this starting condition rely on stochastic sampling or global
optimization techniques such as branch and bound. In this work, we present a
new method based on Bayesian optimization for finding the critical initial ICP
transform. We provide three different configurations for our method, which
highlight the algorithm's versatility in both finding rapid results and
refining them in situations where more runtime is available, such as offline
map building. Experiments are run on popular data sets and we show that our
approach outperforms state-of-the-art methods when given similar computation
time. Furthermore, it is compatible with other improvements to ICP, as it
focuses solely on the selection of an initial transform, a starting point for
all ICP-based methods.
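The listing above is abstract-only, so the following is only an illustration of the core idea, not the authors' implementation: Bayesian optimization with a Gaussian-process surrogate and expected improvement, searching over a single rotation angle and scoring each candidate by a nearest-neighbour ICP-style cost. The toy clouds, RBF kernel, length scale, and iteration counts are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy registration problem: the target cloud is the source rotated by a hidden angle.
src = rng.uniform(-1.0, 1.0, size=(30, 2))
true_angle = 0.9

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

tgt = src @ rot(true_angle).T
tree = cKDTree(tgt)

def icp_cost(angle):
    """Mean nearest-neighbour distance after rotating the source by `angle`."""
    d, _ = tree.query(src @ rot(angle).T)
    return d.mean()

def gp_posterior(X, y, Xs, length=0.3, noise=1e-4):
    """Zero-mean GP regression with an RBF kernel, evaluated at candidates Xs."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K_inv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    Ks = k(X, Xs)
    mu = Ks.T @ K_inv @ y
    var = 1.0 - np.einsum("ij,ji->i", Ks.T @ K_inv, Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Bayesian-optimization loop: evaluate the angle maximizing expected improvement.
cand = np.linspace(-np.pi, np.pi, 400)
X = rng.uniform(-np.pi, np.pi, 5)
y = np.array([icp_cost(a) for a in X])
for _ in range(40):
    mu, sigma = gp_posterior(X, y, cand)
    z = (y.min() - mu) / sigma
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.append(X, x_next)
    y = np.append(y, icp_cost(x_next))

best_angle = X[np.argmin(y)]
print(f"recovered {best_angle:.3f} rad (true {true_angle} rad)")
```

The recovered angle would then seed a standard ICP refinement; the paper searches the full initial transform rather than one angle.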
Related papers
- POLAFFINI: Efficient feature-based polyaffine initialization for improved non-linear image registration [2.6821469866843435]
This paper presents an efficient feature-based approach to initialize non-linear image registration.
A good estimate of the initial transformation is essential, both for traditional iterative algorithms and for recent one-shot deep learning (DL)-based alternatives.
arXiv Detail & Related papers (2024-07-04T13:36:29Z)
- Per-run Algorithm Selection with Warm-starting using Trajectory-based Features [5.073358743426584]
Per-instance algorithm selection seeks to recommend, for a given problem instance, one or several suitable algorithms.
We propose an online algorithm selection scheme which we coin per-run algorithm selection.
We show that our approach outperforms static per-instance algorithm selection.
arXiv Detail & Related papers (2022-04-20T14:30:42Z)
- On Second-order Optimization Methods for Federated Learning [59.787198516188425]
We evaluate the performance of several second-order distributed methods with local steps in the federated learning setting.
We propose a novel variant that uses second-order local information for updates and a global line search to counteract the resulting local specificity.
arXiv Detail & Related papers (2021-09-06T12:04:08Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Data-driven Weight Initialization with Sylvester Solvers [72.11163104763071]
We propose a data-driven scheme to initialize the parameters of a deep neural network.
We show that our proposed method is especially effective in few-shot and fine-tuning settings.
arXiv Detail & Related papers (2021-05-02T07:33:16Z)
- Solving Inverse Problems by Joint Posterior Maximization with Autoencoding Prior [0.0]
We address the problem of solving ill-posed inverse problems in imaging where the prior is a variational autoencoder (VAE).
We show that our technique reliably optimizes the proposed joint posterior objective.
Results also show that our approach provides more robust estimates.
arXiv Detail & Related papers (2021-03-02T11:18:34Z)
- Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
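The entry above mentions optimal transport for shape correspondence. As a minimal sketch of that general ingredient only (not the Deep Shells method itself), entropy-regularized Sinkhorn iterations produce a soft matching between two point sets; the toy points and regularization strength are assumptions.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, iters=200):
    """Entropy-regularized optimal transport via Sinkhorn scaling.

    Returns a transport plan P whose entry P[i, j] is the soft
    correspondence weight between source point i and target point j.
    """
    n, m = cost.shape
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    K = np.exp(-cost / reg)                 # Gibbs kernel
    v = np.ones(m) / m
    for _ in range(iters):
        u = a / (K @ v)                     # match row marginals
        v = b / (K.T @ u)                   # match column marginals
    return u[:, None] * K * v[None, :]

# Toy example: the target is a permuted, slightly perturbed copy of the source,
# so the plan should concentrate on the reversing permutation.
rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = X[::-1] + 0.01 * rng.normal(size=X.shape)
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
P = sinkhorn(C)
print(P.argmax(axis=1))   # hardened matching per source point
```

Taking the row-wise argmax hardens the soft plan into a one-to-one matching, here the reversing permutation.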
arXiv Detail & Related papers (2020-10-28T22:24:07Z)
- Fast and Robust Iterative Closest Point [32.42799285301607]
Iterative Closest Point (ICP) is a fundamental technique for rigid registration between two point sets.
Recent work such as Sparse ICP achieves robustness via sparsity optimization at the cost of computational speed.
We show that the classical point-to-point ICP can be treated as a majorization-minimization (MM) algorithm, and propose an Anderson acceleration approach to speed up its convergence.
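The summary above names Anderson acceleration as the speed-up for the MM fixed-point view of ICP. Below is a generic sketch of type-II Anderson acceleration on a simple scalar fixed-point problem, not the paper's ICP solver; the history depth and the cosine example are assumptions for illustration.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=3, tol=1e-10, max_iter=100):
    """Type-II Anderson acceleration for the fixed-point iteration x <- g(x).

    Keeps a short history of iterates and residuals, solves a small
    least-squares problem for mixing weights, and extrapolates.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    xs, fs = [x], [g(x) - x]          # iterates and residuals f = g(x) - x
    for k in range(max_iter):
        if np.linalg.norm(fs[-1]) < tol:
            break
        mk = min(m, len(xs) - 1)
        if mk == 0:
            x_new = x + fs[-1]        # plain Picard step to start the history
        else:
            # Differences of the most recent residuals and iterates.
            dF = np.stack([fs[-i] - fs[-i - 1] for i in range(1, mk + 1)], axis=1)
            dX = np.stack([xs[-i] - xs[-i - 1] for i in range(1, mk + 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            x_new = x + fs[-1] - (dX + dF) @ gamma
        x = x_new
        xs.append(x)
        fs.append(g(x) - x)
    return x, k + 1

# Classic demo: the fixed point of cos (the Dottie number, ~0.739085).
x_star, iters = anderson_fixed_point(np.cos, 0.0)
print(x_star, iters)
```

Plain Picard iteration on cos needs roughly 50+ iterations for this tolerance; the accelerated version converges in far fewer.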
arXiv Detail & Related papers (2020-07-15T11:32:53Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
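The "computation then aggregation" (CTA) model can be sketched in a few lines: each client runs local SGD on its own data, then the server averages the local models. This is a toy 1-D least-squares illustration of FedAvg-style CTA, not the FedPD algorithm; the data, step counts, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = 3.0

# Four clients, each holding private samples of y = w * x + noise.
clients = []
for _ in range(4):
    x = rng.normal(size=50)
    y = true_w * x + 0.1 * rng.normal(size=50)
    clients.append((x, y))

def local_sgd(w, x, y, steps=10, lr=0.05):
    """Computation phase: a few SGD steps on this client's squared loss."""
    for _ in range(steps):
        i = rng.integers(len(x))
        grad = 2.0 * (w * x[i] - y[i]) * x[i]
        w -= lr * grad
    return w

w = 0.0
for _ in range(30):                                  # communication rounds
    local = [local_sgd(w, x, y) for x, y in clients]  # computation
    w = float(np.mean(local))                         # aggregation

print(w)   # close to true_w = 3.0
```
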
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
- FedSplit: An algorithmic framework for fast federated optimization [40.42352500741025]
We introduce FedSplit, a class of algorithms for solving distributed convex minimization with additive structure.
Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities.
arXiv Detail & Related papers (2020-05-11T16:30:09Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.