Deep Point-to-Plane Registration by Efficient Backpropagation for Error
Minimizing Function
- URL: http://arxiv.org/abs/2207.06661v1
- Date: Thu, 14 Jul 2022 05:18:20 GMT
- Title: Deep Point-to-Plane Registration by Efficient Backpropagation for Error
Minimizing Function
- Authors: Tatsuya Yatagawa and Yutaka Ohtake and Hiromasa Suzuki
- Abstract summary: Traditional algorithms of point set registration minimizing point-to-plane distances often achieve a better estimation of rigid transformation than those minimizing point-to-point distances.
Recent deep-learning-based methods, however, minimize point-to-point distances.
This paper proposes the first deep-learning-based approach to point-to-plane registration.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional algorithms of point set registration minimizing point-to-plane
distances often achieve a better estimation of rigid transformation than those
minimizing point-to-point distances. Nevertheless, recent deep-learning-based
methods minimize the point-to-point distances. In contrast to these methods,
this paper proposes the first deep-learning-based approach to point-to-plane
registration. A challenging part of this problem is that a typical solution for
point-to-plane registration requires an iterative process of accumulating small
transformations obtained by minimizing a linearized energy function. The
iteration significantly increases the size of the computation graph needed for
backpropagation and can slow down both forward and backward network
evaluations. To solve this problem, we consider the estimated rigid
transformation as a function of input point clouds and derive its analytic
gradients using the implicit function theorem. The analytic gradient that we
introduce is independent of how the error minimizing function (i.e., the rigid
transformation) is obtained, thus allowing us to calculate both the rigid
transformation and its gradient efficiently. We implement the proposed
point-to-plane registration module over several previous methods that minimize
point-to-point distances and demonstrate that the extensions outperform the
base methods, even for point clouds with noise and low-quality point normals
estimated from local point distributions.
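The "linearized energy function" mentioned in the abstract refers to the standard small-angle linearization used in point-to-plane ICP. The sketch below is a generic single Gauss-Newton-style step under that linearization, not the authors' implementation; the correspondences `Q` and normals `N` are assumed to be given:

```python
import numpy as np

def point_to_plane_step(P, Q, N):
    """One linearized point-to-plane step.

    P: (n, 3) source points; Q: (n, 3) matched target points;
    N: (n, 3) unit normals at the target points.
    Minimizes sum_i ((R p_i + t - q_i) . n_i)^2 under the
    small-angle approximation R ~ I + [w]x, then re-orthonormalizes.
    """
    # Jacobian rows: residual_i = (p_i x n_i) . w + n_i . t + (p_i - q_i) . n_i
    A = np.hstack([np.cross(P, N), N])           # (n, 6)
    b = -np.einsum('ij,ij->i', P - Q, N)         # negated residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)    # x = [w; t]
    w, t = x[:3], x[3:]
    # Recover a proper rotation from w via Rodrigues' formula
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3), t
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return R, t
```

Iterating this step and accumulating the small transformations is what inflates the computation graph; the paper's contribution is to differentiate the minimizer directly. Since the minimizer $x^*$ of the energy $E(x, P)$ satisfies $\partial E / \partial x = 0$, the implicit function theorem gives $\mathrm{d}x^*/\mathrm{d}P = -(\partial^2 E/\partial x^2)^{-1}\,\partial^2 E/\partial x\,\partial P$, so the gradient never has to unroll the iteration.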
Related papers
- Efficient Low-rank Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization [8.879403568685499]
We introduce an adaptive updating strategy for the smoothing parameters.
This behavior transforms the algorithm into one that effectively solves the original low-rank problem after a few iterations.
We prove the global convergence of the proposed algorithm, guaranteeing that every limit point is a critical point.
arXiv Detail & Related papers (2024-06-22T02:37:13Z) - SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid Registration [76.40993825836222]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration.
The proposed method greatly improves the accuracy of non-rigid registration problems and maintains relatively high solution efficiency.
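The symmetrized point-to-plane distance used by SPARE is not spelled out in this summary; one common symmetrization (in the style of Rusinkiewicz's symmetric objective) projects the displacement onto the sum of the normals from both surfaces. The snippet below is an illustrative sketch of that idea, not SPARE's exact formulation:

```python
import numpy as np

def symmetric_point_to_plane(P, Q, Np, Nq):
    """Symmetrized point-to-plane residuals.

    P, Q: (n, 3) corresponding points; Np, Nq: (n, 3) unit normals
    at the source and target points. Each residual projects the
    displacement p - q onto the combined normal n_p + n_q, so both
    surfaces' local planes constrain the alignment.
    """
    return np.einsum('ij,ij->i', P - Q, Np + Nq)
```

Compared with the one-sided distance, the symmetrized residual vanishes whenever the displacement is tangent to both local planes, which tends to widen the convergence basin.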
arXiv Detail & Related papers (2024-05-30T15:55:04Z) - Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method that meets these requirements by minimizing the objective function of the RPM algorithm.
A branch-and-bound (BnB) algorithm is devised, which solely branches over the transformation parameters, thereby boosting the convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers, compared with prevailing state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-14T13:28:57Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametricized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - How to escape sharp minima with random perturbations [48.095392390925745]
We study the notion of flat minima and the complexity of finding them.
For general cost functions, we discuss a gradient-based algorithm that finds an approximate flat local minimum efficiently.
For the setting where the cost function is an empirical risk over training data, we present a faster algorithm that is inspired by a recently proposed practical algorithm called sharpness-aware minimization.
arXiv Detail & Related papers (2023-05-25T02:12:33Z) - Min-Max Optimization Made Simple: Approximating the Proximal Point
Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated up to accuracy.
arXiv Detail & Related papers (2023-01-10T12:18:47Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control
Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - Resolving learning rates adaptively by locating Stochastic Non-Negative
Associated Gradient Projection Points using line searches [0.0]
Learning rates in neural network training are currently determined a priori, before training, using expensive manual or automated tuning.
This study proposes gradient-only line searches to resolve the learning rate for neural network training algorithms.
arXiv Detail & Related papers (2020-01-15T03:08:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.