Fast Coherent Point Drift
- URL: http://arxiv.org/abs/2006.06281v1
- Date: Thu, 11 Jun 2020 09:35:23 GMT
- Title: Fast Coherent Point Drift
- Authors: Xiang-Wei Feng, Da-Zheng Feng, Yun Zhu
- Abstract summary: Coherent point drift (CPD) is a classical method for nonrigid point set registration.
By introducing a simple correspondence constraint, we develop a fast implementation of CPD.
Experimental results on 3D point cloud data show that our method significantly reduces the computational burden of the registration process.
- Score: 4.369046007546103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonrigid point set registration is widely applied in computer
vision and pattern recognition tasks. Coherent point drift (CPD) is a classical
method for nonrigid point set registration. However, to solve for the spatial
transformation, CPD has to invert an M×M matrix in each iteration, at a time
complexity of O(M^3). By introducing a simple correspondence constraint, we
develop a fast implementation of CPD. The main advantage of our method is that
it avoids the matrix-inverse operation. Before the iterations begin, our method
computes the eigenvalue decomposition of an M×M matrix once. Each subsequent
iteration then only updates a diagonal matrix, at linear computational
complexity, and performs matrix multiplications at a time complexity of
approximately O(M^2). Moreover, our method can be further accelerated by
low-rank matrix approximation. Experimental results on 3D point cloud data show
that our method significantly reduces the computational burden of the
registration process while achieving accuracy comparable to CPD.
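To make the mechanism concrete, here is a minimal NumPy sketch (not the authors' code; function names are mine): the Gaussian kernel G is eigendecomposed once before iterating, after which each iteration reduces to a diagonal update plus matrix multiplications. Replacing d(P1) with a scalar multiple of the identity is a hypothetical simplification standing in for the paper's correspondence constraint.

```python
import numpy as np

def gaussian_kernel(Y, beta):
    """M x M kernel G_ij = exp(-||y_i - y_j||^2 / (2 beta^2))."""
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * beta ** 2))

def precompute(Y, beta):
    """One-time O(M^3) eigendecomposition, done before the iterations."""
    lam, Q = np.linalg.eigh(gaussian_kernel(Y, beta))
    return lam, Q

def fast_transform_update(Q, lam, PX, P1, Y, lam_reg, sigma2):
    """Per-iteration update of the coefficient matrix W.

    Standard CPD solves (d(P1) G + lam_reg*sigma2*I) W = PX - d(P1) Y,
    an O(M^3) linear solve every iteration.  Approximating d(P1) by
    c*I (a hypothetical stand-in for the paper's correspondence
    constraint) turns the solve into a diagonal update plus O(M^2 D)
    matrix multiplications:
        W = Q diag(1 / (c*lam + lam_reg*sigma2)) Q^T (PX - c*Y).
    """
    c = P1.mean()
    d = 1.0 / (c * lam + lam_reg * sigma2)       # diagonal update, O(M)
    W = Q @ (d[:, None] * (Q.T @ (PX - c * Y)))  # O(M^2 D) multiplies
    return W                                     # transformed set: Y + G @ W
```

The low-rank acceleration mentioned in the abstract corresponds to keeping only the largest eigenpairs of G, which shrinks Q and lam further.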
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
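For context, the sketch below shows one textbook trust-region iteration (a Cauchy-point step plus the standard accept/reject ratio test); names are my own. The paper's subject is how such updates behave when the f, grad, and hess oracles are only inexact.

```python
import numpy as np

def trust_region_step(f, grad, hess, x, delta, eta=0.1):
    """One textbook TR iteration with a Cauchy-point step: minimize the
    quadratic model m(s) = f(x) + g.s + 0.5*s'Hs along -g within the
    ball ||s|| <= delta, then accept/reject via the reduction ratio."""
    g, H = grad(x), hess(x)
    gn = np.linalg.norm(g)
    if gn == 0.0:
        return x, delta                        # already stationary
    gHg = g @ H @ g
    tau = 1.0 if gHg <= 0 else min(gn**3 / (delta * gHg), 1.0)
    s = -tau * (delta / gn) * g                # Cauchy point
    pred = -(g @ s + 0.5 * s @ H @ s)          # predicted decrease
    rho = (f(x) - f(x + s)) / pred             # actual / predicted
    if rho >= eta:
        x = x + s                              # accept the step
    if rho < 0.25:
        delta *= 0.5                           # shrink the region
    elif rho > 0.75 and tau == 1.0:
        delta *= 2.0                           # step hit the boundary: expand
    return x, delta
```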
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the explicit computation of the covariance matrices that arise during inference.
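As a hedged illustration of that idea (not the paper's algorithm), the sketch below obtains a posterior mean in a linear-Gaussian model without ever materializing the covariance matrix, by running conjugate gradients on matrix-vector products alone; Phi, alpha, and the function name are my own.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def posterior_mean_matrix_free(Phi, y, alpha):
    """Solve (Phi^T Phi + alpha*I) mu = Phi^T y by conjugate gradients,
    touching Phi only through matvecs, so the D x D covariance
    (Phi^T Phi + alpha*I)^{-1} is never formed explicitly."""
    n, d = Phi.shape
    matvec = lambda v: Phi.T @ (Phi @ v) + alpha * v
    A = LinearOperator((d, d), matvec=matvec)
    mu, info = cg(A, Phi.T @ y)
    assert info == 0, "CG did not converge"
    return mu
```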
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Discrete Morse Sandwich: Fast Computation of Persistence Diagrams for
Scalar Data -- An Algorithm and A Benchmark [8.648433479399857]
This paper introduces an efficient algorithm for persistence diagram computation, given an input piecewise linear scalar field f defined on a d-dimensional simplicial complex K.
We express this algorithm within the setting of discrete Morse theory, which considerably reduces the number of input simplices to consider.
We also introduce a stratification approach to the problem, which we call "sandwiching".
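The paper's discrete-Morse machinery is beyond a short sketch, but the object it computes can be made concrete with a far simpler baseline: 0-dimensional persistence of a 1-D scalar field via union-find and the elder rule. This is illustrative only and is not the paper's algorithm.

```python
import numpy as np

def persistence_0d(f):
    """0-dimensional persistence pairs of a 1-D scalar field f under the
    sublevel-set filtration, computed with union-find: components are
    born at local minima and die when merged into an older component."""
    order = np.argsort(f)
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    for i in order:                            # sweep values bottom-up
        parent[i], birth[i] = i, f[i]
        for j in (i - 1, i + 1):               # neighbours already alive?
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:                   # elder rule: younger dies
                    old, yng = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                    pairs.append((birth[yng], f[i]))
                    parent[yng] = old
    pairs.append((f[order[0]], np.inf))        # the oldest component survives
    return pairs
```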
arXiv Detail & Related papers (2022-06-27T10:54:24Z)
- Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method is to use the Matrix Taylor Polynomial (MTP), and the other is to use Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
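The MTP variant lends itself to a direct sketch: normalize the matrix, expand (I - Z)^{1/2} as a truncated binomial series, and rescale. The following NumPy version for SPD inputs is illustrative, not the paper's differentiable implementation; the truncation degree is a free parameter.

```python
import numpy as np
from scipy.special import binom

def sqrtm_mtp(A, degree=8):
    """Matrix square root of an SPD matrix A via a truncated Matrix
    Taylor Polynomial: with s = ||A||_F and Z = I - A/s,
    sqrt(A) = sqrt(s) * sum_k binom(1/2, k) * (-Z)^k, which converges
    because the eigenvalues of Z lie in [0, 1) after normalization."""
    s = np.linalg.norm(A, "fro")
    I = np.eye(A.shape[0])
    Z = I - A / s
    S, Zk = I.copy(), I.copy()                 # k = 0 term and running power
    for k in range(1, degree + 1):
        Zk = Zk @ (-Z)                         # (-Z)^k
        S = S + binom(0.5, k) * Zk
    return np.sqrt(s) * S
```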
arXiv Detail & Related papers (2022-01-29T10:00:35Z)
- Fast Differentiable Matrix Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root.
For the forward propagation, one method is to use the Matrix Taylor Polynomial (MTP).
The other method is to use Matrix Padé Approximants (MPA).
arXiv Detail & Related papers (2022-01-21T12:18:06Z)
- Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent deep-learning-based approaches have achieved sub-second runtimes for diffeomorphic image registration (DiffIR).
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
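The composition step can be illustrated in isolation: for displacement fields u1 and u2, the functional composition is (phi2 ∘ phi1)(x) = x + u1(x) + u2(x + u1(x)). Below is a minimal 2-D sketch with my own names and a simple linear-interpolation choice; it is not the paper's ADMM scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(u1, u2):
    """Functionally compose two 2-D displacement fields of shape
    (2, H, W): (phi2 o phi1)(x) = x + u1(x) + u2(x + u1(x)),
    evaluating u2 off-grid by linear interpolation."""
    _, H, W = u1.shape
    grid = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"))
    warped = grid + u1                          # x + u1(x)
    u2_warped = np.stack([
        map_coordinates(u2[c], warped, order=1, mode="nearest")
        for c in range(2)
    ])                                          # u2 evaluated at x + u1(x)
    return u1 + u2_warped
```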
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
- Momentum-inspired Low-Rank Coordinate Descent for Diagonally Constrained
SDPs [12.7944665592057]
We present a novel, practical, and provable approach for solving diagonally constrained semidefinite programming (SDP) problems at scale using accelerated non-convex programming.
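For intuition, here is a sketch of plain low-rank coordinate descent for a diagonally constrained SDP, using the Burer-Monteiro factorization X = V^T V with unit-norm columns (the classic "mixing method" style update); the paper's momentum-style acceleration is not included, and all names are my own.

```python
import numpy as np

def low_rank_cd_sdp(C, rank, iters=100, seed=0):
    """Approximately solve  min <C, X>  s.t.  diag(X) = 1, X >= 0
    via X = V^T V with unit-norm columns v_i.  Each coordinate step
    minimizes the objective exactly over one column on the sphere."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.standard_normal((rank, n))
    V /= np.linalg.norm(V, axis=0)             # enforce diag(X) = 1
    for _ in range(iters):
        for i in range(n):
            g = V @ C[i] - C[i, i] * V[:, i]   # sum_{j != i} C_ij v_j
            norm = np.linalg.norm(g)
            if norm > 0:
                V[:, i] = -g / norm            # exact minimizer over ||v||=1
    return V                                   # recover X as V.T @ V
```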
arXiv Detail & Related papers (2021-06-16T13:35:40Z)
- Fast and Accurate Pseudoinverse with Sparse Matrix Reordering and
Incremental Approach [4.710916891482697]
The pseudoinverse is a generalization of the matrix inverse and has been extensively utilized in machine learning.
FastPI is a novel incremental singular value decomposition (SVD)-based pseudoinverse method for sparse matrices.
We show that FastPI computes the pseudoinverse faster than other approximate methods without loss of accuracy.
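For contrast, one simple approximate method of the kind FastPI is compared against can be sketched with a truncated sparse SVD (the rank k is a free parameter; this is not FastPI's incremental, reordering-based computation):

```python
import numpy as np
from scipy.sparse.linalg import svds

def pinv_truncated(A, k):
    """Approximate pseudoinverse from the top-k SVD of a (sparse)
    matrix A:  A+ ~ V diag(1/s) U^T, requiring k < min(A.shape)."""
    U, s, Vt = svds(A, k=k)
    return Vt.T @ np.diag(1.0 / s) @ U.T
```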
arXiv Detail & Related papers (2020-11-09T07:47:10Z)
- Fast and Robust Iterative Closest Point [32.42799285301607]
Iterative Closest Point (ICP) is a fundamental technique for rigid registration between two point sets.
Recent work such as Sparse ICP achieves robustness via sparsity optimization at the cost of computational speed.
We show that the classical point-to-point ICP can be treated as a majorization-minimization (MM) algorithm, and propose an Anderson acceleration approach to speed up its convergence.
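For reference, the classical point-to-point ICP loop that serves as the MM baseline can be sketched in a few lines (nearest-neighbour correspondences plus a closed-form Kabsch update); the Anderson acceleration the paper adds is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(P, Q, iters=30):
    """Rigidly register point set P (N x 3) onto Q (M x 3): alternate
    nearest-neighbour matching with the closed-form best rotation."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(Q)
    for _ in range(iters):
        _, idx = tree.query(P @ R.T + t)       # correspondences
        X, Y = P, Q[idx]
        mx, my = X.mean(axis=0), Y.mean(axis=0)
        U, _, Vt = np.linalg.svd((X - mx).T @ (Y - my))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                     # Kabsch: best rotation
        t = my - mx @ R.T                      # best translation
    return R, t                                # aligned set: P @ R.T + t
```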
arXiv Detail & Related papers (2020-07-15T11:32:53Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation
Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
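A toy version of the Jacobi variant is easy to write down: treat the layer outputs as unknowns of the system h_i = f_i(h_{i-1}) and update all of them in parallel from the previous sweep's values. The sketch assumes shape-preserving layers, and the names are mine.

```python
import numpy as np

def jacobi_feedforward(layers, x):
    """Evaluate the chain h_i = f_i(h_{i-1}) as a fixed point via
    Jacobi iteration: every layer output is recomputed in parallel
    from the previous sweep.  After at most len(layers) sweeps the
    result equals the sequential forward pass exactly, so with enough
    parallel hardware each sweep costs one layer's latency."""
    L = len(layers)
    h = [np.zeros_like(x) for _ in range(L)]   # arbitrary initialization
    for _ in range(L):                         # worst case: L sweeps
        prev = [x] + h[:-1]                    # inputs from last sweep
        h = [f(p) for f, p in zip(layers, prev)]   # parallelizable map
    return h[-1]
```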
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.