Fast and Robust Non-Rigid Registration Using Accelerated
Majorization-Minimization
- URL: http://arxiv.org/abs/2206.03410v2
- Date: Wed, 8 Jun 2022 15:54:09 GMT
- Title: Fast and Robust Non-Rigid Registration Using Accelerated
Majorization-Minimization
- Authors: Yuxin Yao and Bailin Deng and Weiwei Xu and Juyong Zhang
- Abstract summary: Non-rigid registration, which deforms a source shape in a non-rigid way to align with a target shape, is a classical problem in computer vision.
Existing methods typically adopt the $\ell_p$ type robust norm to measure the alignment error and regularize the smoothness of deformation.
We propose a formulation for robust non-rigid registration based on a globally smooth robust norm for alignment and regularization.
- Score: 35.66014845211251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-rigid registration, which deforms a source shape in a non-rigid way to
align with a target shape, is a classical problem in computer vision. Such
problems can be challenging because of imperfect data (noise, outliers and
partial overlap) and high degrees of freedom. Existing methods typically adopt
the $\ell_{p}$ type robust norm to measure the alignment error and regularize
the smoothness of deformation, and use a proximal algorithm to solve the
resulting non-smooth optimization problem. However, the slow convergence of
such algorithms limits their wide applications. In this paper, we propose a
formulation for robust non-rigid registration based on a globally smooth robust
norm for alignment and regularization, which can effectively handle outliers
and partial overlaps. The problem is solved using the majorization-minimization
algorithm, which reduces each iteration to a convex quadratic problem with a
closed-form solution. We further apply Anderson acceleration to speed up the
convergence of the solver, enabling the solver to run efficiently on devices
with limited compute capability. Extensive experiments demonstrate the
effectiveness of our method for non-rigid alignment between two shapes with
outliers and partial overlaps, with quantitative evaluation showing that it
outperforms state-of-the-art methods in terms of registration accuracy and
computational speed. The source code is available at
https://github.com/yaoyx689/AMM_NRR.
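As a concrete reading of the abstract: for a globally smooth robust penalty such as the Welsch function $\psi_\nu(r) = 1 - \exp(-r^2/(2\nu^2))$ (used here purely as an illustrative example, since the abstract does not name the norm), majorization-minimization replaces $\psi_\nu$ at the current residual $r_0$ with the quadratic surrogate $\psi_\nu(r_0) + \frac{\exp(-r_0^2/(2\nu^2))}{2\nu^2}(r^2 - r_0^2) \ge \psi_\nu(r)$, so each iteration reduces to a weighted least-squares problem with a closed-form solution. The MM update can then be treated as a fixed-point map and sped up with Anderson acceleration. The Python sketch below illustrates the acceleration step only; it is a generic illustration under these assumptions, not the authors' implementation (see the linked repository for that), and the map `g` is a hypothetical stand-in for one closed-form MM update.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, max_iters=200, tol=1e-8):
    """Accelerate a fixed-point iteration x <- g(x) with Anderson acceleration.

    In the registration setting, g(x) would be one MM update, i.e. the
    closed-form minimizer of the quadratic surrogate built at the current
    iterate (a hypothetical stand-in here).
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = g(x_prev)
    f_prev = g_prev - x_prev            # fixed-point residual f(x) = g(x) - x
    x = g_prev                          # start with one plain fixed-point step
    dG, dF = [], []                     # histories of differences of g(x) and f(x)
    for _ in range(max_iters):
        g_cur = g(x)
        f_cur = g_cur - x
        if np.linalg.norm(f_cur) < tol:
            return g_cur
        dG.append(g_cur - g_prev)
        dF.append(f_cur - f_prev)
        if len(dF) > m:                 # keep only the m most recent differences
            dG.pop(0)
            dF.pop(0)
        F = np.column_stack(dF)         # n x m_k matrix of residual differences
        G = np.column_stack(dG)         # n x m_k matrix of iterate differences
        # Small least-squares problem: gamma = argmin ||f_cur - F @ gamma||_2
        gamma, *_ = np.linalg.lstsq(F, f_cur, rcond=None)
        g_prev, f_prev = g_cur, f_cur
        x = g_cur - G @ gamma           # Anderson-accelerated iterate
    return x

# Toy usage: accelerate a slowly converging linear contraction g(x) = A x + b.
rng = np.random.default_rng(0)
A = 0.8 * np.eye(40) + 0.01 * rng.standard_normal((40, 40))
b = rng.standard_normal(40)
x_star = anderson_accelerate(lambda x: A @ x + b, np.zeros(40))
```

In practice, accelerated MM solvers usually safeguard this step, accepting the accelerated iterate only when it decreases the target energy and otherwise falling back to the plain MM update; that check is omitted here for brevity.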
Related papers
- SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid Registration [76.40993825836222]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration.
The proposed method greatly improves the accuracy of non-rigid registration problems and maintains relatively high solution efficiency.
arXiv Detail & Related papers (2024-05-30T15:55:04Z)
- Fast Screening Rules for Optimal Design via Quadratic Lasso Reformulation [0.135975510645475]
In this work, we derive safe screening rules that can be used to discard inessential samples.
The new tests are much faster to compute, especially for problems involving a parameter space of high dimension.
We show how an existing homotopy algorithm for computing the regularization path of the lasso can be reparametrized with respect to the squared $\ell_1$-penalty.
arXiv Detail & Related papers (2023-10-13T08:10:46Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Slowly Varying Regression under Sparsity [5.22980614912553]
We present the framework of slowly varying regression under sparsity, allowing regression models to exhibit slow and sparse variations.
We suggest a procedure that reformulates the problem as a binary convex optimization problem.
We show that the resulting model outperforms competing formulations in comparable times across various datasets.
arXiv Detail & Related papers (2021-02-22T04:51:44Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms that can align partially overlapping point sets and are invariant to the corresponding transformations, building on the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate a high computational gain compared to the full problem solver, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z)
- Quasi-Newton Solver for Robust Non-Rigid Registration [35.66014845211251]
We propose a formulation for robust non-rigid registration based on a globally smooth robust estimator for data fitting and regularization.
We apply the majorization-minimization algorithm to the problem, which reduces each iteration to solving a simple least-squares problem with L-BFGS.
arXiv Detail & Related papers (2020-04-09T01:45:05Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.