Robust affine point matching via quadratic assignment on Grassmannians
- URL: http://arxiv.org/abs/2303.02698v5
- Date: Mon, 07 Oct 2024 22:02:12 GMT
- Title: Robust affine point matching via quadratic assignment on Grassmannians
- Authors: Alexander Kolpakov, Michael Werman
- Abstract summary: Robust Affine Matching with Grassmannians (RoAM) is a new algorithm to perform affine registration of point clouds.
The algorithm is based on minimizing the Frobenius distance between two elements of the Grassmannian.
- Score: 50.366876079978056
- Abstract: Robust Affine Matching with Grassmannians (RoAM) is a new algorithm to perform affine registration of point clouds. The algorithm is based on minimizing the Frobenius distance between two elements of the Grassmannian. For this purpose, an indefinite relaxation of the Quadratic Assignment Problem (QAP) is used, and several approaches to affine feature matching are studied and compared. Experiments demonstrate that RoAM is more robust to noise and point discrepancy than previous methods.
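As a rough illustration of the objective, two subspaces (points on a Grassmannian) can be encoded as orthogonal projection matrices, and the Frobenius distance between them computed directly. The sketch below illustrates only this distance; the QAP relaxation and the feature-matching pipeline of RoAM are not reproduced here, and using raw point coordinates as subspace generators is an assumption for illustration.

```python
import numpy as np

def grassmann_frobenius_distance(X, Y):
    """Frobenius distance between the subspaces spanned by the columns
    of X and Y, each Grassmannian point encoded as a projection matrix."""
    Qx, _ = np.linalg.qr(X)           # orthonormal basis of span(X)
    Qy, _ = np.linalg.qr(Y)           # orthonormal basis of span(Y)
    Px, Py = Qx @ Qx.T, Qy @ Qy.T     # orthogonal projection matrices
    return np.linalg.norm(Px - Py, ord="fro")

# Toy usage: two noisy copies of the same 2D subspace of R^5.
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 2))
print(grassmann_frobenius_distance(B, B + 0.01 * rng.normal(size=(5, 2))))
```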
Related papers
- Taming Score-Based Diffusion Priors for Infinite-Dimensional Nonlinear Inverse Problems [4.42498215122234]
This work introduces a sampling method capable of solving Bayesian inverse problems in function space.
It does not assume the log-concavity of the likelihood, meaning that it is compatible with nonlinear inverse problems.
A novel convergence analysis is conducted, inspired by the fixed-point methods established for traditional regularization-by-denoising algorithms.
arXiv Detail & Related papers (2024-05-24T16:17:01Z)
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order gradient methods for stochastic optimization and variational inequalities.
Our approach covers both non-convex and strongly convex minimization problems.
We provide lower bounds that match the oracle complexity of the method in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- Sharp Variance-Dependent Bounds in Reinforcement Learning: Best of Both Worlds in Stochastic and Deterministic Environments [48.96971760679639]
We study variance-dependent regret bounds for Markov decision processes (MDPs).
We propose two new environment norms to characterize the fine-grained variance properties of the environment.
For model-based methods, we design a variant of the MVP algorithm.
In particular, this bound is simultaneously minimax optimal for both stochastic and deterministic MDPs.
arXiv Detail & Related papers (2023-01-31T06:54:06Z)
- Overlap-guided Gaussian Mixture Models for Point Cloud Registration [61.250516170418784]
Probabilistic 3D point cloud registration methods have shown competitive performance in overcoming noise, outliers, and density variations.
This paper proposes a novel overlap-guided probabilistic registration approach that computes the optimal transformation from matched Gaussian Mixture Model (GMM) parameters.
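One plausible reading of "the optimal transformation from matched GMM parameters" is a weighted Procrustes (Kabsch) solve on matched component means, with the mixture weights as the weighting. The sketch below illustrates that generic step under those assumptions, not the paper's exact estimator.

```python
import numpy as np

def weighted_kabsch(src, dst, w):
    """Closed-form rigid transform (R, t) minimizing
    sum_i w[i] * ||R @ src[i] + t - dst[i]||^2 for 3D points
    (here: matched GMM component means with mixture weights w)."""
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst             # weighted centroids
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```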
arXiv Detail & Related papers (2022-10-17T08:02:33Z)
- Reinforcement Learning with Unbiased Policy Evaluation and Linear Function Approximation [11.345796608258434]
We provide performance guarantees for a variant of simulation-based policy iteration for controlling Markov decision processes.
We analyze two algorithms: the first involves a least squares approach where a new set of weights associated with feature vectors is obtained via least squares minimization at each iteration.
The second involves a two-time-scale stochastic approximation algorithm taking several steps of gradient descent towards the least squares solution.
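To make the contrast concrete, here is a hedged toy sketch of the two update styles just described: a direct least squares solve versus a few gradient steps toward it. Function names, step size, and iteration count are illustrative assumptions, not the paper's.

```python
import numpy as np

def ls_weights(Phi, y):
    """Direct least squares solve (flavor of the first algorithm)."""
    return np.linalg.lstsq(Phi, y, rcond=None)[0]

def gradient_steps_toward_ls(Phi, y, w, lr=1e-3, k=10):
    """A few gradient steps on 0.5*||Phi @ w - y||^2 (flavor of the
    second algorithm's inner, faster-time-scale update)."""
    for _ in range(k):
        w = w - lr * Phi.T @ (Phi @ w - y)
    return w
```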
arXiv Detail & Related papers (2022-10-13T20:16:19Z)
- Fuzzy Clustering by Hyperbolic Smoothing [0.0]
We propose a novel method for building fuzzy clusters of large data sets, using a smoothing numerical approach.
The smoothing converts a strongly non-differentiable problem into a sequence of low-dimensional, unconstrained differentiable optimization subproblems.
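Hyperbolic smoothing generically replaces a non-smooth piece such as max(0, y) by the everywhere-differentiable surrogate (y + sqrt(y^2 + tau^2))/2, which recovers max(0, y) as tau -> 0. A minimal sketch of the device follows; the paper's full clustering formulation is not reproduced here.

```python
import numpy as np

def hyperbolic_max0(y, tau):
    """Smooth surrogate for max(0, y); exact in the limit tau -> 0."""
    return 0.5 * (y + np.sqrt(y * y + tau * tau))

def smooth_min(a, b, tau):
    """min(a, b) = b - max(0, b - a), with max(0, .) smoothed,
    e.g. a smoothed minimum over cluster distances."""
    return b - hyperbolic_max0(b - a, tau)

print(smooth_min(2.0, 3.0, tau=1e-6))  # ~2.0, but differentiable in a and b
```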
arXiv Detail & Related papers (2022-07-09T12:40:46Z)
- Accelerated SGD for Non-Strongly-Convex Least Squares [14.010916616909743]
We consider stochastic approximation for the least squares regression problem in the non-strongly convex setting.
We present the first practical algorithm that achieves the optimal prediction error rates in terms of dependence on the noise of the problem.
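For orientation only, here is a generic momentum-accelerated SGD loop on a synthetic least squares problem. This sketches the problem setting; the step size, momentum, and row-sampling scheme are assumptions rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=n)   # noisy linear observations

x = np.zeros(d)
v = np.zeros(d)
lr, momentum = 0.01, 0.9
for _ in range(20000):
    i = rng.integers(n)                 # sample one row per step
    y = x + momentum * v                # look-ahead iterate
    g = (A[i] @ y - b[i]) * A[i]        # stochastic gradient of 0.5*(a_i.x - b_i)^2
    v = momentum * v - lr * g
    x = x + v
print(np.linalg.norm(x - x_star))       # small residual up to the noise level
```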
arXiv Detail & Related papers (2022-03-03T14:39:33Z)
- Stochastic Projective Splitting: Solving Saddle-Point Problems with Multiple Regularizers [4.568911586155097]
We present a new, stochastic variant of the projective splitting (PS) family of algorithms for monotone inclusion problems.
It can solve min-max and noncooperative game formulations arising in applications such as robust ML without the convergence issues associated with gradient descent-ascent.
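The convergence issue with gradient descent-ascent alluded to above is already visible on the simplest bilinear saddle point, min_x max_y xy. The minimal demonstration below shows the iterates spiraling outward; it is not the projective splitting algorithm itself.

```python
# On min_x max_y x*y, plain gradient descent-ascent diverges:
# the squared norm grows by a factor (1 + lr^2) each step.
x, y, lr = 1.0, 1.0, 0.1
for _ in range(100):
    gx, gy = y, x                      # grad_x(x*y) = y, grad_y(x*y) = x
    x, y = x - lr * gx, y + lr * gy    # descent in x, ascent in y
print(x * x + y * y)                   # grows far beyond the initial 2.0
```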
arXiv Detail & Related papers (2021-06-24T14:48:43Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds on the complexity of this problem and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Stochastic Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
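A single mirror descent step with the negative-entropy mirror map (the classic multiplicative-weights update) illustrates the primitive this summary refers to; the probability-simplex domain is an assumption for illustration, not necessarily the paper's setting.

```python
import numpy as np

def entropy_mirror_descent_step(p, grad, step):
    """One mirror descent step on the probability simplex with the
    negative-entropy mirror map: a multiplicative-weights update
    followed by renormalization."""
    w = p * np.exp(-step * grad)
    return w / w.sum()

# Example: a uniform distribution moves away from coordinates
# with larger (sub)gradient values.
p = np.full(4, 0.25)
g = np.array([1.0, 0.0, 0.5, 2.0])
print(entropy_mirror_descent_step(p, g, step=0.5))
```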
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.