Revisiting Rotation Averaging: Uncertainties and Robust Losses
- URL: http://arxiv.org/abs/2303.05195v1
- Date: Thu, 9 Mar 2023 11:51:20 GMT
- Title: Revisiting Rotation Averaging: Uncertainties and Robust Losses
- Authors: Ganlin Zhang, Viktor Larsson, Daniel Barath
- Abstract summary: We argue that the main problem of current methods is the minimized cost function, which is only weakly connected with the input data via the estimated epipolar geometries.
We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we revisit the rotation averaging problem applied in global
Structure-from-Motion pipelines. We argue that the main problem of current
methods is the minimized cost function that is only weakly connected with the
input data via the estimated epipolar geometries. We propose to better model the
underlying noise distributions by directly propagating the uncertainty from the
point correspondences into the rotation averaging. Such uncertainties are
obtained for free by considering the Jacobians of two-view refinements.
Moreover, we explore integrating a variant of the MAGSAC loss into the rotation
averaging problem, instead of using classical robust losses employed in current
frameworks. The proposed method leads to results superior to baselines, in
terms of accuracy, on large-scale public benchmarks. The code is public.
https://github.com/zhangganlin/GlobalSfMpy
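To make the idea concrete, here is a minimal sketch of a covariance-weighted relative-rotation residual, the kind of term an uncertainty-aware rotation averaging cost would sum over edges. The function name and residual convention are illustrative, not taken from the paper's code; SciPy's `Rotation` class supplies the Log map via `as_rotvec`.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def weighted_residual(R_i, R_j, R_ij, cov):
    """Covariance-weighted rotation-averaging residual for one graph edge.

    r = Log(R_ij * R_i * R_j^{-1}) is the angle-axis discrepancy between the
    measured relative rotation R_ij and the one implied by the absolute
    rotations R_i, R_j; the returned cost is the Mahalanobis norm
    r^T cov^{-1} r.
    """
    r = (R_ij * R_i * R_j.inv()).as_rotvec()
    return float(r @ np.linalg.solve(cov, r))
```

In the paper's formulation the 3x3 covariance is propagated from the point correspondences via the Jacobians of the two-view refinement; here `cov` is simply taken as given, and a MAGSAC-style robust loss would wrap this quadratic cost rather than a classical robust kernel.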
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Geometric Median (GM) Matching for Robust Data Pruning [29.458270105150564]
Data pruning is crucial for mitigating the enormous computational costs associated with training data-hungry models at scale.
In this work, we propose Geometric Median (GM) Matching, which yields a $k$-subset such that the mean of the subset approximates the geometric median of the (potentially) noisy dataset.
Experiments across popular deep learning benchmarks indicate that GM Matching consistently outperforms the prior state of the art.
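The geometric median at the core of this entry is classically computed with Weiszfeld iterations; the sketch below (a standard textbook version, not the paper's matching algorithm) shows why it is robust: distant outliers get down-weighted by their inverse distance.

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-9):
    """Weiszfeld iterations for the geometric median of the rows of X.

    Each step re-weights every point by the inverse of its distance to the
    current estimate, so far-away outliers contribute little.
    """
    y = X.mean(axis=0)  # initialize at the (non-robust) mean
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - y, axis=1), eps)  # avoid /0
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            return y_new
        y = y_new
    return y
```

On a unit-square cluster plus a single gross outlier, the geometric median stays near the cluster while the mean is dragged far away.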
arXiv Detail & Related papers (2024-06-25T00:02:01Z)
- RAGO: Recurrent Graph Optimizer For Multiple Rotation Averaging [62.315673415889314]
This paper proposes a deep recurrent Rotation Averaging Graph Optimizer (RAGO) for Multiple Rotation Averaging (MRA).
Our framework is a real-time learning-to-optimize rotation averaging graph optimizer, small enough to be deployed in real-world applications.
arXiv Detail & Related papers (2022-12-14T13:19:40Z)
- Learning to Estimate Hidden Motions with Global Motion Aggregation [71.12650817490318]
Occlusions pose a significant challenge to optical flow algorithms that rely on local evidence.
We introduce a global motion aggregation module to find long-range dependencies between pixels in the first image.
We demonstrate that the optical flow estimates in the occluded regions can be significantly improved without damaging the performance in non-occluded regions.
arXiv Detail & Related papers (2021-04-06T10:32:03Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information-theoretic minimax lower bounds for this problem in terms of the mixing time $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay, a popular reinforcement learning technique, that achieves a significantly better error rate.
arXiv Detail & Related papers (2020-06-16T04:26:50Z)
- Sparse recovery by reduced variance stochastic approximation [5.672132510411465]
We discuss the application of iterative quadratic optimization routines to the problem of sparse signal recovery from noisy observations.
We show how one can straightforwardly enhance reliability of the corresponding solution by using Median-of-Means like techniques.
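The Median-of-Means technique mentioned above is simple enough to sketch: split the sample into blocks, average each block, and report the median of the block means, so a few gross outliers can corrupt at most a few blocks. This is a generic textbook version, not the paper's estimator.

```python
import numpy as np

def median_of_means(x, k):
    """Median-of-Means estimate of the mean of x using k blocks.

    An outlier can only contaminate the block it lands in, and the final
    median discards that block's mean as long as most blocks are clean.
    (In practice one shuffles x first so outliers are not clustered.)
    """
    blocks = np.array_split(np.asarray(x, dtype=float), k)
    return float(np.median([b.mean() for b in blocks]))
```

With nine clean samples equal to 1 and one gross outlier of 1000, the plain mean is 100.9, while Median-of-Means with five blocks still returns 1.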
arXiv Detail & Related papers (2020-06-11T12:31:20Z)
- Quasi-Newton Solver for Robust Non-Rigid Registration [35.66014845211251]
We propose a formulation for robust non-rigid registration based on a globally smooth robust estimator for data fitting and regularization.
We apply the majorization-minimization algorithm to the problem, which reduces each iteration to solving a simple least-squares problem with L-BFGS.
arXiv Detail & Related papers (2020-04-09T01:45:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.