Less Biased Noise Scale Estimation for Threshold-Robust RANSAC
- URL: http://arxiv.org/abs/2503.13433v2
- Date: Mon, 07 Apr 2025 07:15:46 GMT
- Title: Less Biased Noise Scale Estimation for Threshold-Robust RANSAC
- Authors: Johan Edstedt
- Abstract summary: We revisit the noise scale estimation method SIMFIT and find bias in its estimate of the noise scale. We propose a multi-pair extension of SIMFIT++, which filters estimates across pairs and improves results.
- Score: 0.9065034043031668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The gold-standard for robustly estimating relative pose through image matching is RANSAC. While RANSAC is powerful, it requires setting the inlier threshold that determines whether the error of a correspondence under an estimated model is sufficiently small to be included in its consensus set. This threshold is typically set by hand and is difficult to tune without access to ground truth data. Thus, a method capable of automatically determining the optimal threshold would be desirable. In this paper we revisit inlier noise scale estimation, which is an attractive approach as the inlier noise scale is linearly related to the optimal threshold. We revisit the noise scale estimation method SIMFIT and find bias in its estimate of the noise scale. In particular, we fix underestimation caused by using the same data for fitting the model as for estimating the inlier noise, and by not taking the threshold itself into account. Secondly, since the optimal threshold within a scene is approximately constant, we propose a multi-pair extension of SIMFIT++ that filters estimates across pairs, which improves results. Our approach yields robust performance across a range of thresholds, as shown in Figure 1. Code is available at https://github.com/Parskatt/simfitpp
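Since the paper's central premise is that the optimal RANSAC threshold scales linearly with the inlier noise scale, the following minimal Python sketch may help make that relationship concrete. It is an illustration under the assumption of one-dimensional zero-mean Gaussian inlier errors, not the released SIMFIT++ implementation: the noise scale is estimated from threshold-truncated residuals with a fixed-point correction for the truncation (addressing the second bias noted in the abstract), and per-pair estimates are median-filtered across a scene (the multi-pair idea). All function names and constants (the quantile q, the initial threshold, the multiplier) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def estimate_inlier_scale(residuals, threshold, q=0.5, iters=20):
    """Estimate the inlier noise scale sigma from residuals truncated at
    `threshold`, assuming zero-mean Gaussian inlier errors.

    The q-th quantile of |N(0, sigma)| truncated at `threshold` is
    sigma * Phi^{-1}((1 + q * p_in) / 2), where p_in = 2 * Phi(threshold / sigma) - 1.
    Dividing the empirical quantile by Phi^{-1}((1 + q) / 2) alone therefore
    underestimates sigma; a few fixed-point steps account for the truncation.
    Ideally `residuals` come from correspondences not used to fit the model,
    to avoid the other source of underestimation noted in the abstract.
    """
    r = np.abs(np.asarray(residuals, dtype=float))
    r = r[r < threshold]
    if r.size < 10:
        return None  # too few inliers for a meaningful estimate
    r_q = np.quantile(r, q)
    if r_q <= 0:
        return None
    sigma = r_q / norm.ppf((1.0 + q) / 2.0)  # naive, truncation-blind estimate
    for _ in range(iters):  # fixed-point correction for truncation at `threshold`
        p_in = 2.0 * norm.cdf(threshold / sigma) - 1.0
        sigma = min(r_q / norm.ppf((1.0 + q * p_in) / 2.0), threshold)
    return sigma

def scene_threshold(per_pair_residuals, init_threshold=5.0, multiplier=3.0):
    """Median-filter per-pair scale estimates into one scene-level threshold.

    Mirrors the multi-pair idea: the optimal threshold within a scene is
    roughly constant, so robustly aggregating per-pair estimates suppresses
    outlier estimates. `init_threshold` and `multiplier` are placeholder values.
    """
    scales = [estimate_inlier_scale(res, init_threshold) for res in per_pair_residuals]
    scales = [s for s in scales if s is not None]
    if not scales:
        return init_threshold  # fall back to the initial guess
    return multiplier * float(np.median(scales))
```

In practice one would alternate: run RANSAC with the current threshold, recompute residuals under the best model, re-estimate the scale, and update the threshold, rather than relying on a single pass.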
Related papers
- ARMAX identification of low rank graphical models [0.6906005491572401]
In large-scale systems, complex internal relationships are often present. Such interconnected systems can be effectively described by low rank processes. Existing low rank identification approaches often do not take noise into explicit consideration, leading to non-negligible inaccuracies even under weak noise.
arXiv Detail & Related papers (2025-01-16T15:43:32Z) - Theoretical Analysis of Explicit Averaging and Novel Sign Averaging in Comparison-Based Search [6.883986852278248]
In black-box optimization, noise in the objective function is inevitable.
Explicit averaging is widely used as a simple and versatile noise-handling technique.
Alternatively, sign averaging is proposed as a simple but robust noise-handling technique.
arXiv Detail & Related papers (2024-01-25T08:35:50Z) - Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z) - Label Noise: Correcting the Forward-Correction [0.0]
Training neural network classifiers on datasets with label noise poses a risk of overfitting them to the noisy labels.
To mitigate this overfitting, we propose imposing a lower bound on the training loss.
arXiv Detail & Related papers (2023-07-24T19:41:19Z) - Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples used in previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z) - Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - A Robust Optimization Method for Label Noisy Datasets Based on Adaptive Threshold: Adaptive-k [0.0]
SGD does not produce robust results on datasets with label noise.
In this paper, we recommend using samples with loss less than a threshold value determined during the optimization process, instead of using all samples in the mini-batch.
Our proposed method, Adaptive-k, aims to exclude label noise samples from the optimization process and make the process robust.
arXiv Detail & Related papers (2022-03-26T21:48:12Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Estimating Rank-One Spikes from Heavy-Tailed Noise via Self-Avoiding Walks [13.879536370173506]
We study symmetric spiked matrix models with respect to a general class of noise distributions.
We exhibit an estimator that works for heavy-tailed noise up to the BBP threshold, which is optimal even for Gaussian noise.
Our estimator can be evaluated in polynomial time by counting self-avoiding walks via a color-coding technique.
arXiv Detail & Related papers (2020-08-31T16:57:20Z)