Adaptive Reordering Sampler with Neurally Guided MAGSAC
- URL: http://arxiv.org/abs/2111.14093v3
- Date: Fri, 8 Sep 2023 16:23:49 GMT
- Title: Adaptive Reordering Sampler with Neurally Guided MAGSAC
- Authors: Tong Wei, Jiri Matas, Daniel Barath
- Abstract summary: We propose a new sampler for robust estimators that always selects the sample with the highest probability of consisting only of inliers.
After every unsuccessful iteration, the inlier probabilities are updated in a principled way via a Bayesian approach.
We introduce a new loss that exploits, in a geometrically justifiable manner, the orientation and scale that can be estimated for any type of feature.
- Score: 63.139445467355934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new sampler for robust estimators that always selects the sample
with the highest probability of consisting only of inliers. After every
unsuccessful iteration, the inlier probabilities are updated in a principled
way via a Bayesian approach. The probabilities obtained by the deep network are
used as prior (so-called neural guidance) inside the sampler. Moreover, we
introduce a new loss that exploits, in a geometrically justifiable manner, the
orientation and scale that can be estimated for any type of feature, e.g., SIFT
or SuperPoint, to estimate two-view geometry. The new loss helps to learn
higher-order information about the underlying scene geometry. Benefiting from
the new sampler and the proposed loss, we combine the neural guidance with the
state-of-the-art MAGSAC++. Adaptive Reordering Sampler with Neurally Guided
MAGSAC (ARS-MAGSAC) is superior to the state-of-the-art in terms of accuracy
and run-time on the PhotoTourism and KITTI datasets for essential and
fundamental matrix estimation. The code and trained models are available at
https://github.com/weitong8591/ars_magsac.
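To make the sampling loop concrete, here is a minimal sketch, not the authors' implementation (the linked repository contains that). It assumes per-point inlier probabilities coming from a network prior, a binary success test, and hand-picked failure likelihoods; in ARS-MAGSAC these pieces are tied to the MAGSAC++ quality function and the learned priors.

```python
import numpy as np

def ars_sample(prior, minimal_size, fit_model, is_good, max_iters=1000,
               p_fail_inlier=0.3, p_fail_outlier=0.9):
    """Adaptive reordering sampler (illustrative sketch).

    prior        -- per-point inlier probabilities, e.g. from a neural network
    minimal_size -- points in a minimal sample (7 for F, 5 for E estimation)
    fit_model    -- callable: point indices -> model hypothesis (or None)
    is_good      -- callable: model -> True if the hypothesis is accepted
    p_fail_*     -- assumed likelihoods of an unsuccessful iteration given that
                    a sampled point is an inlier / outlier
    """
    p = np.asarray(prior, dtype=float).copy()
    for _ in range(max_iters):
        # The joint inlier probability of a sample is the product of its
        # per-point probabilities, so the top-k points form the best sample.
        sample = np.argsort(-p)[:minimal_size]
        model = fit_model(sample)
        if model is not None and is_good(model):
            return model, sample
        # Unsuccessful iteration: Bayesian update of the sampled points only.
        prior_s = p[sample]
        p[sample] = (p_fail_inlier * prior_s) / (
            p_fail_inlier * prior_s + p_fail_outlier * (1.0 - prior_s))
    return None, None
```

Because the probabilities of the points in a failed sample are lowered, the ranking, and therefore the next selected sample, adapts after every unsuccessful iteration.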
Related papers
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
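The attention-over-residuals idea in Consensus-Adaptive RANSAC can be illustrated with a toy, self-contained sketch; all projections, dimensions, and the scoring head below are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
num_points, num_models, d = 200, 8, 16

# A batch of point-to-model residuals (one column per sampled hypothesis).
residuals = np.abs(rng.normal(size=(num_points, num_models)))

# Hypothetical learned projections (random here) embedding residuals and
# producing queries/keys/values for a single attention step.
W_embed = rng.normal(size=(num_models, d)) * 0.1
W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

state = residuals @ W_embed                    # per-point estimation state
q, k, v = state @ W_q, state @ W_k, state @ W_v
attn = softmax(q @ k.T / np.sqrt(d), axis=-1)  # points attend to each other,
state = state + attn @ v                       # sharing consensus information

# A (hypothetical) linear head turning the state into per-point scores that
# could steer the sampling in later iterations.
w_head = rng.normal(size=d) * 0.1
scores = 1.0 / (1.0 + np.exp(-(state @ w_head)))
```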
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
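The variance-reduction idea of perturbing activations instead of weights can be shown on a single linear layer with a squared-error loss; everything below (dimensions, loss, step size) is a placeholder chosen for the sketch. The directional derivative of the loss along a random activation perturbation gives an unbiased estimate of the activation gradient, which reaches the weights through the local, known map z = W x, so no end-to-end backward pass is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 32, 8
W = rng.normal(size=(d_out, d_in)) * 0.1
x = rng.normal(size=d_in)
target = rng.normal(size=d_out)

z = W @ x                              # activations of the layer
u = rng.normal(size=d_out)             # random perturbation of z, not of W

# Directional derivative of L(z) = 0.5 * ||z - target||^2 along u.
# Here it is available in closed form; in general a single forward-mode
# (JVP) pass provides it.
dL_du = (z - target) @ u

grad_z_est = dL_du * u                 # unbiased estimate of dL/dz
grad_W_est = np.outer(grad_z_est, x)   # local chain rule through z = W x
true_grad_W = np.outer(z - target, x)  # exact gradient, for comparison

W -= 0.01 * grad_W_est                 # SGD step with the estimate
```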
- NeFSAC: Neurally Filtered Minimal Samples [90.55214606751453]
NeFSAC is an efficient algorithm for neural filtering of motion-inconsistent and poorly-conditioned minimal samples.
NeFSAC can be plugged into any existing RANSAC-based pipeline.
We tested NeFSAC on more than 100k image pairs from three publicly available real-world datasets.
arXiv Detail & Related papers (2022-07-16T08:02:05Z)
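Plugging a sample filter into a RANSAC-style loop amounts to scoring each minimal sample before the model fit and skipping low-scoring ones. A hedged sketch follows; `score_sample` stands in for a learned filter such as NeFSAC, and the threshold and loop structure are assumptions, not the published pipeline.

```python
import numpy as np

def filtered_ransac(points, minimal_size, score_sample, fit_model,
                    count_inliers, iters=1000, keep_threshold=0.5, rng=None):
    """RANSAC loop with a pre-filter on minimal samples (illustrative sketch).

    score_sample -- callable mapping the sampled correspondences to a value
                    in [0, 1]; a learned filter would go here.
    """
    rng = rng or np.random.default_rng()
    best_model, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(points), size=minimal_size, replace=False)
        # Reject motion-inconsistent / poorly-conditioned samples cheaply,
        # before paying for model estimation and scoring.
        if score_sample(points[idx]) < keep_threshold:
            continue
        model = fit_model(points[idx])
        if model is None:
            continue
        inliers = count_inliers(model, points)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```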
- Exponentially Tilted Gaussian Prior for Variational Autoencoder [3.52359746858894]
Recent studies show that probabilistic generative models can perform poorly at out-of-distribution detection.
We propose the exponentially tilted Gaussian prior distribution for the Variational Autoencoder (VAE).
We show that our model produces high-quality image samples that are crisper than those of a standard Gaussian VAE.
arXiv Detail & Related papers (2021-11-30T18:28:19Z)
- Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit [47.324627920761685]
We use recent theoretical advances that characterize the function-space prior of an ensemble of infinitely-wide NNs as a Gaussian process.
This gives us a better understanding of the implicit prior NNs place on function space.
We also examine the calibration of previous approaches to classification with the NNGP.
arXiv Detail & Related papers (2020-10-14T18:41:54Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Learning Rates as a Function of Batch Size: A Random Matrix Theory Approach to Neural Network Training [2.9649783577150837]
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory.
We derive analytical expressions for the maximal descent and adaptive training regimens for smooth, non-convex deep neural networks.
We validate our claims using VGG/ResNet architectures on the ImageNet dataset.
arXiv Detail & Related papers (2020-06-16T11:55:45Z)
- Choosing the Sample with Lowest Loss makes SGD Robust [19.08973384659313]
We propose a simple variant of the stochastic gradient descent (SGD) method: in each step, first choose a set of k samples, then from these choose the one with the smallest current loss, and perform an SGD-like update with that sample.
Vanilla SGD corresponds to k = 1; larger k yields a new algorithm that effectively minimizes a non-convex surrogate loss.
Our theoretical analysis of this idea for ML problems is backed up with small-scale neural network experiments.
arXiv Detail & Related papers (2020-01-10T05:39:17Z)
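The selection rule of the last entry is easy to sketch: draw k candidate samples, keep the one with the smallest current loss, and take an ordinary SGD step on it, which suppresses the influence of outliers with large losses. The linear-regression loss, k, and learning rate below are illustrative choices only.

```python
import numpy as np

def min_k_sgd_step(w, X, y, k, lr, rng):
    """One SGD step using only the lowest-loss sample among k randomly drawn
    candidates (illustrative sketch with a linear-regression loss)."""
    idx = rng.choice(len(X), size=k, replace=False)
    losses = 0.5 * (X[idx] @ w - y[idx]) ** 2   # current per-sample losses
    i = idx[np.argmin(losses)]                  # the least-suspicious sample
    grad = (X[i] @ w - y[i]) * X[i]             # gradient of that single loss
    return w - lr * grad

# Toy usage: linear regression with a few grossly corrupted labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=500)
y[:25] += 20.0                                  # outliers

w = np.zeros(5)
for _ in range(5000):
    w = min_k_sgd_step(w, X, y, k=4, lr=0.05, rng=rng)
```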
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.