Generalized Differentiable RANSAC
- URL: http://arxiv.org/abs/2212.13185v3
- Date: Fri, 8 Sep 2023 15:35:47 GMT
- Title: Generalized Differentiable RANSAC
- Authors: Tong Wei, Yash Patel, Alexander Shekhovtsov, Jiri Matas, Daniel Barath
- Abstract summary: $\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
- Score: 95.95627475224231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose $\nabla$-RANSAC, a generalized differentiable RANSAC that allows
learning the entire randomized robust estimation pipeline. The proposed
approach enables the use of relaxation techniques for estimating the gradients
in the sampling distribution, which are then propagated through a
differentiable solver. The trainable quality function marginalizes over the
scores from all the models estimated within $\nabla$-RANSAC to guide the
network in learning accurate and useful inlier probabilities or to train feature
detection and matching networks. Our method directly maximizes the probability
of drawing a good hypothesis, allowing us to learn better sampling
distributions. We test $\nabla$-RANSAC on various real-world scenarios on
fundamental and essential matrix estimation, and 3D point cloud registration,
outdoors and indoors, with handcrafted and learning-based features. It is
superior to the state-of-the-art in terms of accuracy while running at a
similar speed to its less accurate alternatives. The code and trained models
are available at https://github.com/weitong8591/differentiable_ransac.
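To make the pipeline concrete, here is a minimal sketch of the general idea on a toy 2D line-fitting problem, assuming a Gumbel top-k relaxation for minimal-sample selection, a weighted least-squares fit as the differentiable solver, and a sigmoid-based soft inlier count as the quality function. All of these choices and hyper-parameters are illustrative assumptions, not the actual $\nabla$-RANSAC implementation from the repository above.

```python
# Toy differentiable-RANSAC-style sketch (assumed setup), not the official repository code.
import torch

def gumbel_topk_straight_through(logits, k, tau=1.0):
    """Relaxed top-k sample: hard 0/1 mask in the forward pass, soft gradients backward."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-9) + 1e-9)
    soft = torch.softmax((logits + gumbel) / tau, dim=0)
    hard = torch.zeros_like(soft)
    hard[torch.topk(logits + gumbel, k).indices] = 1.0
    return hard + soft - soft.detach()            # straight-through estimator

def fit_line_weighted(points, weights):
    """Differentiable weighted least squares for y = a*x + b (regularized normal equations)."""
    X = torch.stack([points[:, 0], torch.ones_like(points[:, 0])], dim=1)
    Xw = X * weights.unsqueeze(1)
    yw = points[:, 1:2] * weights.unsqueeze(1)
    A = Xw.t() @ Xw + 1e-6 * torch.eye(2)
    return torch.linalg.solve(A, Xw.t() @ yw).squeeze(1)   # (a, b)

def soft_inlier_count(points, model, threshold=0.1):
    """Smooth quality function: sigmoid of (threshold - residual), summed over all points."""
    residuals = (points[:, 1] - (model[0] * points[:, 0] + model[1])).abs()
    return torch.sigmoid((threshold - residuals) / threshold).sum()

# Toy data: 80 points on a line plus 40 gross outliers.
torch.manual_seed(0)
x = torch.rand(120) * 4
y = 0.7 * x + 0.2
y[80:] += torch.randn(40) * 3.0
points = torch.stack([x, y], dim=1)

logits = torch.zeros(120, requires_grad=True)     # learned per-point sampling distribution
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    opt.zero_grad()
    scores = []
    for _ in range(8):                             # several hypotheses per iteration
        mask = gumbel_topk_straight_through(logits, k=2)
        model = fit_line_weighted(points, mask)
        scores.append(soft_inlier_count(points, model))
    loss = -torch.stack(scores).mean()             # maximize the expected model quality
    loss.backward()
    opt.step()
```

Because the hard sample mask uses a straight-through estimator, the forward pass behaves like classic RANSAC sampling while gradients flow through the relaxed weights back to the per-point logits.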
Related papers
- Revisiting Score Function Estimators for $k$-Subset Sampling [5.464421236280698]
We show how to efficiently compute the $k$-subset distribution's score function using a discrete Fourier transform.
The resulting estimator provides both exact samples and unbiased gradient estimates.
Experiments in feature selection show results competitive with current methods, despite weaker assumptions.
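Since $\nabla$-RANSAC also learns a sampling distribution, a reference point may help: the sketch below is a plain score-function (REINFORCE) gradient estimate for the $k$-subset distribution. It computes the normalizer with the standard elementary-symmetric-polynomial recursion rather than the discrete Fourier transform proposed in the paper, uses no baseline for variance reduction, and the toy utility and all names are assumptions.

```python
# Generic score-function (REINFORCE) sketch for k-subset sampling (assumed setup);
# the paper's DFT-based score-function computation is not reproduced here.
import torch

def elementary_symmetric(w, k):
    """Returns [e_0(w), ..., e_k(w)] via the DP update E[j] <- E[j] + w_i * E[j-1]."""
    E = [torch.ones(())] + [torch.zeros(()) for _ in range(k)]
    for wi in w:
        for j in range(k, 0, -1):
            E[j] = E[j] + wi * E[j - 1]
    return E

def sample_k_subset(w, k):
    """Exact sequential sampling from p(S) proportional to prod_{i in S} w_i with |S| = k."""
    n = len(w)
    suffix = [[0.0] * (k + 1) for _ in range(n + 1)]   # suffix[i][j] = e_j(w[i:])
    suffix[n][0] = 1.0
    for i in range(n - 1, -1, -1):
        suffix[i][0] = 1.0
        for j in range(1, k + 1):
            suffix[i][j] = suffix[i + 1][j] + float(w[i]) * suffix[i + 1][j - 1]
    subset, r = [], k
    for i in range(n):
        if r == 0:
            break
        if float(torch.rand(())) < float(w[i]) * suffix[i + 1][r - 1] / suffix[i][r]:
            subset.append(i)
            r -= 1
    return subset

torch.manual_seed(0)
n, k = 10, 3
theta = torch.randn(n, requires_grad=True)       # logits; item weights are w = exp(theta)
utility = torch.randn(n)                         # assumed black-box per-item utility

w = theta.exp()
E_k = elementary_symmetric(w, k)[k]
grads = []
for _ in range(16):
    S = sample_k_subset(w.detach(), k)
    log_p = theta[S].sum() - torch.log(E_k)      # log p(S) = sum_{i in S} theta_i - log e_k(w)
    f = utility[S].sum()                         # objective value for this subset
    grads.append(f * torch.autograd.grad(log_p, theta, retain_graph=True)[0])
score_grad = torch.stack(grads).mean(0)          # unbiased estimate of grad E_S[f(S)]
print(score_grad)
```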
arXiv Detail & Related papers (2024-07-22T21:26:39Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Provably Stable Feature Rankings with SHAP and LIME [3.8642937395065124]
We devise attribution methods that ensure the most important features are ranked correctly with high probability.
We introduce efficient sampling algorithms for SHAP and LIME that guarantee the $K$ highest-ranked features have the proper ordering.
arXiv Detail & Related papers (2024-01-28T23:14:51Z)
- Fast, Distribution-free Predictive Inference for Neural Networks with Coverage Guarantees [25.798057062452443]
This paper introduces a novel, computationally-efficient algorithm for predictive inference (PI)
It requires no distributional assumptions on the data and can be computed faster than existing bootstrap-type methods for neural networks.
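For orientation only, the classical route to distribution-free coverage around any point predictor is split conformal prediction; the sketch below shows that textbook baseline, not the paper's faster algorithm, with the predictor, data, and miscoverage level all assumed.

```python
# Split conformal baseline (not the paper's method): distribution-free 1-alpha coverage
# under exchangeability, wrapped around an arbitrary point predictor.
import numpy as np

rng = np.random.default_rng(0)

def predict(x):                        # stand-in for a trained neural network
    return 2.0 * x + 1.0

# Assumed calibration split (the proper training data is omitted here).
x_cal = rng.uniform(0, 5, 500)
y_cal = 2.0 * x_cal + 1.0 + rng.normal(0, 0.5, 500)

alpha = 0.1
residuals = np.abs(y_cal - predict(x_cal))
# Conformal quantile: the ceil((n+1)(1-alpha))-th smallest calibration residual.
n = len(residuals)
q = np.sort(residuals)[int(np.ceil((n + 1) * (1 - alpha))) - 1]

x_test = np.array([1.0, 2.5, 4.0])
lower, upper = predict(x_test) - q, predict(x_test) + q
print(list(zip(lower, upper)))         # intervals with >= 90% marginal coverage
```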
arXiv Detail & Related papers (2023-06-11T04:03:58Z)
- Learning Representations on the Unit Sphere: Investigating Angular Gaussian and von Mises-Fisher Distributions for Online Continual Learning [7.145581090959242]
We propose a memory-based representation learning technique equipped with our new loss functions.
We demonstrate that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries.
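The summary does not spell out the losses, so the following is only a generic illustration of a von Mises-Fisher style objective on the unit sphere: with a single shared concentration kappa, the vMF normalizer cancels across classes, leaving a cross-entropy over kappa-scaled cosine similarities to unit-norm class prototypes. The memory mechanism and the paper's actual loss functions are not reproduced; shapes and kappa are assumptions.

```python
# Illustrative vMF-style loss on the unit sphere (fixed, shared concentration kappa);
# not the paper's exact losses or its memory-based continual-learning machinery.
import torch
import torch.nn.functional as F

def vmf_loss(features, prototypes, labels, kappa=16.0):
    """Cross-entropy over kappa-scaled cosine similarities to unit-norm class prototypes."""
    z = F.normalize(features, dim=1)             # project features onto the unit sphere
    mu = F.normalize(prototypes, dim=1)          # class mean directions on the sphere
    logits = kappa * z @ mu.t()                  # shared kappa: vMF normalizer cancels
    return F.cross_entropy(logits, labels)

# Toy usage with assumed shapes: 32 samples, 128-d features, 10 classes.
torch.manual_seed(0)
features = torch.randn(32, 128, requires_grad=True)
prototypes = torch.randn(10, 128, requires_grad=True)
labels = torch.randint(0, 10, (32,))
loss = vmf_loss(features, prototypes, labels)
loss.backward()
print(float(loss))
```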
arXiv Detail & Related papers (2023-06-06T02:38:01Z)
- Towards Understanding and Improving GFlowNet Training [71.85707593318297]
We introduce an efficient evaluation strategy to compare the learned sampling distribution to the target reward distribution.
We propose prioritized replay training of high-reward $x$, relative edge flow policy parametrization, and a novel guided trajectory balance objective.
arXiv Detail & Related papers (2023-05-11T22:50:41Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
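As a rough illustration of the general recipe, not the paper's exact construction or constraints, a small adversary network can parameterize per-example weights, i.e. a likelihood ratio over the batch, which the model minimizes against while the adversary maximizes. Any KL or norm constraint on the adversary is omitted here, and the data and architectures are assumptions.

```python
# Minimal parametric-adversary DRO sketch (assumed data, architectures, and schedule;
# constraints on the adversary, e.g. a KL penalty, are omitted for brevity).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] + 0.5 * torch.randn(512) > 0).long()           # assumed binary labels

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
adversary = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss(reduction="none")

for step in range(200):
    # Adversary step: push weight onto high-loss examples (maximize the reweighted risk).
    weights = torch.softmax(adversary(X).squeeze(1), dim=0)  # parametric likelihood ratio
    with torch.no_grad():
        per_example = ce(model(X), y)
    adv_loss = -(weights * per_example).sum()
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Model step: minimize the risk under the (frozen) adversarial weights.
    with torch.no_grad():
        weights = torch.softmax(adversary(X).squeeze(1), dim=0)
    per_example = ce(model(X), y)
    opt_model.zero_grad()
    (weights * per_example).sum().backward()
    opt_model.step()
```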
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
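The mechanism is simple enough to show end to end: calibrate a confidence threshold on labeled source data so that the fraction of source examples above the threshold matches the source accuracy, then report the fraction of unlabeled target examples above that same threshold. In the sketch below, the max-softmax confidence score and the synthetic arrays are assumptions.

```python
# ATC-style sketch (assumed arrays; max-softmax used as the confidence score).
import numpy as np

def atc_predict_accuracy(source_probs, source_labels, target_probs):
    """Predict target accuracy from unlabeled target data via a learned confidence threshold."""
    src_conf = source_probs.max(axis=1)
    src_correct = source_probs.argmax(axis=1) == source_labels
    src_acc = src_correct.mean()

    # Threshold t chosen so the fraction of source examples with confidence > t
    # matches the source accuracy.
    t = np.quantile(src_conf, 1.0 - src_acc)

    tgt_conf = target_probs.max(axis=1)
    return (tgt_conf > t).mean()        # predicted target accuracy

# Toy usage with assumed synthetic softmax outputs.
rng = np.random.default_rng(0)
source_probs = rng.dirichlet(np.ones(5) * 0.5, size=1000)
source_labels = rng.integers(0, 5, size=1000)
target_probs = rng.dirichlet(np.ones(5) * 0.5, size=1000)
print(atc_predict_accuracy(source_probs, source_labels, target_probs))
```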
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs)
PFNs leverage in-context learning in large-scale machine learning to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z)
- The Bures Metric for Generative Adversarial Networks [10.69910379275607]
Generative Adversarial Networks (GANs) are performant generative methods yielding high-quality samples.
We propose to match the real batch diversity to the fake batch diversity.
We observe that diversity matching reduces mode collapse substantially and has a positive effect on the sample quality.
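Diversity matching comes down to comparing the second-order statistics of real and fake feature batches, so the sketch below simply evaluates the squared Bures distance between their covariance matrices. The feature matrices are assumed stand-ins, and a differentiable (e.g. PyTorch) version would be needed to use this as a training loss.

```python
# Squared Bures distance between real and fake batch feature covariances (assumed inputs).
import numpy as np
from scipy.linalg import sqrtm

def bures_squared(A, B):
    """B^2(A, B) = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}) for PSD matrices."""
    root_A = sqrtm(A)
    cross = sqrtm(root_A @ B @ root_A)
    return np.trace(A) + np.trace(B) - 2.0 * np.real(np.trace(cross))

def batch_covariance(features):
    """Covariance of a (batch, dim) feature matrix."""
    centered = features - features.mean(axis=0, keepdims=True)
    return centered.T @ centered / (len(features) - 1)

# Toy usage: 'real' and 'fake' stand in for features of two batches.
rng = np.random.default_rng(0)
real = rng.normal(size=(256, 32))
fake = 0.5 * rng.normal(size=(256, 32))          # less diverse batch
print(bures_squared(batch_covariance(real), batch_covariance(fake)))
```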
arXiv Detail & Related papers (2020-06-16T12:04:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.