NeFSAC: Neurally Filtered Minimal Samples
- URL: http://arxiv.org/abs/2207.07872v1
- Date: Sat, 16 Jul 2022 08:02:05 GMT
- Title: NeFSAC: Neurally Filtered Minimal Samples
- Authors: Luca Cavalli, Marc Pollefeys, Daniel Barath
- Abstract summary: NeFSAC is an efficient algorithm for neural filtering of motion-inconsistent and poorly-conditioned minimal samples.
NeFSAC can be plugged into any existing RANSAC-based pipeline.
We tested NeFSAC on more than 100k image pairs from three publicly available real-world datasets.
- Score: 90.55214606751453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since RANSAC, a great deal of research has been devoted to improving both its
accuracy and run-time. Still, only a few methods aim at recognizing invalid
minimal samples early, before the often expensive model estimation and quality
calculation are done. To this end, we propose NeFSAC, an efficient algorithm
for neural filtering of motion-inconsistent and poorly-conditioned minimal
samples. We train NeFSAC to predict the probability of a minimal sample leading
to an accurate relative pose, only based on the pixel coordinates of the image
correspondences. Our neural filtering model learns typical motion patterns of
samples which lead to unstable poses, and regularities in the possible motions
to favour well-conditioned and likely-correct samples. The novel lightweight
architecture implements the main invariants of minimal samples for pose
estimation, and a novel training scheme addresses the problem of extreme class
imbalance. NeFSAC can be plugged into any existing RANSAC-based pipeline. We
integrate it into USAC and show that it consistently provides strong speed-ups
even under extreme train-test domain gaps - for example, the model trained for
the autonomous driving scenario works on PhotoTourism too. We tested NeFSAC on
more than 100k image pairs from three publicly available real-world datasets
and found that it leads to one order of magnitude speed-up, while often finding
more accurate results than USAC alone. The source code is available at
https://github.com/cavalli1234/NeFSAC.
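The abstract describes rejecting minimal samples before the expensive model estimation and quality calculation. A minimal sketch of how such a pre-estimation filter plugs into a plain RANSAC loop; all names here, `filter_prob` in particular, are illustrative placeholders standing in for the trained neural filter, not the authors' API:

```python
import random

def ransac_with_sample_filter(correspondences, estimate_model, score_model,
                              filter_prob, threshold=0.5, iterations=1000,
                              sample_size=5):
    """Plain RANSAC loop with a pre-estimation sample filter, in the spirit
    of NeFSAC.  `filter_prob` maps a minimal sample to a probability of
    yielding an accurate model; samples below `threshold` are rejected
    before the costly estimation and scoring steps."""
    best_model, best_score = None, float("-inf")
    for _ in range(iterations):
        sample = random.sample(correspondences, sample_size)
        # Reject motion-inconsistent / poorly-conditioned samples early.
        if filter_prob(sample) < threshold:
            continue
        model = estimate_model(sample)
        if model is None:
            continue
        score = score_model(model, correspondences)
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score
```

The speed-up comes from skipping `estimate_model` and `score_model` for samples the filter deems unlikely to yield a good pose, which is where most of the runtime goes in a real pipeline.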
Related papers
- Fine Structure-Aware Sampling: A New Sampling Training Scheme for
Pixel-Aligned Implicit Models in Single-View Human Reconstruction [105.46091601932524]
We introduce Fine Structure-Aware Sampling (FSS) to train pixel-aligned implicit models for single-view human reconstruction.
FSS proactively adapts to the thickness and complexity of surfaces.
A mesh thickness loss signal for pixel-aligned implicit models is also proposed.
arXiv Detail & Related papers (2024-02-29T14:26:46Z)
- FewSOME: One-Class Few Shot Anomaly Detection with Siamese Networks [0.5735035463793008]
'Few Shot anOMaly detection' (FewSOME) is a deep One-Class Anomaly Detection algorithm that detects anomalies accurately after training on only a few examples of the normal class.
FewSOME is aided by pretrained weights with an architecture based on Siamese Networks.
Our experiments demonstrate FewSOME performs at state-of-the-art level on benchmark datasets.
arXiv Detail & Related papers (2023-01-17T15:32:34Z)
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a speed similar to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- Robust Few-shot Learning Without Using any Adversarial Samples [19.34427461937382]
A few efforts have been made to combine the few-shot problem with the robustness objective using sophisticated Meta-Learning techniques.
We propose a simple but effective alternative that does not require any adversarial samples.
Inspired by the cognitive decision-making process in humans, we enforce high-level feature matching between the base class data and their corresponding low-frequency samples.
arXiv Detail & Related papers (2022-11-03T05:58:26Z)
- Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks [86.55317144826179]
Previous methods always leverage the transferable adversarial examples as the model fingerprint.
We propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC)
SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning.
arXiv Detail & Related papers (2022-10-21T02:07:50Z)
- Adaptive Reordering Sampler with Neurally Guided MAGSAC [63.139445467355934]
We propose a new sampler for robust estimators that always selects the sample with the highest probability of consisting only of inliers.
After every unsuccessful iteration, the inlier probabilities are updated in a principled way via a Bayesian approach.
We introduce a new loss that exploits, in a geometrically justifiable manner, the orientation and scale that can be estimated for any type of feature.
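The sampler above updates per-correspondence inlier probabilities after each unsuccessful iteration. A toy sketch of the idea using a Beta belief per correspondence; this is an illustrative stand-in for the paper's principled Bayesian update, not its exact rule, and all names are hypothetical:

```python
class BetaBelief:
    """Per-correspondence inlier belief modeled as a Beta(alpha, beta)
    distribution, updated from the outcome of each RANSAC iteration."""

    def __init__(self, n, alpha=1.0, beta=1.0):
        # Start every correspondence at the same uninformative prior.
        self.alpha = [alpha] * n
        self.beta = [beta] * n

    def prob(self, i):
        # Posterior mean of the Beta distribution for correspondence i.
        return self.alpha[i] / (self.alpha[i] + self.beta[i])

    def update(self, sample, success):
        # An unsuccessful iteration is evidence against the drawn
        # correspondences being all-inlier; a successful one is evidence for.
        for i in sample:
            if success:
                self.alpha[i] += 1.0
            else:
                self.beta[i] += 1.0

    def best_sample(self, k):
        # Select the k correspondences with the highest inlier probability,
        # mirroring the sampler's preference for likely all-inlier samples.
        order = sorted(range(len(self.alpha)), key=self.prob, reverse=True)
        return order[:k]
```

In this sketch the sampler would draw `best_sample(k)` each round and call `update` with the iteration's outcome, so repeatedly failing correspondences sink in the ordering.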
arXiv Detail & Related papers (2021-11-28T10:16:38Z)
- Learnable Locality-Sensitive Hashing for Video Anomaly Detection [44.19433917039249]
Video anomaly detection (VAD) mainly refers to identifying anomalous events that have not occurred in the training set where only normal samples are available.
We propose a novel distance-based VAD method to take advantage of all the available normal data efficiently and flexibly.
arXiv Detail & Related papers (2021-11-15T15:25:45Z)
- On the Difficulty of Membership Inference Attacks [11.172550334631921]
Recent studies propose membership inference (MI) attacks on deep models.
Despite their apparent success, these studies only report the accuracy, precision, and recall of the positive (member) class.
We show that the way MI attack performance has been reported is often misleading, because these attacks suffer from a high false positive rate, or false alarm rate (FAR), which goes unreported.
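The point of the paper above is that accuracy, precision, and recall alone can hide a high FAR. A small helper showing the four quantities side by side; this is a generic confusion-matrix sketch, not the paper's code:

```python
def mi_attack_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics for a membership-inference attack.
    Accuracy/precision/recall can look strong while the false positive
    rate (false alarm rate, FAR) stays hidden."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    far = fp / (fp + tn) if fp + tn else 0.0  # the often-unreported number
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "far": far}
```

For instance, an attack with 90 true positives, 40 false positives, 60 true negatives, and 10 false negatives reports 90% recall while flagging 40% of non-members as members.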
arXiv Detail & Related papers (2020-05-27T23:09:17Z)
- Frustratingly Simple Few-Shot Object Detection [98.42824677627581]
We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task.
Such a simple approach outperforms the meta-learning methods by roughly 2-20 points on current benchmarks.
arXiv Detail & Related papers (2020-03-16T00:29:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.