SupeRANSAC: One RANSAC to Rule Them All
- URL: http://arxiv.org/abs/2506.04803v1
- Date: Thu, 05 Jun 2025 09:30:27 GMT
- Title: SupeRANSAC: One RANSAC to Rule Them All
- Authors: Daniel Barath
- Abstract summary: SupeRANSAC is a novel unified RANSAC pipeline. We provide a detailed analysis of the techniques that make RANSAC effective for specific vision tasks. We demonstrate significant performance improvements over the state-of-the-art on multiple problems and datasets.
- Score: 40.60228747873885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust estimation is a cornerstone in computer vision, particularly for tasks like Structure-from-Motion and Simultaneous Localization and Mapping. RANSAC and its variants are the gold standard for estimating geometric models (e.g., homographies, relative/absolute poses) from outlier-contaminated data. Despite RANSAC's apparent simplicity, achieving consistently high performance across different problems is challenging. While recent research often focuses on improving specific RANSAC components (e.g., sampling, scoring), overall performance is frequently more influenced by the "bells and whistles" (i.e., the implementation details and problem-specific optimizations) within a given library. Popular frameworks like OpenCV and PoseLib demonstrate varying performance, excelling in some tasks but lagging in others. We introduce SupeRANSAC, a novel unified RANSAC pipeline, and provide a detailed analysis of the techniques that make RANSAC effective for specific vision tasks, including homography, fundamental/essential matrix, and absolute/rigid pose estimation. SupeRANSAC is designed for consistent accuracy across these tasks, improving upon the best existing methods by, for example, 6 AUC points on average for fundamental matrix estimation. We demonstrate significant performance improvements over the state-of-the-art on multiple problems and datasets. Code: https://github.com/danini/superansac
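The hypothesize-and-verify loop at the heart of RANSAC can be sketched in a few lines. The toy example below fits a 2D line rather than one of the paper's geometric models (homography, fundamental/essential matrix, pose), and deliberately omits the problem-specific sampling, scoring, and local-optimization "bells and whistles" that the abstract argues dominate real-world performance; it only illustrates the shared skeleton that SupeRANSAC unifies.

```python
import random

def ransac_line(points, iters=500, thresh=0.1, seed=0):
    """Minimal RANSAC sketch: fit y = a*x + b to outlier-contaminated points.

    Illustrative only; a production pipeline (e.g., SupeRANSAC) adds guided
    sampling, robust scoring, local optimization, and adaptive termination.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        # 1. Hypothesize: fit a model to a minimal sample (2 points for a line).
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample: vertical line, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Verify: score the hypothesis by counting inliers within the threshold.
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

Because any two exact inliers reproduce the true line, running this on mostly-clean data recovers the generating model while ignoring gross outliers.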
Related papers
- RANSAC Back to SOTA: A Two-stage Consensus Filtering for Real-time 3D Registration [15.81035895734261]
Correspondence-based point cloud registration (PCR) plays a key role in robotics and computer vision. We propose a two-stage consensus filtering (TCF) that elevates RANSAC to state-of-the-art (SOTA) speed and accuracy.
arXiv Detail & Related papers (2024-10-21T06:46:49Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- KVN: Keypoints Voting Network with Differentiable RANSAC for Stereo Pose Estimation [1.1603243575080535]
We introduce a differentiable RANSAC layer into a well-known monocular pose estimation network.
We show that the differentiable RANSAC layer contributes to the accuracy of the proposed network.
arXiv Detail & Related papers (2023-07-21T12:43:07Z)
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- Deep Active Ensemble Sampling For Image Classification [8.31483061185317]
Active learning frameworks aim to reduce the cost of data annotation by actively requesting the labeling for the most informative data points.
Proposed approaches include uncertainty-based techniques, geometric methods, and implicit combinations of the two.
We present an innovative integration of recent progress in both uncertainty-based and geometric frameworks to enable an efficient exploration/exploitation trade-off in sample selection strategy.
Our framework provides two advantages: (1) accurate posterior estimation, and (2) tune-able trade-off between computational overhead and higher accuracy.
arXiv Detail & Related papers (2022-10-11T20:20:20Z)
- Space-Partitioning RANSAC [30.255457622022487]
A new algorithm is proposed to accelerate RANSAC model quality calculations.
The method is based on partitioning the joint correspondence space, e.g., 2D-2D point correspondences, into a pair of regular grids.
It reduces the RANSAC run-time by 41% with provably no deterioration in the accuracy.
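The space-partitioning idea can be sketched for a toy 2D line model. The paper partitions joint correspondence spaces (e.g., 2D-2D matches) into regular grids; the cell-skipping bound below, based on the distance from a cell center to the model minus half the cell diagonal, is an illustrative assumption of how bucketing lets scoring skip regions that provably contain no inliers, not the paper's exact scheme.

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    """Bucket 2D points into a regular grid of square cells of side `cell`."""
    grid = defaultdict(list)
    for x, y in points:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    return grid

def count_inliers_grid(grid, cell, a, b, thresh):
    """Count inliers of the line y = a*x + b, skipping far-away cells.

    Point-to-line distance is |a*x - y + b| / sqrt(a^2 + 1). A cell is skipped
    when even its closest possible point (cell center minus half the cell
    diagonal) lies beyond the threshold, so no inlier can ever be missed.
    """
    norm = math.hypot(a, 1.0)
    half_diag = cell * math.sqrt(2.0) / 2.0
    count = 0
    for (i, j), pts in grid.items():
        cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
        if abs(a * cx - cy + b) / norm - half_diag > thresh:
            continue  # conservatively skip: cell cannot contain inliers
        count += sum(1 for x, y in pts if abs(a * x - y + b) / norm <= thresh)
    return count
```

The bound is conservative, so the grid-based count matches an exhaustive scan exactly while touching only the cells near the model.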
arXiv Detail & Related papers (2021-11-24T10:10:04Z)
- USACv20: robust essential, fundamental and homography matrix estimation [68.65610177368617]
We review the most recent RANSAC-like hypothesize-and-verify robust estimators.
The best performing ones are combined to create a state-of-the-art version of the Universal Sample Consensus (USAC) algorithm.
A proposed method, USACv20, is tested on eight publicly available real-world datasets.
arXiv Detail & Related papers (2021-04-11T16:27:02Z)
- How to distribute data across tasks for meta-learning? [59.608652082495624]
We show that the optimal number of data points per task depends on the budget, but it converges to a unique constant value for large budgets.
Our results suggest a simple and efficient procedure for data collection.
arXiv Detail & Related papers (2021-03-15T15:38:47Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.