USACv20: robust essential, fundamental and homography matrix estimation
- URL: http://arxiv.org/abs/2104.05044v1
- Date: Sun, 11 Apr 2021 16:27:02 GMT
- Title: USACv20: robust essential, fundamental and homography matrix estimation
- Authors: Maksym Ivashechkin, Daniel Barath, Jiri Matas
- Abstract summary: We review the most recent RANSAC-like hypothesize-and-verify robust estimators.
The best performing ones are combined to create a state-of-the-art version of the Universal Sample Consensus (USAC) algorithm.
The proposed method, USACv20, is tested on eight publicly available real-world datasets.
- Score: 68.65610177368617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We review the most recent RANSAC-like hypothesize-and-verify robust
estimators. The best performing ones are combined to create a state-of-the-art
version of the Universal Sample Consensus (USAC) algorithm. A recent objective
is to implement a modular and optimized framework, making future RANSAC modules
easy to include. The proposed method, USACv20, is tested on eight publicly
available real-world datasets, estimating homographies, fundamental and
essential matrices. On average, USACv20 produces the most geometrically
accurate models and is the fastest of the compared state-of-the-art robust
estimators. All of the reported components significantly improve on the
original USAC algorithm. The pipeline will be made available after
publication.
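The abstract describes robust, hypothesize-and-verify estimation of homographies, fundamental matrices and essential matrices. As a minimal, hedged illustration of how a USAC-style estimator is typically invoked, and not the USACv20 pipeline itself, the Python sketch below uses OpenCV's USAC method flags (assumed available from OpenCV 4.5 onward, where a related USAC framework is implemented); the correspondences, relative pose and camera intrinsics are hypothetical placeholders. A bare-bones version of the underlying hypothesize-and-verify loop is sketched after the related-papers list.

```python
import numpy as np
import cv2  # OpenCV >= 4.5 is assumed; it exposes the cv2.USAC_* method flags

# Hypothetical correspondences: random 3D points projected into two views with
# a known relative pose (all numbers here are illustrative placeholders).
rng = np.random.default_rng(0)
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
X = np.column_stack([rng.uniform(-2, 2, 300),
                     rng.uniform(-2, 2, 300),
                     rng.uniform(4, 8, 300)])   # scene points in front of camera 1

def project(points, R, t):
    """Pinhole projection of Nx3 points with rotation R and translation t."""
    cam = points @ R.T + t
    img = cam @ K.T
    return img[:, :2] / img[:, 2:3]

a = np.deg2rad(5.0)
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])    # small rotation about the y-axis
t = np.array([0.3, 0.0, 0.0])                   # small horizontal baseline
pts1 = project(X, np.eye(3), np.zeros(3)) + rng.normal(scale=0.5, size=(300, 2))
pts2 = project(X, R, t) + rng.normal(scale=0.5, size=(300, 2))

# Homography, fundamental and essential matrices via a USAC variant
# (MAGSAC++ scoring); each call returns the model and an inlier mask.
H, h_mask = cv2.findHomography(pts1, pts2, cv2.USAC_MAGSAC,
                               ransacReprojThreshold=1.5, confidence=0.999)
F, f_mask = cv2.findFundamentalMat(pts1, pts2, cv2.USAC_MAGSAC,
                                   ransacReprojThreshold=1.5, confidence=0.999)
E, e_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.USAC_MAGSAC,
                                 prob=0.999, threshold=1.5)

for name, model in (("H", H), ("F", F), ("E", E)):
    print(name, "estimated" if model is not None else "not found")
```

The cv2.USAC_MAGSAC flag selects MAGSAC++-style scoring inside OpenCV's USAC framework; other flags such as cv2.USAC_DEFAULT or cv2.USAC_ACCURATE trade speed against accuracy. Whether any of these flags matches the exact USACv20 configuration evaluated in the paper is not claimed here.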
Related papers
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning [55.96599486604344]
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process.
We use Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals.
The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data.
arXiv Detail & Related papers (2024-05-01T11:10:24Z) - Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z) - SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval [11.38022203865326]
The SPLADE model provides highly sparse representations and competitive results with respect to state-of-the-art dense and sparse approaches.
We modify the pooling mechanism, benchmark a model solely based on document expansion, and introduce models trained with distillation.
Overall, SPLADE is considerably improved with more than $9\%$ gains on NDCG@10 on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark.
arXiv Detail & Related papers (2021-09-21T10:43:42Z) - COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing [27.870537087888334]
We propose a novel Arbitrary-Sampling neTwork, dubbed COAST, to solve problems of arbitrary-sampling (including unseen sampling matrices) with one single model.
COAST is able to handle arbitrary sampling matrices with one single model and to achieve state-of-the-art performance with fast speed.
arXiv Detail & Related papers (2021-07-15T10:05:00Z) - VSAC: Efficient and Accurate Estimator for H and F [68.65610177368617]
VSAC is a RANSAC-type robust estimator with a number of novelties.
It is significantly faster than all its predecessors and runs on average in 1-2 ms on a CPU.
It is two orders of magnitude faster and yet as precise as MAGSAC++, the currently most accurate estimator of two-view geometry.
arXiv Detail & Related papers (2021-06-18T17:04:57Z) - Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms.
arXiv Detail & Related papers (2021-05-23T19:21:55Z) - GRAD-MATCH: A Gradient Matching Based Data Subset Selection for Efficient Learning [23.75284126177203]
We propose a general framework, GRAD-MATCH, which finds subsets that closely match the gradient of the training or validation set.
We show that GRAD-MATCH significantly and consistently outperforms several recent data-selection algorithms.
arXiv Detail & Related papers (2021-02-27T04:09:32Z) - AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z) - C-SURE: Shrinkage Estimator and Prototype Classifier for Complex-Valued Deep Learning [15.906530504220179]
We propose C-SURE, a novel Stein's unbiased risk estimate (SURE) of the JS estimator on the manifold of complex-valued data.
C-SURE is more accurate and robust than SurReal, and the shrinkage estimator is always better than MLE for the same prototype classifier.
arXiv Detail & Related papers (2020-06-22T20:02:20Z)
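Several of the estimators listed above (USACv20 itself, Consensus-Adaptive RANSAC, VSAC) refine the same hypothesize-and-verify template referenced after the abstract. The NumPy sketch below shows that template in its most basic form for homography fitting; it is a generic RANSAC loop under stated assumptions (minimal 4-point samples, a fixed iteration budget, a plain reprojection-error threshold) and is not an implementation of any of the listed methods.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: homography from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)          # null-space vector holds the homography entries
    return H / H[2, 2]

def reprojection_errors(H, src, dst):
    """One-way reprojection error of every correspondence under H."""
    proj = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1)

def ransac_homography(src, dst, thresh=2.0, iters=1000, seed=0):
    """Bare-bones hypothesize-and-verify loop for homography estimation.
    Degenerate samples are not filtered; a NaN model simply scores zero inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(src), size=4, replace=False)   # hypothesize
        H = fit_homography(src[sample], dst[sample])
        inliers = reprojection_errors(H, src, dst) < thresh    # verify
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares (DLT) refit on the largest consensus set found.
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Guided sampling, an adaptive iteration count based on the current inlier ratio, cheap sequential verification, local optimization on the consensus set and degeneracy testing are exactly the components that distinguish USAC-style estimators from this baseline loop.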