VSAC: Efficient and Accurate Estimator for H and F
- URL: http://arxiv.org/abs/2106.10240v1
- Date: Fri, 18 Jun 2021 17:04:57 GMT
- Title: VSAC: Efficient and Accurate Estimator for H and F
- Authors: Maksym Ivashechkin, Daniel Barath, Jiri Matas
- Abstract summary: VSAC is a RANSAC-type robust estimator with a number of novelties.
It is significantly faster than all its predecessors and runs on average in 1-2 ms on a CPU.
It is two orders of magnitude faster and yet as precise as MAGSAC++, the currently most accurate estimator of two-view geometry.
- Score: 68.65610177368617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present VSAC, a RANSAC-type robust estimator with a number of novelties.
It benefits from the introduction of the concept of independent inliers that
improves significantly the efficacy of the dominant plane handling and, also,
allows near error-free rejection of incorrect models, without false positives.
The local optimization process and its application is improved so that it is
run on average only once. Further technical improvements include adaptive
sequential hypothesis verification and efficient model estimation via Gaussian
elimination. Experiments on four standard datasets show that VSAC is
significantly faster than all its predecessors and runs on average in 1-2 ms,
on a CPU. It is two orders of magnitude faster and yet as precise as MAGSAC++,
the currently most accurate estimator of two-view geometry. In the repeated
runs on EVD, HPatches, PhotoTourism, and Kusvod2 datasets, it never failed.
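VSAC's novelties (independent inliers, one-shot local optimization, adaptive sequential verification) are built on top of the classic RANSAC loop. For readers outside the area, that baseline loop can be sketched for 2D line fitting as follows; this is an illustrative toy, not VSAC itself, and `ransac_line` with its parameters is invented for the example.

```python
import random

def ransac_line(points, iters=200, thresh=0.05, seed=0):
    """Classic RANSAC for a 2D line a*x + b*y + c = 0 (a toy sketch, not VSAC).

    Repeatedly fits a line to a minimal 2-point sample, scores it by the
    number of points within `thresh` of the line, and keeps the best model.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Implicit line through the two sampled points.
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:  # degenerate sample (duplicate point)
            continue
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a / norm, b / norm, c / norm), inliers
    return best_model, best_inliers
```

VSAC replaces the naive "verify every point against every hypothesis" scoring above with adaptive sequential verification, and its iteration count is set adaptively rather than fixed.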
Related papers
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model [89.8764435351222]
We propose a new family of unbiased estimators called WTA-CRS, for matrix production with reduced variance.
Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, our proposed estimators exhibit lower variance compared to existing ones.
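WTA-CRS builds on plain column-row sampling (CRS) for approximate matrix products. The unbiased baseline estimator can be sketched as follows; this is a pure-Python toy with an invented helper name (`crs_matmul`), and the winner-take-all refinement that reduces variance further is omitted.

```python
import random

def crs_matmul(A, B, k, seed=0):
    """Unbiased column-row sampling (CRS) estimate of A @ B.

    Samples k inner-dimension indices j with probability p_j proportional to
    ||A[:, j]|| * ||B[j, :]|| and averages the rescaled outer products
    A[:, j] B[j, :] / p_j.  Each term has expectation A @ B, so the
    average is unbiased for the exact product.
    """
    rng = random.Random(seed)
    n = len(A[0])  # inner dimension
    col_norm = [sum(A[i][j] ** 2 for i in range(len(A))) ** 0.5 for j in range(n)]
    row_norm = [sum(b * b for b in B[j]) ** 0.5 for j in range(n)]
    w = [col_norm[j] * row_norm[j] for j in range(n)]
    total = sum(w)
    p = [wj / total for wj in w]
    est = [[0.0] * len(B[0]) for _ in range(len(A))]
    for _ in range(k):
        j = rng.choices(range(n), weights=p)[0]
        for r in range(len(A)):
            for c in range(len(B[0])):
                est[r][c] += A[r][j] * B[j][c] / (k * p[j])
    return est
```

The norm-proportional sampling probabilities are the classic variance-minimizing choice for CRS; the paper's contribution is a lower-variance family of estimators on top of this idea for transformer fine-tuning.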
arXiv Detail & Related papers (2023-05-24T15:52:08Z)
- A Computational Exploration of Emerging Methods of Variable Importance Estimation [0.0]
Estimating the importance of variables is an essential task in modern machine learning.
We propose a computational and theoretical exploration of the emerging methods of variable importance estimation.
The implementation has shown that PERF has the best performance in the case of highly correlated data.
arXiv Detail & Related papers (2022-08-05T20:00:56Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation [29.27760413892272]
Post-click conversion, as a strong signal indicating the user preference, is salutary for building recommender systems.
Most existing methods utilize counterfactual learning to debias recommender systems.
We propose a novel double learning approach for the MRDR estimator, which can convert the error imputation into the general CVR estimation.
arXiv Detail & Related papers (2021-05-28T06:59:49Z)
- Approximate Cross-Validation with Low-Rank Data in High Dimensions [35.74302895575951]
Cross-validation is an important tool for model assessment.
ACV methods can lose both speed and accuracy in high dimensions unless sparsity structure is present in the data.
We develop a new algorithm for ACV that is fast and accurate in the presence of ALR data.
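The flavor of such cross-validation shortcuts can be seen in the classic exact leave-one-out identity for linear smoothers, shown here for one-dimensional ridge regression. This is a toy sketch with an invented helper name; the paper's ACV algorithm generalizes this kind of idea to high-dimensional, approximately low-rank (ALR) data.

```python
def loo_residuals_ridge(x, y, lam):
    """Exact leave-one-out residuals for 1-D ridge regression, two ways.

    For a linear smoother such as ridge, the LOO residual equals the
    full-fit residual divided by (1 - h_i), where h_i is the leverage of
    point i.  This avoids refitting the model n times.
    """
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    w = sxy / (sxx + lam)                 # full-data ridge coefficient
    shortcut = []
    for xi, yi in zip(x, y):
        h = xi * xi / (sxx + lam)         # leverage of point i
        shortcut.append((yi - xi * w) / (1 - h))
    # Naive check: actually refit with each point removed.
    naive = []
    for xi, yi in zip(x, y):
        w_i = (sxy - xi * yi) / (sxx - xi * xi + lam)
        naive.append(yi - xi * w_i)
    return shortcut, naive
```

The two lists agree exactly here; ACV methods trade that exactness for speed in settings where no closed-form leverage trick applies.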
arXiv Detail & Related papers (2020-08-24T16:34:05Z)
- AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights [53.8489656709356]
Normalization techniques are a boon for modern deep learning.
It is often overlooked, however, that the additional introduction of momentum results in a rapid reduction in effective step sizes for scale-invariant weights.
In this paper, we verify that the widely adopted combination of the two ingredients leads to the premature decay of effective step sizes and sub-optimal model performance.
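The phenomenon is easy to reproduce in a toy simulation: for a scale-invariant loss (one that depends only on w/||w||), gradients are orthogonal to w, so every update grows ||w|| and thereby shrinks the effective step size lr/||w||^2, and momentum accumulates these orthogonal pushes. A minimal 2-D sketch follows (illustrative only, not AdamP; the loss and all parameters are invented here).

```python
import math

def simulate(momentum, steps=200, lr=0.1):
    """Track ||w|| under heavy-ball SGD on a scale-invariant toy loss.

    Loss L(w) = -(w / ||w||) . t depends only on the direction of w, so
    its gradient is orthogonal to w; every step therefore increases ||w||,
    and momentum makes that growth faster.
    """
    w = [1.0, 0.0]
    t = [0.0, 1.0]                      # target direction
    v = [0.0, 0.0]                      # momentum buffer
    for _ in range(steps):
        n = math.hypot(w[0], w[1])
        u = [w[0] / n, w[1] / n]
        dot = u[0] * t[0] + u[1] * t[1]
        # Gradient of the scale-invariant loss: orthogonal to w by construction.
        g = [-(t[i] - dot * u[i]) / n for i in range(2)]
        v = [momentum * v[i] + g[i] for i in range(2)]
        w = [w[i] - lr * v[i] for i in range(2)]
    return math.hypot(w[0], w[1])
```

Running this with momentum 0.9 ends with a noticeably larger ||w|| than plain SGD, i.e. a smaller effective step size, which is the premature decay the paper describes and AdamP counteracts by projecting out the radial component of the update.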
arXiv Detail & Related papers (2020-06-15T08:35:15Z)
- DADA: Differentiable Automatic Data Augmentation [58.560309490774976]
We propose Differentiable Automatic Data Augmentation (DADA) which dramatically reduces the cost.
We conduct extensive experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets.
Results show our DADA is at least one order of magnitude faster than the state-of-the-art while achieving very comparable accuracy.
arXiv Detail & Related papers (2020-03-08T13:23:14Z)
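DADA's differentiability rests on relaxing the discrete selection of augmentation operations via the Gumbel-Softmax trick, which can be sketched in isolation as follows. This is an illustrative toy with an invented function name; DADA's actual gradient estimator and augmentation search space are omitted.

```python
import math
import random

def gumbel_softmax(logits, tau, rng):
    """Differentiable relaxation of sampling from a categorical distribution.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-scaled
    softmax; as tau -> 0 the output approaches a one-hot sample, while for
    tau > 0 it stays smooth in the logits and so admits gradients.
    """
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(z)                          # stabilize the softmax
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]
```

At low temperature the relaxed samples concentrate on the highest-logit choice with the expected categorical frequencies, which lets an augmentation policy be trained end-to-end by gradient descent.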
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.