Making Affine Correspondences Work in Camera Geometry Computation
- URL: http://arxiv.org/abs/2007.10032v1
- Date: Mon, 20 Jul 2020 12:07:48 GMT
- Title: Making Affine Correspondences Work in Camera Geometry Computation
- Authors: Daniel Barath, Michal Polic, Wolfgang Förstner, Torsten Sattler, Tomas Pajdla, Zuzana Kukelova
- Abstract summary: Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
- Score: 62.7633180470428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local features, e.g., SIFT and its affine and learned variants, provide
region-to-region rather than point-to-point correspondences. This has recently
been exploited to create new minimal solvers for classical problems such as
homography, essential and fundamental matrix estimation. The main advantage of
such solvers is that their sample size is smaller, e.g., only two instead of
four matches are required to estimate a homography. Works proposing such
solvers often claim a significant improvement in run-time thanks to fewer
RANSAC iterations. We show that this argument is not valid in practice if the
solvers are used naively. To overcome this, we propose guidelines for effective
use of region-to-region matches in the course of a full model estimation
pipeline. We propose a method for refining the local feature geometries by
symmetric intensity-based matching, combine uncertainty propagation inside
RANSAC with preemptive model verification, show a general scheme for computing
uncertainty of minimal solver results, and adapt the sample cheirality check
for homography estimation. Our experiments show that affine solvers can achieve
accuracy comparable to point-based solvers at faster run-times when following
our guidelines. We make code available at
https://github.com/danini/affine-correspondences-for-camera-geometry.
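To make the smaller sample size concrete, the following sketch (a minimal illustration under assumed input conventions, not the solver released in the repository above) estimates a homography from two affine correspondences. Each correspondence contributes two DLT equations from the point match and four linear equations obtained by equating the 2x2 affine matrix with the Jacobian of the homography mapping, so two correspondences already over-determine the eight degrees of freedom of H. In practice this step would be wrapped in coordinate normalization and a robust loop such as RANSAC.

```python
import numpy as np

def homography_from_two_acs(acs):
    """Estimate a homography from >= 2 affine correspondences (illustrative sketch).

    Each correspondence is (x1, y1, x2, y2, A), where (x1, y1) -> (x2, y2)
    and A is the 2x2 local affine transformation, i.e. the Jacobian of the
    homography at (x1, y1).  Each AC yields 2 DLT equations from the point
    match and 4 linear equations from the affine part, so two ACs give 12
    equations for the 9 entries of H (8 degrees of freedom).
    """
    rows = []
    for x1, y1, x2, y2, A in acs:
        a11, a12 = A[0]
        a21, a22 = A[1]
        # Point (DLT) constraints: x2 * s = h1*x1 + h2*y1 + h3 and
        # y2 * s = h4*x1 + h5*y1 + h6, with s = h7*x1 + h8*y1 + h9.
        rows.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1, -x2])
        rows.append([0, 0, 0, x1, y1, 1, -y2 * x1, -y2 * y1, -y2])
        # Affine constraints from A = d(x2, y2)/d(x1, y1), e.g.
        # a11 * s = h1 - x2*h7  =>  h1 - (x2 + a11*x1)*h7 - a11*y1*h8 - a11*h9 = 0.
        rows.append([1, 0, 0, 0, 0, 0, -(x2 + a11 * x1), -a11 * y1, -a11])
        rows.append([0, 1, 0, 0, 0, 0, -a12 * x1, -(x2 + a12 * y1), -a12])
        rows.append([0, 0, 0, 1, 0, 0, -(y2 + a21 * x1), -a21 * y1, -a21])
        rows.append([0, 0, 0, 0, 1, 0, -a22 * x1, -(y2 + a22 * y1), -a22])
    M = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(M)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```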
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid Registration [76.40993825836222]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration.
The proposed method greatly improves the accuracy of non-rigid registration while maintaining relatively high solution efficiency.
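For reference, one common way to symmetrize the point-to-plane distance is to project the displacement onto the normals of both surfaces; the sketch below shows this generic formulation and may differ in detail from the exact objective optimized in SPARE.

```python
import numpy as np

def symmetric_point_to_plane(P, Q, NP, NQ):
    """Per-correspondence symmetrized point-to-plane residuals (generic sketch).

    P, Q   : (N, 3) corresponding 3D points from the two surfaces.
    NP, NQ : (N, 3) unit normals at P and Q.
    The displacement is projected onto the normals of *both* surfaces,
    so the residual is symmetric with respect to source and target.
    """
    d = P - Q
    return np.einsum("ij,ij->i", d, NP) ** 2 + np.einsum("ij,ij->i", d, NQ) ** 2
```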
arXiv Detail & Related papers (2024-05-30T15:55:04Z)
- DAC: Detector-Agnostic Spatial Covariances for Deep Local Features [11.494662473750505]
Current deep visual local feature detectors do not model the spatial uncertainty of detected features.
We propose two post-hoc covariance estimates that can be plugged into any pretrained deep feature detector.
arXiv Detail & Related papers (2023-05-20T17:43:09Z)
- Space-Partitioning RANSAC [30.255457622022487]
A new algorithm is proposed to accelerate RANSAC model quality calculations.
The method is based on partitioning the joint correspondence space, e.g., 2D-2D point correspondences, into a pair of regular grids.
It reduces the RANSAC run-time by 41% with provably no deterioration in accuracy.
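As a toy illustration of the partitioning idea (a hedged sketch, not the paper's algorithm), correspondences can be bucketed by the pair of regular grid cells they occupy in the two images, so that model-quality evaluation can operate on cell pairs and reject whole buckets before touching individual residuals; all names below are illustrative.

```python
import numpy as np
from collections import defaultdict

def bucket_correspondences(pts1, pts2, size1, size2, cells=8):
    """Group 2D-2D correspondences by their (cell in image 1, cell in image 2) pair.

    pts1, pts2   : (N, 2) arrays of matching points in the two images.
    size1, size2 : (width, height) of the two images.
    Returns {(c1, c2): [indices]} so that a model hypothesis can be scored
    cell-pair by cell-pair instead of correspondence by correspondence.
    """
    def cell_id(pts, size):
        cx = np.clip((pts[:, 0] * cells / size[0]).astype(int), 0, cells - 1)
        cy = np.clip((pts[:, 1] * cells / size[1]).astype(int), 0, cells - 1)
        return cy * cells + cx

    c1, c2 = cell_id(pts1, size1), cell_id(pts2, size2)
    buckets = defaultdict(list)
    for i, key in enumerate(zip(c1.tolist(), c2.tolist())):
        buckets[key].append(i)
    return dict(buckets)
```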
arXiv Detail & Related papers (2021-11-24T10:10:04Z)
- Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Minimax Problems [80.46370778277186]
Large scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and training of generative adversarial networks.
We develop a communication-efficient distributed extragradient algorithm, LocalAdaSient, with an adaptive learning rate suitable for solving convex-concave minimax problems in the Parameter-Server model.
We demonstrate its efficacy through several experiments in both the homogeneous and heterogeneous settings.
arXiv Detail & Related papers (2021-06-18T09:42:05Z)
- Finding Geometric Models by Clustering in the Consensus Space [61.65661010039768]
We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies.
We present a number of applications where the use of multiple geometric models improves accuracy.
These include pose estimation from multiple generalized homographies and trajectory estimation of fast-moving objects.
arXiv Detail & Related papers (2021-03-25T14:35:07Z)
- HSolo: Homography from a single affine aware correspondence [0.0]
We present a novel procedure for homography estimation that is particularly well suited for inlier-poor domains.
Especially at low inlier rates, the algorithm provides dramatic performance improvements.
arXiv Detail & Related papers (2020-09-10T17:13:23Z)
- Entropic gradient descent algorithms and wide flat minima [6.485776570966397]
We show analytically that there exist Bayes optimal pointwise estimators which correspond to minimizers belonging to wide flat regions.
We extend the analysis to the deep learning scenario through extensive numerical validation.
An easy-to-compute flatness measure shows a clear correlation with test accuracy.
arXiv Detail & Related papers (2020-06-14T13:22:19Z)
- Multi-View Optimization of Local Feature Geometry [70.18863787469805]
We address the problem of refining the geometry of local image features from multiple views without known scene or camera geometry.
Our proposed method naturally complements the traditional feature extraction and matching paradigm.
We show that our method consistently improves the triangulation and camera localization performance for both hand-crafted and learned local features.
arXiv Detail & Related papers (2020-03-18T17:22:11Z)
- Robust Learning Rate Selection for Stochastic Optimization via Splitting Diagnostic [5.395127324484869]
SplitSGD is a new dynamic learning rate schedule for stochastic optimization.
The method decreases the learning rate for better adaptation to the local geometry of the objective function.
It incurs essentially no additional computational cost compared to standard SGD.
arXiv Detail & Related papers (2019-10-18T19:38:53Z)