P1AC: Revisiting Absolute Pose From a Single Affine Correspondence
- URL: http://arxiv.org/abs/2011.08790v6
- Date: Sat, 29 Jun 2024 14:33:38 GMT
- Title: P1AC: Revisiting Absolute Pose From a Single Affine Correspondence
- Authors: Jonathan Ventura, Zuzana Kukelova, Torsten Sattler, Dániel Baráth
- Abstract summary: We introduce the first general solution to the problem of estimating the pose of a calibrated camera given a single observation of an oriented point and an affine correspondence.
The advantage of our approach is that it requires only a single correspondence.
We show that P1AC achieves more accurate results than the widely used P3P algorithm.
- Score: 38.350811942642565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Affine correspondences have traditionally been used to improve feature matching over wide baselines. While recent work has successfully used affine correspondences to solve various relative camera pose estimation problems, less attention has been given to their use in absolute pose estimation. We introduce the first general solution to the problem of estimating the pose of a calibrated camera given a single observation of an oriented point and an affine correspondence. The advantage of our approach (P1AC) is that it requires only a single correspondence, in comparison to the traditional point-based approach (P3P), significantly reducing the combinatorics in robust estimation. P1AC provides a general solution that removes restrictive assumptions made in prior work and is applicable to large-scale image-based localization. We propose a minimal solution to the P1AC problem and evaluate our novel solver on synthetic data, showing its numerical stability and performance under various types of noise. On standard image-based localization benchmarks we show that P1AC achieves more accurate results than the widely used P3P algorithm. Code for our method is available at https://github.com/jonathanventura/P1AC/ .
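To make the claim about reduced combinatorics concrete: with the standard RANSAC stopping criterion, the number of iterations needed to hit an all-inlier minimal sample grows quickly with the minimal sample size, so a one-correspondence solver needs far fewer draws than a three-correspondence one at the same inlier ratio. The sketch below only illustrates that calculation; the inlier ratios and confidence level are made up and this is not code from the linked repository.

```python
import math

def ransac_iterations(inlier_ratio: float, sample_size: int, confidence: float = 0.99) -> int:
    """Iterations needed to draw at least one all-inlier minimal sample
    with the given confidence (standard RANSAC stopping criterion)."""
    p_good_sample = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_good_sample))

# Illustrative inlier ratios (not taken from the paper).
for w in (0.5, 0.2):
    n_1ac = ransac_iterations(w, sample_size=1)  # single affine correspondence (P1AC-style)
    n_3pt = ransac_iterations(w, sample_size=3)  # three point correspondences (P3P-style)
    print(f"inlier ratio {w:.1f}: sample size 1 -> {n_1ac} iterations, sample size 3 -> {n_3pt}")
```

At an inlier ratio of 0.2, for example, the sample-size-1 case needs roughly 21 iterations versus several hundred for sample size 3, which is the combinatorial saving the abstract refers to.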
Related papers
- UNOPose: Unseen Object Pose Estimation with an Unposed RGB-D Reference Image [86.7128543480229]
We present a novel approach and benchmark, termed UNOPose, for unseen one-reference-based object pose estimation.
Building upon a coarse-to-fine paradigm, UNOPose constructs an SE(3)-invariant reference frame to standardize object representation.
We recalibrate the weight of each correspondence based on its predicted likelihood of being within the overlapping region.
arXiv Detail & Related papers (2024-11-25T05:36:00Z)
- Quantity-Aware Coarse-to-Fine Correspondence for Image-to-Point Cloud Registration [4.954184310509112]
Image-to-point cloud registration aims to determine the relative camera pose between an RGB image and a reference point cloud.
Matching individual points with pixels can be inherently ambiguous due to modality gaps.
We propose a framework to capture quantity-aware correspondences between local point sets and pixel patches.
arXiv Detail & Related papers (2023-07-14T03:55:54Z)
- Poisson-Gaussian Holographic Phase Retrieval with Score-based Image Prior [19.231581775644617]
We propose a new algorithm called "AWFS" that uses the accelerated Wirtinger flow (AWF) with a score function as a generative prior.
We calculate the gradient of the log-likelihood function for phase retrieval (PR) and determine its Lipschitz constant.
We provide theoretical analysis that establishes a critical-point convergence guarantee for the proposed algorithm.
arXiv Detail & Related papers (2023-05-12T18:08:47Z)
- PoseMatcher: One-shot 6D Object Pose Estimation by Deep Feature Matching [51.142988196855484]
We propose PoseMatcher, an accurate, model-free, one-shot object pose estimator.
We create a new training pipeline for object to image matching based on a three-view system.
To enable PoseMatcher to attend to distinct input modalities, an image and a pointcloud, we introduce IO-Layer.
arXiv Detail & Related papers (2023-04-03T21:14:59Z)
- On Relative Pose Recovery for Multi-Camera Systems [7.494426244735998]
We propose a complete solution to relative pose estimation from two ACs for multi-camera systems.
Solver generation is based on the Cayley or quaternion parameterization for rotation and the hidden-variable technique to eliminate translation.
The proposed AC-based solvers and PC-based solvers are effective and efficient on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-24T00:39:57Z)
- Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints [80.60538408386016]
Estimating relative camera poses from consecutive frames is a fundamental problem in visual odometry.
We propose an end-to-end trainable framework consisting of learnable modules for detection, feature extraction, matching and outlier rejection.
arXiv Detail & Related papers (2020-07-29T21:41:31Z)
- Minimal Cases for Computing the Generalized Relative Pose using Affine Correspondences [41.35179046936236]
We propose three novel solvers for estimating the relative pose of a multi-camera system from affine correspondences (ACs).
It is shown that the estimated poses are more accurate than those obtained with state-of-the-art techniques.
arXiv Detail & Related papers (2020-07-21T10:34:45Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features such as affine-covariant detectors provide region-to-region rather than point-to-point correspondences (a minimal sketch of such a correspondence appears after this list).
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the resulting hybrid plug-and-play image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
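Several of the entries above, like the main paper, build on affine correspondences. As a point of reference (and not an implementation from any of the listed papers), an affine correspondence carries a matched point pair plus a 2x2 matrix that locally approximates the image-to-image mapping around the match; the class and values below are purely illustrative.

```python
import numpy as np

class AffineCorrespondence:
    """Minimal container: a point match plus the 2x2 local affine transform
    that maps small offsets around x1 (image 1) to offsets around x2 (image 2)."""
    def __init__(self, x1, x2, A):
        self.x1 = np.asarray(x1, dtype=float)  # point in image 1 (pixels)
        self.x2 = np.asarray(x2, dtype=float)  # matching point in image 2 (pixels)
        self.A = np.asarray(A, dtype=float)    # 2x2 local affine transformation

    def map_offset(self, dx):
        """Predict where a small offset around x1 lands relative to x2."""
        return self.x2 + self.A @ np.asarray(dx, dtype=float)

# Made-up example: a local patch that is slightly rotated and scaled.
ac = AffineCorrespondence(x1=[100.0, 50.0], x2=[240.0, 60.0], A=[[0.9, -0.2], [0.2, 0.9]])
print(ac.map_offset([1.0, 0.0]))  # ~[240.9, 60.2]
```

This is why such a match is called region-to-region: beyond the point pair, the matrix A constrains how the local neighborhood deforms, which is the extra information the AC-based solvers listed above exploit.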