Uncertainty-Aware Camera Pose Estimation from Points and Lines
- URL: http://arxiv.org/abs/2107.03890v1
- Date: Thu, 8 Jul 2021 15:19:36 GMT
- Title: Uncertainty-Aware Camera Pose Estimation from Points and Lines
- Authors: Alexander Vakhitov, Luis Ferraz Colomina, Antonio Agudo, Francesc
Moreno-Noguer
- Abstract summary: Perspective-n-Point-and-Line (P$n$PL) aims at fast, accurate and robust camera localization with respect to a 3D model from 2D-3D feature correspondences.
- Score: 101.03675842534415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perspective-n-Point-and-Line (P$n$PL) algorithms aim at fast, accurate, and
robust camera localization with respect to a 3D model from 2D-3D feature
correspondences, being a major part of modern robotic and AR/VR systems.
Current point-based pose estimation methods use only 2D feature detection
uncertainties, and the line-based methods do not take uncertainties into
account. In our setup, both 3D coordinates and 2D projections of the features
are considered uncertain. We propose PnP(L) solvers based on EPnP and DLS for
the uncertainty-aware pose estimation. We also modify motion-only bundle
adjustment to take 3D uncertainties into account. We perform exhaustive
synthetic and real experiments on two different visual odometry datasets. The
new PnP(L) methods outperform the state-of-the-art on real data in isolation,
showing an increase in mean translation accuracy by 18% on a representative
subset of KITTI, while the new uncertain refinement improves pose accuracy for
most of the solvers, e.g. decreasing mean translation error for the EPnP by 16%
compared to the standard refinement on the same dataset. The code is available
at https://alexandervakhitov.github.io/uncertain-pnp/.
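To make the uncertainty-aware formulation concrete, below is a minimal, hypothetical Python sketch of covariance-weighted motion-only refinement. It is not the paper's EPnP/DLS-based solvers or the code at the linked repository; the function names (project, residuals, refine_pose) are illustrative assumptions. The sketch whitens each reprojection residual by a combined covariance that adds the 2D detection covariance to the 3D point covariance propagated through the projection Jacobian (a first-order approximation), which is one common way to realize a Mahalanobis-distance objective over both 2D and 3D uncertainties.

```python
# Hypothetical sketch: covariance-weighted motion-only pose refinement.
# Assumes numpy and scipy; not the paper's actual solver.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(K, R, t, X):
    """Pinhole projection of a single 3D point X into the image."""
    Xc = R @ X + t
    x = K @ Xc
    return x[:2] / x[2]


def projection_jacobian_wrt_point(K, R, t, X):
    """Numerical 2x3 Jacobian d(projection)/dX (finite differences for brevity)."""
    eps = 1e-6
    J = np.zeros((2, 3))
    base = project(K, R, t, X)
    for i in range(3):
        Xp = X.copy()
        Xp[i] += eps
        J[:, i] = (project(K, R, t, Xp) - base) / eps
    return J


def residuals(params, K, Xs, xs, Sigma_2d, Sigma_3d):
    """Whitened reprojection residuals over all 2D-3D correspondences."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for X, x_obs, S2, S3 in zip(Xs, xs, Sigma_2d, Sigma_3d):
        r = project(K, R, t, X) - x_obs
        J = projection_jacobian_wrt_point(K, R, t, X)
        # Combined 2x2 covariance: 2D detection noise + propagated 3D noise.
        S = S2 + J @ S3 @ J.T
        # Whitening: with S = L L^T, ||L^-1 r||^2 = r^T S^-1 r (Mahalanobis distance).
        L = np.linalg.cholesky(S)
        res.append(np.linalg.solve(L, r))
    return np.concatenate(res)


def refine_pose(K, Xs, xs, Sigma_2d, Sigma_3d, R0, t0):
    """Refine an initial pose (R0, t0) by covariance-weighted least squares."""
    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), t0])
    sol = least_squares(residuals, x0, args=(K, Xs, xs, Sigma_2d, Sigma_3d))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

In this sketch the pose is parameterized by an angle-axis rotation plus translation and refined with a generic least-squares solver; a production implementation would instead start from a minimal P$n$P(L) solution and use analytic Jacobians.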
Related papers
- SCIPaD: Incorporating Spatial Clues into Unsupervised Pose-Depth Joint Learning [17.99904937160487]
We introduce SCIPaD, a novel approach that incorporates spatial clues for unsupervised depth-pose joint learning.
SCIPaD achieves a reduction of 22.2% in average translation error and 34.8% in average angular error for camera pose estimation task on the KITTI Odometry dataset.
arXiv Detail & Related papers (2024-07-07T06:52:51Z)
- Geometric Transformation Uncertainty for Improving 3D Fetal Brain Pose Prediction from Freehand 2D Ultrasound Videos [0.8579241568505183]
We propose an uncertainty-aware deep learning model for automated 3D plane localization in 2D fetal brain images.
Our proposed method, QAERTS, demonstrates superior pose estimation accuracy compared to the state-of-the-art and most uncertainty-based approaches.
arXiv Detail & Related papers (2024-05-21T22:42:08Z)
- UPose3D: Uncertainty-Aware 3D Human Pose Estimation with Cross-View and Temporal Cues [55.69339788566899]
UPose3D is a novel approach for multi-view 3D human pose estimation.
It improves robustness and flexibility without requiring direct 3D annotations.
arXiv Detail & Related papers (2024-04-23T00:18:00Z)
- LInKs "Lifting Independent Keypoints" -- Partial Pose Lifting for Occlusion Handling with Improved Accuracy in 2D-3D Human Pose Estimation [4.648549457266638]
We present LInKs, a novel unsupervised learning method to recover 3D human poses from 2D kinematic skeletons.
Our approach follows a two-step process: first lifting the occluded 2D pose to the 3D domain, then completing the occluded parts there.
This lift-then-fill approach leads to significantly more accurate results compared to models that complete the pose in 2D space alone.
arXiv Detail & Related papers (2023-09-13T18:28:04Z)
- CheckerPose: Progressive Dense Keypoint Localization for Object Pose Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects.
arXiv Detail & Related papers (2023-03-29T17:30:53Z)
- Soft Expectation and Deep Maximization for Image Feature Detection [68.8204255655161]
We propose SEDM, an iterative semi-supervised learning process that flips the question and first looks for repeatable 3D points, then trains a detector to localize them in image space.
Our results show that this new model trained using SEDM is able to better localize the underlying 3D points in a scene.
arXiv Detail & Related papers (2021-04-21T00:35:32Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed and a pseudo-LiDAR point cloud representation is computed from the depth estimates; then, object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- Learning 2D-3D Correspondences To Solve The Blind Perspective-n-Point Problem [98.92148855291363]
This paper proposes a deep CNN model that simultaneously solves for both the 6-DoF absolute camera pose and the 2D-3D correspondences.
Tests on both real and simulated data have shown that our method substantially outperforms existing approaches.
arXiv Detail & Related papers (2020-03-15T04:17:30Z)