Can You Trust Your Pose? Confidence Estimation in Visual Localization
- URL: http://arxiv.org/abs/2010.00347v1
- Date: Thu, 1 Oct 2020 12:25:48 GMT
- Title: Can You Trust Your Pose? Confidence Estimation in Visual Localization
- Authors: Luca Ferranti, Xiaotian Li, Jani Boutellier, Juho Kannala
- Abstract summary: We aim at quantifying how reliable the visually estimated pose is.
We also show that the proposed techniques can be used to accomplish a secondary goal: improving the accuracy of existing pose estimation pipelines.
The proposed approach is computationally lightweight and adds only a negligible increase to the computational effort of pose estimation.
- Score: 17.23405466562484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Camera pose estimation in large-scale environments is still an open question
and, despite recent promising results, it may still fail in some situations.
The research so far has focused on improving subcomponents of estimation
pipelines, to achieve more accurate poses. However, there is no guarantee for
the result to be correct, even though the correctness of pose estimation is
critically important in several visual localization applications, such as
autonomous navigation. In this paper we draw attention to a novel research
question, pose confidence estimation, where we aim at quantifying how reliable
the visually estimated pose is. We develop a novel confidence measure to fulfil
this task and show that it can be flexibly applied to different datasets, indoor
or outdoor, and to various visual localization pipelines. We also show that the
proposed techniques can be used to accomplish a secondary goal: improving the
accuracy of existing pose estimation pipelines. Finally, the proposed approach
is computationally lightweight and adds only a negligible increase to the
computational effort of pose estimation.
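For readers who want a concrete picture of what a pose-confidence signal can look like, the sketch below scores a PnP-RANSAC pose by its inlier ratio and mean reprojection error. This is only a minimal, hypothetical baseline under assumptions of this sketch (the function name, the threshold, and the scoring formula are illustrative), not the confidence measure proposed in the paper.

```python
# Illustrative sketch only: a simple pose-confidence heuristic built on
# PnP-RANSAC statistics. NOT the paper's proposed measure.
import cv2
import numpy as np

def pose_with_confidence(pts3d, pts2d, K, dist=None, reproj_thresh=8.0):
    """Estimate a camera pose with PnP-RANSAC and attach a heuristic confidence in [0, 1]."""
    pts3d = np.ascontiguousarray(pts3d, dtype=np.float64).reshape(-1, 3)
    pts2d = np.ascontiguousarray(pts2d, dtype=np.float64).reshape(-1, 2)
    dist = np.zeros(5) if dist is None else dist

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, dist, reprojectionError=reproj_thresh)
    if not ok or inliers is None or len(inliers) < 4:
        return None, 0.0  # no usable pose, zero confidence

    # Reproject the inlier 3D points and measure the residual error.
    idx = inliers.ravel()
    proj, _ = cv2.projectPoints(pts3d[idx], rvec, tvec, K, dist)
    residuals = np.linalg.norm(proj.reshape(-1, 2) - pts2d[idx], axis=1)

    # Combine inlier ratio and residual quality into a single heuristic score.
    inlier_ratio = len(idx) / len(pts3d)
    residual_score = np.exp(-residuals.mean() / reproj_thresh)
    return (rvec, tvec), float(inlier_ratio * residual_score)
```

A score of this kind could also serve the secondary goal mentioned in the abstract: when several candidate poses are available (for example, from different retrieved database images), keeping the highest-confidence one is one simple way to improve accuracy.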
Related papers
- End-to-End Probabilistic Geometry-Guided Regression for 6DoF Object Pose Estimation [5.21401636701889]
State-of-the-art 6D object pose estimators directly predict an object pose given an object observation.
We reformulate the state-of-the-art algorithm GDRNPP and introduce EPRO-GDR.
Our solution shows that predicting a pose distribution instead of a single pose can improve state-of-the-art single-view pose estimation.
arXiv Detail & Related papers (2024-09-18T09:11:31Z) - DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses [59.51874686414509]
Current approaches approximate the continuous pose representation with a large number of discrete pose hypotheses.
We present a Deep Voxel Matching Network (DVMNet) that eliminates the need for pose hypotheses and computes the relative object pose in a single pass.
Our method delivers more accurate relative pose estimates for novel objects at a lower computational cost compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-03-20T15:41:32Z) - PoseMatcher: One-shot 6D Object Pose Estimation by Deep Feature Matching [51.142988196855484]
We propose PoseMatcher, an accurate, model-free, one-shot object pose estimator.
We create a new training pipeline for object-to-image matching based on a three-view system.
To enable PoseMatcher to attend to distinct input modalities, an image and a point cloud, we introduce IO-Layer.
arXiv Detail & Related papers (2023-04-03T21:14:59Z) - VL4Pose: Active Learning Through Out-Of-Distribution Detection For Pose
Estimation [79.50280069412847]
We introduce VL4Pose, a first principles approach for active learning through out-of-distribution detection.
Our solution involves modelling the pose through a simple parametric Bayesian network trained via maximum likelihood estimation.
We perform qualitative and quantitative experiments on three datasets: MPII, LSP and ICVL, spanning human and hand pose estimation.
arXiv Detail & Related papers (2022-10-12T09:03:55Z) - Ki-Pode: Keypoint-based Implicit Pose Distribution Estimation of Rigid
Objects [1.209625228546081]
We propose a novel pose distribution estimation method.
An implicit formulation of the probability distribution over object pose is derived from an intermediary representation of an object as a set of keypoints.
The method has been evaluated on the task of rotation distribution estimation on the YCB-V and T-LESS datasets.
arXiv Detail & Related papers (2022-09-20T11:59:05Z) - Visual-based Positioning and Pose Estimation [0.0]
Recent advances in deep learning and computer vision offer an excellent opportunity to investigate high-level visual analysis tasks.
Human localization and human pose estimation have improved significantly in recent work, but they are not perfect, and erroneous localization and pose estimation can be expected across video frames.
We explored and developed two working pipelines that suited the visual-based positioning and pose estimation tasks.
arXiv Detail & Related papers (2022-04-20T05:30:34Z) - SporeAgent: Reinforced Scene-level Plausibility for Object Pose
Refinement [28.244027792644097]
While depth- and RGB-based pose refinement approaches increase the accuracy of the resulting pose estimates, they are susceptible to ambiguity because they rely on visual alignment alone.
We show that considering plausibility reduces ambiguity and, in consequence, allows poses to be more accurately predicted in cluttered environments.
Experiments on the LINEMOD and YCB-VIDEO datasets demonstrate the state-of-the-art performance of our depth-based refinement approach.
arXiv Detail & Related papers (2022-01-01T20:26:19Z) - PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z) - Learning Dynamics via Graph Neural Networks for Human Pose Estimation
and Tracking [98.91894395941766]
We propose a novel online approach to learning the pose dynamics, which are independent of pose detections in the current frame.
Specifically, we derive this prediction of dynamics through a graph neural network (GNN) that explicitly accounts for both spatial-temporal and visual information.
Experiments on PoseTrack 2017 and PoseTrack 2018 datasets demonstrate that the proposed method achieves results superior to the state of the art on both human pose estimation and tracking tasks.
arXiv Detail & Related papers (2021-06-07T16:36:50Z) - Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-01-05T18:54:11Z) - A New Distributional Ranking Loss With Uncertainty: Illustrated in
Relative Depth Estimation [0.0]
We propose a new approach for the problem of relative depth estimation from a single image.
Instead of directly regressing over depth scores, we formulate the problem as estimation of a probability distribution over depth.
To train our model, we propose a new ranking loss, the Distributional Loss, which increases the probability that the farther pixel's depth is greater than the closer pixel's depth (a minimal illustrative sketch of such a pairwise loss appears after this list).
arXiv Detail & Related papers (2020-10-14T13:47:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.