Quantifying Epistemic Uncertainty in Absolute Pose Regression
- URL: http://arxiv.org/abs/2504.07260v1
- Date: Wed, 09 Apr 2025 20:32:45 GMT
- Title: Quantifying Epistemic Uncertainty in Absolute Pose Regression
- Authors: Fereidoon Zangeneh, Amit Dekel, Alessandro Pieropan, Patric Jensfelt
- Abstract summary: Absolute pose regression offers a solution to estimating the camera pose given an image it views. While an attractive solution in terms of memory and compute efficiency, absolute pose regression's predictions are inaccurate and unreliable outside the training domain. We propose a novel method for quantifying the uncertainty of an absolute pose regression model by estimating the likelihood of observations within a variational framework.
- Score: 47.755553816864804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual relocalization is the task of estimating the camera pose given an image it views. Absolute pose regression offers a solution to this task by training a neural network, directly regressing the camera pose from image features. While an attractive solution in terms of memory and compute efficiency, absolute pose regression's predictions are inaccurate and unreliable outside the training domain. In this work, we propose a novel method for quantifying the epistemic uncertainty of an absolute pose regression model by estimating the likelihood of observations within a variational framework. Beyond providing a measure of confidence in predictions, our approach offers a unified model that also handles observation ambiguities, probabilistically localizing the camera in the presence of repetitive structures. Our method outperforms existing approaches in capturing the relation between uncertainty and prediction error.
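To make the idea concrete, below is a minimal PyTorch sketch of one way such a variational likelihood estimate could look: an encoder maps pre-extracted image features to a latent distribution, a decoder reconstructs the features, a linear head regresses the pose, and the negative evidence lower bound (ELBO) of a test observation serves as a proxy for how well the model explains it, i.e. an epistemic-uncertainty score. The architecture, dimensions, and names are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch: variational likelihood as an epistemic-uncertainty proxy
# for absolute pose regression. Architecture and dimensions are assumptions.
import torch
import torch.nn as nn

class VariationalPoseRegressor(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)        # q(z|x) mean
        self.logvar = nn.Linear(256, latent_dim)    # q(z|x) log-variance
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, feat_dim))  # p(x|z) mean
        self.pose = nn.Linear(latent_dim, 7)        # 3D translation + quaternion

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), self.pose(z), mu, logvar

def neg_elbo(model, x):
    """Negative evidence lower bound; a higher value means the observation is
    less well explained by the model, i.e. higher epistemic uncertainty."""
    x_rec, _, mu, logvar = model(x)
    rec = ((x - x_rec) ** 2).sum(dim=-1)                        # Gaussian recon term
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=-1)
    return rec + kl

model = VariationalPoseRegressor()
feats = torch.randn(4, 512)           # stand-in for image features
uncertainty = neg_elbo(model, feats)  # one score per image
print(uncertainty)
```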
Related papers
- Conditional Variational Autoencoders for Probabilistic Pose Regression [45.563533339332615]
We propose a probabilistic method to predict the posterior distribution of camera poses given an observed image.
Our proposed training strategy results in a generative model of camera poses given an image, which can be used to draw samples from the pose posterior distribution (see the sketch after this entry).
arXiv Detail & Related papers (2024-10-07T12:43:50Z) - Cameras as Rays: Pose Estimation via Ray Diffusion [54.098613859015856]
- Cameras as Rays: Pose Estimation via Ray Diffusion [54.098613859015856]
Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views.
We propose a distributed representation of camera pose that treats a camera as a bundle of rays.
Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D.
arXiv Detail & Related papers (2024-02-22T18:59:56Z) - On the Quantification of Image Reconstruction Uncertainty without
Training Data [5.057039869893053]
We propose a deep variational framework that leverages a deep generative model to learn an approximate posterior distribution.
We parameterize the target posterior using a flow-based model and minimize the Kullback-Leibler (KL) divergence between the two to achieve accurate uncertainty estimation (see the sketch after this entry).
Our results indicate that our method provides reliable and high-quality image reconstruction with robust uncertainty estimation.
arXiv Detail & Related papers (2023-11-16T07:46:47Z) - Image-to-Image Regression with Distribution-Free Uncertainty
- Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging [88.20869695803631]
We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
arXiv Detail & Related papers (2022-02-10T18:59:56Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution (see the sketch after this entry).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - MDN-VO: Estimating Visual Odometry with Confidence [34.8860186009308]
- MDN-VO: Estimating Visual Odometry with Confidence [34.8860186009308]
Visual Odometry (VO) is used in many applications including robotics and autonomous systems.
We propose a deep learning-based VO model to estimate 6-DoF poses, as well as a confidence model for these estimates.
Our experiments show that the proposed model exceeds state-of-the-art performance in addition to detecting failure cases.
arXiv Detail & Related papers (2021-12-23T19:26:04Z) - Aleatoric uncertainty for Errors-in-Variables models in deep regression [0.48733623015338234]
We show how the concept of Errors-in-Variables can be used in Bayesian deep regression.
We discuss the approach along various simulated and real examples.
arXiv Detail & Related papers (2021-05-19T12:37:02Z) - Confidence Adaptive Anytime Pixel-Level Recognition [86.75784498879354]
Anytime inference requires a model to make a progression of predictions which might be halted at any time.
We propose the first unified and end-to-end model approach for anytime pixel-level recognition.
arXiv Detail & Related papers (2021-04-01T20:01:57Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution that seeks out regions of feature space where the model is unjustifiably overconfident and conditionally raises the entropy of those predictions towards that of the prior distribution of the labels (see the sketch after this entry).
arXiv Detail & Related papers (2021-02-22T07:02:37Z)