Towards Explaining Uncertainty Estimates in Point Cloud Registration
- URL: http://arxiv.org/abs/2412.20612v1
- Date: Sun, 29 Dec 2024 23:03:44 GMT
- Title: Towards Explaining Uncertainty Estimates in Point Cloud Registration
- Authors: Ziyuan Qin, Jongseok Lee, Rudolph Triebel
- Abstract summary: Iterative Closest Point (ICP) is a commonly used algorithm to estimate the transformation between two point clouds.
We propose a method that can explain why a probabilistic ICP method produced a particular output.
- Score: 30.547037499332742
- License:
- Abstract: Iterative Closest Point (ICP) is a commonly used algorithm to estimate the transformation between two point clouds. The key idea of this work is to leverage recent advances in explainable AI for probabilistic ICP methods that provide uncertainty estimates. Concretely, we propose a method that can explain why a probabilistic ICP method produced a particular output. Our method is based on kernel SHAP (SHapley Additive exPlanations). With this, we assign an importance value to common sources of uncertainty in ICP, such as sensor noise, occlusion, and ambiguous environments. The experimental results show that this explanation method can reasonably explain the uncertainty sources, providing a step towards robots that know when and why they failed in a human-interpretable manner.
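The Shapley-value idea behind the abstract's kernel SHAP approach can be illustrated with a toy sketch: exact Shapley values computed over three binary "uncertainty source" features. The source names, the toy uncertainty function, and all its coefficients are illustrative assumptions, not the paper's actual features or model.

```python
from itertools import combinations
from math import factorial

# Hypothetical uncertainty sources (illustrative names only).
SOURCES = ["sensor_noise", "occlusion", "ambiguity"]

def uncertainty(active):
    """Toy stand-in for a probabilistic ICP uncertainty score,
    given the set of active uncertainty sources."""
    score = 0.1  # baseline uncertainty
    if "sensor_noise" in active:
        score += 0.2
    if "occlusion" in active:
        score += 0.5
    if "ambiguity" in active:
        score += 0.15
        if "occlusion" in active:
            score += 0.05  # interaction: occlusion worsens ambiguity
    return score

def shapley_values(players, value_fn):
    """Exact Shapley values: each player's weighted average
    marginal contribution over all coalitions of the others."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for subset in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                s = set(subset)
                phi[p] += w * (value_fn(s | {p}) - value_fn(s))
    return phi

phi = shapley_values(SOURCES, uncertainty)
# By efficiency, the attributions sum to
# uncertainty(all sources) - uncertainty(no sources).
```

With only three features the exact computation is cheap; kernel SHAP approximates the same quantities via a weighted linear regression when enumerating all coalitions is infeasible.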
Related papers
- Uncertainty Quantification in Stereo Matching [61.73532883992135]
We propose a new framework for stereo matching and its uncertainty quantification.
We adopt Bayes risk as a measure of uncertainty and estimate data and model uncertainty separately.
We apply our uncertainty method to improve prediction accuracy by selecting data points with small uncertainties.
arXiv Detail & Related papers (2024-12-24T23:28:20Z) - Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z) - Explaining Predictive Uncertainty by Exposing Second-Order Effects [13.83164409095901]
We present a new method for explaining predictive uncertainty based on second-order effects.
Our method is generally applicable, allowing for turning common attribution techniques into powerful second-order uncertainty explainers.
arXiv Detail & Related papers (2024-01-30T21:02:21Z) - Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
arXiv Detail & Related papers (2023-12-25T20:02:51Z) - One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z) - Safe motion planning with environment uncertainty [1.4824891788575418]
We present an approach for safe motion planning under robot state and environment uncertainties.
We first develop an approach that accounts for the landmark uncertainties during robot localization.
We then extend the state-of-the-art by computing an exact expression for the collision probability.
arXiv Detail & Related papers (2023-05-10T09:29:41Z) - Deep Bayesian ICP Covariance Estimation [3.5136071950790737]
The Iterative Closest Point (ICP) point cloud registration algorithm is essential for state estimation and sensor fusion purposes.
We argue that a major source of error for ICP is in the input data itself, from the sensor noise to the scene geometry.
Benefiting from recent developments in deep learning for point clouds, we propose a data-driven approach to learn an error model for ICP.
arXiv Detail & Related papers (2022-02-23T16:42:04Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Stein ICP for Uncertainty Estimation in Point Cloud Matching [41.22194677919566]
Quantification of uncertainty in point cloud matching is critical in many tasks such as pose estimation, sensor fusion, and grasping.
Iterative closest point (ICP) is a commonly used pose estimation algorithm which provides a point estimate of the transformation between two point clouds.
We propose a new algorithm to align two point clouds that can precisely estimate the uncertainty of ICP's transformation parameters.
arXiv Detail & Related papers (2021-06-07T01:07:34Z) - Localization Uncertainty Estimation for Anchor-Free Object Detection [48.931731695431374]
There are several limitations of the existing uncertainty estimation methods for anchor-based object detection.
We propose a new localization uncertainty estimation method called UAD for anchor-free object detection.
Our method captures the uncertainty in four directions of box offsets that are homogeneous, so that it can tell which direction is uncertain.
arXiv Detail & Related papers (2020-06-28T13:49:30Z) - Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold.
arXiv Detail & Related papers (2020-06-11T21:53:15Z)
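Several entries above revolve around ICP producing only a point estimate of the transformation, with no uncertainty attached. A minimal point-to-point ICP loop (nearest-neighbour correspondences plus an SVD-based rigid alignment) can be sketched as follows; this is a simplified illustration, not any of the listed papers' implementations.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=20):
    """Point-to-point ICP: returns only a point estimate (R, t)
    of the transformation, with no uncertainty over it."""
    src = source.copy()
    dim = src.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force).
        d = np.linalg.norm(src[:, None] - target[None], axis=2)
        matched = target[d.argmin(axis=1)]
        # Best rigid alignment for the current correspondences.
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        # Accumulate the incremental transform.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The probabilistic variants surveyed above (e.g. Stein ICP, Bayesian covariance estimation) replace this single `(R, t)` output with a distribution over transformations, which is what the main paper's explanation method then attributes to uncertainty sources.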
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.