A Consistency-Based Loss for Deep Odometry Through Uncertainty
Propagation
- URL: http://arxiv.org/abs/2107.00366v1
- Date: Thu, 1 Jul 2021 11:09:20 GMT
- Title: A Consistency-Based Loss for Deep Odometry Through Uncertainty
Propagation
- Authors: Hamed Damirchi, Rooholla Khorrambakht, Hamid D. Taghirad, and Behzad
Moshiri
- Abstract summary: The uncertainty over each output can be derived to weigh the different loss terms in a maximum likelihood setting.
In this paper, we associate uncertainties with the output poses of a deep odometry network and propagate the uncertainties through each iteration.
We provide quantitative and qualitative analysis of pose estimates and show that our method surpasses the accuracy of the state-of-the-art Visual Odometry approaches.
- Score: 0.3359875577705538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The incremental poses computed through odometry can be integrated over time
to calculate the pose of a device with respect to an initial location. The
resulting global pose may be used to formulate a second, consistency-based
loss term in a deep odometry setting. In such cases, where multiple losses are
imposed on a network, the uncertainty over each output can be derived to weigh
the different loss terms in a maximum likelihood setting. However, when
imposing a constraint on the integrated transformation, because only the
incremental odometry is estimated at each iteration of the algorithm, there is
no information about the uncertainty associated with the global pose with which
to weigh the global loss term. In this paper, we associate uncertainties with the output
poses of a deep odometry network and propagate the uncertainties through each
iteration. Our goal is to use the estimated covariance matrix at each
incremental step to weigh the loss at the corresponding step while weighting
the global loss term using the compounded uncertainty. This formulation
provides an adaptive method to weigh the incremental and integrated loss terms
against each other, accounting for the increase in uncertainty as new estimates arrive.
We provide quantitative and qualitative analysis of pose estimates and show
that our method surpasses the accuracy of the state-of-the-art Visual Odometry
approaches. Then, uncertainty estimates are evaluated and comparisons against
fixed baselines are provided. Finally, the uncertainty values are used in a
realistic example to show the effectiveness of uncertainty quantification for
localization.
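The weighting scheme described in the abstract can be made concrete with a small example. The snippet below is a minimal, simplified sketch in SE(2) rather than the paper's full SE(3) formulation: it propagates a pose covariance through pose composition with first-order Jacobians and weighs the incremental and integrated residuals by their inverse covariances in a negative log-likelihood loss. All function names and the toy roll-out are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative assumptions, not the authors' code): first-order
# covariance propagation through SE(2) pose composition, and maximum-likelihood
# weighting of the incremental and integrated (global) loss terms.
import numpy as np

def compose_se2(pose, delta):
    """Compose a global SE(2) pose (x, y, theta) with an incremental motion."""
    x, y, th = pose
    dx, dy, dth = delta
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * dx - s * dy, y + s * dx + c * dy, th + dth])

def propagate_cov(pose, delta, cov_pose, cov_delta):
    """First-order propagation: Sigma' = J_p Sigma_p J_p^T + J_d Sigma_d J_d^T."""
    _, _, th = pose
    dx, dy, _ = delta
    c, s = np.cos(th), np.sin(th)
    J_pose = np.array([[1.0, 0.0, -s * dx - c * dy],
                       [0.0, 1.0,  c * dx - s * dy],
                       [0.0, 0.0,  1.0]])
    J_delta = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
    return J_pose @ cov_pose @ J_pose.T + J_delta @ cov_delta @ J_delta.T

def nll_weighted_loss(residual, cov):
    """Mahalanobis term plus log-determinant regularizer: a large covariance
    automatically down-weights the corresponding residual."""
    return float(residual @ np.linalg.inv(cov) @ residual
                 + np.log(np.linalg.det(cov)))

# Toy roll-out: the compounded covariance grows with every step, so the global
# (integrated) loss term is progressively down-weighted relative to the fresh
# incremental terms. Angle wrap-around is ignored for this toy example.
rng = np.random.default_rng(0)
gt_delta = np.array([1.0, 0.0, 0.1])
pose_pred, pose_gt, cov = np.zeros(3), np.zeros(3), 1e-6 * np.eye(3)
for _ in range(5):
    pred_delta = gt_delta + rng.normal(scale=0.05, size=3)   # network output
    cov_delta = 0.01 * np.eye(3)                             # predicted covariance
    incr_loss = nll_weighted_loss(pred_delta - gt_delta, cov_delta)
    cov = propagate_cov(pose_pred, pred_delta, cov, cov_delta)
    pose_pred = compose_se2(pose_pred, pred_delta)
    pose_gt = compose_se2(pose_gt, gt_delta)
global_loss = nll_weighted_loss(pose_pred - pose_gt, cov)
print(incr_loss, global_loss)
```

In the paper, the same weighting is applied to the poses predicted by the deep odometry network, with the compounded covariance weighting the consistency-based global loss term.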
Related papers
- Understanding Uncertainty Sampling [7.32527270949303]
Uncertainty sampling is a prevalent active learning algorithm that sequentially queries the annotations of data samples.
We propose a notion of equivalent loss which depends on the uncertainty measure used and on the original loss function.
We provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings.
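For context on the query rule itself, the following is a minimal sketch of standard least-confidence uncertainty sampling; it is an illustrative assumption and does not implement the equivalent-loss analysis or the generalization bounds of the cited paper.

```python
# Minimal least-confidence uncertainty sampling sketch (illustrative assumption).
import numpy as np

def least_confidence_query(probs: np.ndarray, batch_size: int = 1) -> np.ndarray:
    """Return indices of the pool samples the model is least confident about.

    probs: (n_samples, n_classes) predicted class probabilities for the unlabeled pool.
    """
    confidence = probs.max(axis=1)              # probability of the top class
    return np.argsort(confidence)[:batch_size]  # lowest confidence first

# Example: query the 2 most uncertain samples from a pool of 4.
pool_probs = np.array([[0.90, 0.10],
                       [0.55, 0.45],
                       [0.70, 0.30],
                       [0.52, 0.48]])
print(least_confidence_query(pool_probs, batch_size=2))  # -> [3 1]
```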
arXiv Detail & Related papers (2023-07-06T01:57:37Z)
- Conformal Prediction with Missing Values [19.18178194789968]
We first show that the marginal coverage guarantee of conformal prediction holds on imputed data for any missingness distribution.
We then show that a universally consistent quantile regression algorithm trained on the imputed data is Bayes optimal for the pinball risk.
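As a point of reference, the sketch below shows a plain split conformal interval computed after mean imputation; the pipeline, the function name conformal_interval, and the use of scikit-learn are illustrative assumptions, not the cited paper's procedure.

```python
# Illustrative sketch: split conformal prediction after mean imputation.
import numpy as np
from sklearn.linear_model import LinearRegression

def conformal_interval(X_train, y_train, X_cal, y_cal, x_new, alpha=0.1):
    # Impute missing features (NaNs) with the training-set column means.
    col_mean = np.nanmean(X_train, axis=0)
    impute = lambda X: np.where(np.isnan(X), col_mean, X)
    model = LinearRegression().fit(impute(X_train), y_train)
    # Calibration scores: absolute residuals on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(impute(X_cal)))
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    pred = model.predict(impute(x_new[None, :]))[0]
    return pred - q, pred + q  # marginal 1 - alpha coverage under exchangeability
```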
arXiv Detail & Related papers (2023-06-05T09:28:03Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
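A generic tabular form of such a recursion can be sketched as below; the update u(s) = local_var(s) + gamma^2 * E[u(s')] is a common simplified variant stated here as an assumption, not the specific equation proposed in the cited paper.

```python
# Illustrative sketch: fixed-point iteration for a simple uncertainty Bellman recursion.
import numpy as np

def solve_uncertainty_bellman(P, local_var, gamma=0.9, iters=1000, tol=1e-8):
    """P: (S, S) transition matrix under a fixed policy; local_var: (S,) one-step
    variance of the value estimate at each state."""
    u = np.zeros_like(local_var, dtype=float)
    for _ in range(iters):
        u_new = local_var + (gamma ** 2) * P @ u
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# Two-state toy chain: value uncertainty accumulates more where local variance is larger.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(solve_uncertainty_bellman(P, local_var=np.array([0.5, 0.1])))
```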
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Loss Minimization through the Lens of Outcome Indistinguishability [11.709566373491619]
We present a new perspective on convex loss minimization and the recent notion of Omniprediction.
By design, Loss OI implies omniprediction in a direct and intuitive manner.
We show that Loss OI can be achieved for the important set of losses arising from Generalized Linear Models, without requiring full multicalibration.
arXiv Detail & Related papers (2022-10-16T22:25:27Z)
- Robust Depth Completion with Uncertainty-Driven Loss Functions [60.9237639890582]
We introduce uncertainty-driven loss functions to improve the robustness of depth completion and to handle its inherent uncertainty.
Our method has been tested on KITTI Depth Completion Benchmark and achieved the state-of-the-art robustness performance in terms of MAE, IMAE, and IRMSE metrics.
arXiv Detail & Related papers (2021-12-15T05:22:34Z)
- Gradient-Based Quantification of Epistemic Uncertainty for Deep Object Detectors [8.029049649310213]
We introduce novel gradient-based uncertainty metrics and investigate them for different object detection architectures.
Experiments show significant improvements in true positive / false positive discrimination and prediction of intersection over union.
We also find improvement over Monte-Carlo dropout uncertainty metrics and further significant boosts by aggregating different sources of uncertainty metrics.
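As a rough illustration of gradient-based uncertainty scoring on a plain classifier (not the detector-specific metrics of the cited paper), one can use the gradient norm of a self-referential loss; the helper below is an assumed example.

```python
# Illustrative sketch: gradient-norm uncertainty score for a classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_norm_score(model: nn.Module, x: torch.Tensor) -> float:
    """Use the model's own prediction as a pseudo-label and take the gradient norm
    of the resulting loss as a proxy for epistemic uncertainty."""
    model.zero_grad()
    logits = model(x)                       # (1, num_classes)
    pseudo_label = logits.argmax(dim=1)     # self-referential label
    loss = F.cross_entropy(logits, pseudo_label)
    loss.backward()
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)

# Example with a toy classifier on a random input.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
print(gradient_norm_score(model, torch.randn(1, 16)))
```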
arXiv Detail & Related papers (2021-07-09T16:04:11Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Localization Uncertainty Estimation for Anchor-Free Object Detection [48.931731695431374]
There are several limitations of the existing uncertainty estimation methods for anchor-based object detection.
We propose a new localization uncertainty estimation method called UAD for anchor-free object detection.
Our method captures the uncertainty of the four directional box offsets in a homogeneous way, so that it can indicate which direction is uncertain.
arXiv Detail & Related papers (2020-06-28T13:49:30Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning settings such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.