Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation
- URL: http://arxiv.org/abs/2307.09929v1
- Date: Wed, 19 Jul 2023 12:11:15 GMT
- Title: Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation
- Authors: Mochu Xiang, Jing Zhang, Nick Barnes, Yuchao Dai
- Abstract summary: The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with surprisingly simple formations and without requiring extra modules or multiple inferences, can provide uncertainty estimations with state-of-the-art reliability.
- Score: 50.920911532133154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effectively measuring and modeling the reliability of a trained model is
essential to the real-world deployment of monocular depth estimation (MDE)
models. However, the intrinsic ill-posedness and ordinal-sensitive nature of
MDE pose major challenges to the estimation of uncertainty degree of the
trained models. On the one hand, current uncertainty modeling methods tend to
increase memory consumption and are usually time-consuming. On the other
hand, measuring the uncertainty based on model accuracy can also be
problematic, where uncertainty reliability and prediction accuracy are not well
decoupled. In this paper, we propose to model the uncertainty of MDE models
from the perspective of the inherent probability distributions originating from
the depth probability volume and its extensions, and to assess it more fairly
with more comprehensive metrics. By simply introducing additional training
regularization terms, our model, with surprisingly simple formations and
without requiring extra modules or multiple inferences, can provide uncertainty
estimations with state-of-the-art reliability, and can be further improved when
combined with ensemble or sampling methods. A series of experiments demonstrate
the effectiveness of our methods.
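The core idea of reading uncertainty off a depth probability volume can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the bin discretization, the function names, and the use of normalized entropy as the uncertainty score are assumptions made for the example.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def depth_and_uncertainty(logits, depth_bins):
    """Given per-pixel logits over K discretized depth bins (H, W, K) and the
    bin centers (K,), return the expected depth and a per-pixel uncertainty
    score: the normalized entropy of the per-pixel bin distribution."""
    p = softmax(logits, axis=-1)                  # (H, W, K) depth probability volume
    depth = (p * depth_bins).sum(axis=-1)         # expected depth per pixel
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    uncertainty = entropy / np.log(p.shape[-1])   # normalize to [0, 1]
    return depth, uncertainty

# Toy example: a 2x2 "image" with 4 depth bins.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 4))
bins = np.array([1.0, 2.0, 4.0, 8.0])
depth, unc = depth_and_uncertainty(logits, bins)
```

A sharply peaked bin distribution yields uncertainty near 0, a uniform one yields 1, which is what makes the score comparable across pixels without referring to prediction accuracy.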
Related papers
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Estimating Epistemic and Aleatoric Uncertainty with a Single Model [5.871583927216653]
We introduce a new approach to ensembling: hyper-diffusion models (HyperDM).
HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles.
We validate our method on two distinct real-world tasks: x-ray computed tomography reconstruction and weather temperature forecasting.
arXiv Detail & Related papers (2024-02-05T19:39:52Z)
- MonoProb: Self-Supervised Monocular Depth Estimation with Interpretable Uncertainty [4.260312058817663]
Self-supervised monocular depth estimation methods aim to be used in critical applications such as autonomous vehicles for environment analysis.
We propose MonoProb, a new unsupervised monocular depth estimation method that returns an interpretable uncertainty.
Our experiments highlight enhancements achieved by our method on standard depth and uncertainty metrics.
arXiv Detail & Related papers (2023-11-10T15:55:14Z)
- ALUM: Adversarial Data Uncertainty Modeling from Latent Model Uncertainty Compensation [25.67258563807856]
We propose a novel method called ALUM to handle the model uncertainty and data uncertainty in a unified scheme.
Our proposed ALUM is model-agnostic and can be easily implemented into any existing deep model with little extra overhead.
arXiv Detail & Related papers (2023-03-29T17:24:12Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Diffusion Tensor Estimation with Uncertainty Calibration [6.5085381751712506]
We propose a deep learning method to estimate the diffusion tensor and compute the estimation uncertainty.
Data-dependent uncertainty is computed directly by the network and learned via loss attenuation.
We show that the estimation uncertainties computed by the new method can highlight the model's biases, detect domain shift, and reflect the strength of noise in the measurements.
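The "loss attenuation" mentioned here is commonly implemented as a heteroscedastic regression loss in which the network predicts both a value and a log-variance per output. The sketch below is a generic illustration of that technique, not this paper's exact network or loss; the function name and the L2 form are assumptions.

```python
import numpy as np

def attenuated_l2_loss(pred, log_var, target):
    """Heteroscedastic (loss-attenuated) L2 loss: residuals on samples the
    network flags as noisy (high predicted log-variance) are down-weighted,
    while the log-variance term penalizes claiming high uncertainty everywhere."""
    precision = np.exp(-log_var)
    return float(np.mean(0.5 * precision * (pred - target) ** 2 + 0.5 * log_var))

# Claiming high uncertainty lowers the penalty for a large residual,
# but costs the network on the log-variance term.
noisy = attenuated_l2_loss(np.array([1.0]), np.array([2.0]), np.array([4.0]))
confident = attenuated_l2_loss(np.array([1.0]), np.array([0.0]), np.array([4.0]))
```

Because the variance term is learned jointly with the prediction, the data-dependent uncertainty falls out of training directly, with no extra sampling at inference.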
arXiv Detail & Related papers (2021-11-21T15:58:01Z)
- Model Uncertainty Quantification for Reliable Deep Vision Structural Health Monitoring [2.5126058470073263]
This paper proposes Bayesian inference for deep vision structural health monitoring models.
Uncertainty can be quantified using Monte Carlo dropout sampling.
Three independent case studies for cracks, local damage identification, and bridge component detection are investigated.
arXiv Detail & Related papers (2020-04-10T17:54:10Z)
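Monte Carlo dropout sampling, used in the last entry above, can be sketched with a toy linear layer: dropout stays active at inference, and the spread over stochastic forward passes approximates model (epistemic) uncertainty. The model, names, and parameters below are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

def mc_dropout_predict(x, weights, rng, p_drop=0.5, n_samples=100):
    """Run n_samples stochastic forward passes of a toy linear 'layer' with
    dropout kept on, returning the mean prediction and its standard
    deviation as an uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) >= p_drop      # Bernoulli keep-mask
        w = weights * mask / (1.0 - p_drop)             # inverted-dropout scaling
        preds.append(float(x @ w))
    preds = np.array(preds)
    return preds.mean(), preds.std()

rng = np.random.default_rng(42)
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.2, 0.1])
mean, std = mc_dropout_predict(x, w, rng)
```

The inverted-dropout scaling keeps the expected prediction equal to the deterministic one, so only the spread, not the mean, changes with the dropout rate.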
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.