Approaching Neural Network Uncertainty Realism
- URL: http://arxiv.org/abs/2101.02974v1
- Date: Fri, 8 Jan 2021 11:56:12 GMT
- Title: Approaching Neural Network Uncertainty Realism
- Authors: Joachim Sicking, Alexander Kister, Matthias Fahrland, Stefan Eickeler,
Fabian H\"uger, Stefan R\"uping, Peter Schlicht, Tim Wirtz
- Abstract summary: Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt the variational U-Net architecture to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
- Score: 53.308409014122816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical models are inherently uncertain. Quantifying or at least
upper-bounding their uncertainties is vital for safety-critical systems such as
autonomous vehicles. While standard neural networks do not report this
information, several approaches exist to integrate uncertainty estimates into
them. Assessing the quality of these uncertainty estimates is not
straightforward, as no direct ground truth labels are available. Instead,
implicit statistical assessments are required. For regression, we propose to
evaluate uncertainty realism -- a strict quality criterion -- with a
Mahalanobis distance-based statistical test. An empirical evaluation reveals
the need for uncertainty measures that are appropriate to upper-bound
heavy-tailed empirical errors. Alongside, we transfer the variational U-Net
classification architecture to standard supervised image-to-image tasks. We
adapt it to the automotive domain and show that it significantly improves
uncertainty realism compared to a plain encoder-decoder model.
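To make the criterion concrete, here is a minimal sketch in Python (NumPy/SciPy), assuming a Gaussian regression model that predicts a per-sample mean and covariance: if the predicted uncertainties are realistic, the squared Mahalanobis distances of the residuals should follow a chi-squared distribution with as many degrees of freedom as output dimensions. The function name and the use of a Kolmogorov-Smirnov goodness-of-fit test are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a Mahalanobis-distance-based uncertainty realism test.
# Assumes a regression model that outputs a per-sample mean and covariance;
# details of the paper's test statistic and decision rule may differ.
import numpy as np
from scipy import stats

def uncertainty_realism_test(y_true, mu, cov, alpha=0.05):
    """Test whether predicted covariances explain the empirical errors.

    If the predicted Gaussians are realistic, the squared Mahalanobis
    distances of the residuals follow a chi-squared distribution with
    k degrees of freedom, where k is the output dimension.

    y_true, mu : arrays of shape (n, k)
    cov        : array of shape (n, k, k) of predicted covariances
    """
    residuals = y_true - mu                                 # (n, k)
    # Solve cov @ z = residual batch-wise instead of inverting covariances.
    z = np.linalg.solve(cov, residuals[..., None])[..., 0]  # (n, k)
    d2 = np.einsum("nk,nk->n", residuals, z)                # squared distances
    k = y_true.shape[1]
    # Goodness of fit of d2 against chi2(k) via a Kolmogorov-Smirnov test.
    ks_stat, p_value = stats.kstest(d2, stats.chi2(df=k).cdf)
    return {"ks_stat": ks_stat, "p_value": p_value, "realistic": p_value > alpha}

# Toy usage: well-calibrated Gaussian predictions should pass the test.
rng = np.random.default_rng(0)
n, k = 2000, 2
mu = rng.normal(size=(n, k))
cov = np.tile(0.5 * np.eye(k), (n, 1, 1))
y = mu + rng.multivariate_normal(np.zeros(k), 0.5 * np.eye(k), size=n)
print(uncertainty_realism_test(y, mu, cov))
```

Heavy-tailed empirical errors show up as distances that are over-dispersed relative to chi2(k), so the test rejects; this is exactly the failure mode the abstract highlights.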
Related papers
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent
Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z) - URL: A Representation Learning Benchmark for Transferable Uncertainty
Estimates [26.453013634439802]
We propose the Uncertainty-aware Representation Learning benchmark.
It measures the zero-shot transferability of the uncertainty estimate using a novel metric.
We find that approaches that focus on the uncertainty of the representation itself or estimate the prediction risk directly outperform those that are based on the probabilities of upstream classes.
arXiv Detail & Related papers (2023-07-07T19:34:04Z) - Evaluating AI systems under uncertain ground truth: a case study in
dermatology [44.80772162289557]
We propose a metric for measuring annotation uncertainty and provide uncertainty-adjusted metrics for performance evaluation.
We present a case study applying our framework to skin condition classification from images where annotations are provided in the form of differential diagnoses.
arXiv Detail & Related papers (2023-07-05T10:33:45Z) - Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty
Optimization [11.456242421204298]
In a well-calibrated model, uncertainty estimates should perfectly correlate with model error.
We propose a novel error-aligned uncertainty optimization method and introduce a trainable loss function that guides models to yield high-quality uncertainty estimates aligned with the model error.
We demonstrate that our method improves average displacement error by 1.69% and 4.69%, and the correlation between uncertainty and model error by 17.22% and 19.13% as measured by the Pearson correlation coefficient, on two state-of-the-art baselines.
arXiv Detail & Related papers (2022-12-09T12:33:26Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of the neural network to automatically assess downstream uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - Reliability-Aware Prediction via Uncertainty Learning for Person Image
Retrieval [51.83967175585896]
The proposed UAL method aims to provide reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Dense Uncertainty Estimation via an Ensemble-based Conditional Latent
Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z) - CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z) - Uncertainty-Aware Reliable Text Classification [21.517852608625127]
Deep neural networks have contributed significantly to gains in predictive accuracy for classification tasks.
However, they tend to make over-confident predictions in real-world settings, where domain shift and out-of-distribution examples exist.
We propose an inexpensive framework that uses both auxiliary outliers and pseudo off-manifold samples to train the model with prior knowledge of a certain class; a minimal sketch of this style of training appears after the list.
arXiv Detail & Related papers (2021-07-15T04:39:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.