Uncertainty Quantification for Deep Neural Networks: An Empirical
Comparison and Usage Guidelines
- URL: http://arxiv.org/abs/2212.07118v1
- Date: Wed, 14 Dec 2022 09:12:30 GMT
- Title: Uncertainty Quantification for Deep Neural Networks: An Empirical
Comparison and Usage Guidelines
- Authors: Michael Weiss and Paolo Tonella
- Abstract summary: Deep Neural Networks (DNN) are increasingly used as components of larger software systems that need to process complex data.
This paper considers Deep Learning based Systems (DLS) that implement a supervisor by means of uncertainty estimation.
- Score: 4.987581730476023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNN) are increasingly used as components of larger
software systems that need to process complex data, such as images, written
texts, and audio/video signals. DNN predictions cannot be assumed to be always
correct for several reasons, among which are the huge input space being dealt
with, the ambiguity of some input data, and the intrinsic properties of
learning algorithms, which can provide only statistical guarantees. Hence,
developers have to cope with some residual error probability. An architectural
pattern commonly adopted to manage failure-prone components is the supervisor:
an additional component that estimates the reliability of the predictions
made by untrusted (e.g., DNN) components and activates an automated healing
procedure when these are likely to fail, ensuring that the Deep Learning based
System (DLS) does not cause damage, despite its main functionality being
suspended.
In this paper, we consider DLS that implement a supervisor by means of
uncertainty estimation. After reviewing the main approaches to uncertainty
estimation and discussing their pros and cons, we motivate the need for a
specific empirical assessment method that can deal with the experimental
setting in which supervisors are used, where the accuracy of the DNN matters
only as long as the supervisor lets the DLS continue to operate. Then we
present a large empirical study conducted to compare the alternative approaches
to uncertainty estimation. We distilled a set of guidelines that help developers
incorporate a supervisor based on uncertainty monitoring into a DLS.
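
As a concrete illustration of the supervisor pattern described in the abstract, the sketch below thresholds an entropy-based uncertainty score computed from repeated stochastic forward passes (as in Monte-Carlo dropout) and triggers a healing procedure when the score is too high. This is a minimal, hypothetical sketch, not the implementation evaluated in the paper: the model stub, the entropy-based score, and the threshold value are illustrative assumptions.

import numpy as np

# Minimal supervisor sketch (illustrative assumptions, not the paper's implementation).
# Assumed setting: a stochastic classifier that returns one softmax vector per call,
# e.g., a DNN with dropout kept active at inference time (Monte-Carlo dropout).

def predictive_entropy(samples: np.ndarray) -> float:
    """Entropy of the mean softmax over several stochastic forward passes."""
    mean_probs = samples.mean(axis=0)
    return float(-(mean_probs * np.log(mean_probs + 1e-12)).sum())

def supervised_predict(stochastic_model, x, n_samples=30, threshold=0.5):
    """Return (prediction, None) if trusted, or (None, "healing") if the supervisor rejects."""
    samples = np.stack([stochastic_model(x) for _ in range(n_samples)])
    if predictive_entropy(samples) > threshold:
        # The DNN prediction is likely wrong: suspend the main functionality and
        # activate an automated healing procedure (e.g., fallback controller, human hand-over).
        return None, "healing"
    return int(samples.mean(axis=0).argmax()), None

# Toy usage with a random stand-in for an MC-dropout DNN (hypothetical).
rng = np.random.default_rng(0)
fake_model = lambda x: rng.dirichlet(np.ones(10))
print(supervised_predict(fake_model, x=None))

In practice, the threshold would be calibrated on validation data to balance unnecessary interruptions of the DLS against missed failures.
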
Related papers
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory
Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In
Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z) - Interval Deep Learning for Uncertainty Quantification in Safety
Applications [0.0]
Current deep neural networks (DNNs) do not have an implicit mechanism to quantify and propagate significant input data uncertainty.
We present a DNN optimized with gradient-based methods capable to quantify input and parameter uncertainty by means of interval analysis.
We show that the Deep Interval Neural Network (DINN) can produce accurate bounded estimates from uncertain input data.
arXiv Detail & Related papers (2021-05-13T17:21:33Z) - Uncertainty-aware Remaining Useful Life predictor [57.74855412811814]
Remaining Useful Life (RUL) estimation is the problem of inferring how long a certain industrial asset can be expected to operate.
In this work, we consider Deep Gaussian Processes (DGPs) as possible solutions to the aforementioned limitations.
The performance of the algorithms is evaluated on the N-CMAPSS dataset from NASA for aircraft engines.
arXiv Detail & Related papers (2021-04-08T08:50:44Z) - Sketching Curvature for Efficient Out-of-Distribution Detection for Deep
Neural Networks [32.629801680158685]
Sketching Curvature of OoD Detection (SCOD) is an architecture-agnostic framework for equipping trained Deep Neural Networks with task-relevant uncertainty estimates.
We demonstrate that SCOD achieves comparable or better OoD detection performance with lower computational burden relative to existing baselines.
arXiv Detail & Related papers (2021-02-24T21:34:40Z) - Fail-Safe Execution of Deep Learning based Systems through Uncertainty
Monitoring [4.56877715768796]
A fail-safe Deep Learning based System (DLS) is equipped to handle DNN faults by means of a supervisor.
We propose an approach to use DNN uncertainty estimators to implement such a supervisor.
We describe our publicly available tool UNCERTAINTY-WIZARD, which allows transparent estimation of uncertainty for regular tf.keras DNNs.
arXiv Detail & Related papers (2021-02-01T15:22:54Z) - An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear
Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z) - A Comparison of Uncertainty Estimation Approaches in Deep Learning
Components for Autonomous Vehicle Applications [0.0]
A key factor for ensuring safety in Autonomous Vehicles (AVs) is avoiding abnormal behaviors under undesirable and unpredicted circumstances.
Different methods for uncertainty quantification have recently been proposed to measure the inevitable source of errors in data and models.
These methods require a higher computational load, a higher memory footprint, and introduce extra latency, which can be prohibitive in safety-critical applications.
arXiv Detail & Related papers (2020-06-26T18:55:10Z) - NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
Existing OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z) - Assurance Monitoring of Cyber-Physical Systems with Machine Learning
Components [2.1320960069210484]
We investigate how to use the conformal prediction framework for assurance monitoring of Cyber-Physical Systems.
In order to handle high-dimensional inputs in real-time, we compute nonconformity scores using embedding representations of the learned models.
By leveraging conformal prediction, the approach provides well-calibrated confidence and can allow monitoring that ensures a bounded small error rate (a generic split-conformal sketch is given after this list).
arXiv Detail & Related papers (2020-01-14T19:34:51Z)
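
For reference, the split conformal prediction idea mentioned in the last entry can be sketched as follows. This is a generic, hypothetical sketch, not the cited paper's method (which computes nonconformity scores from embedding representations of the learned models): the calibration scores and the toy nonconformity choice are illustrative assumptions.

import numpy as np

# Generic split conformal prediction sketch (illustrative, not the cited paper's method).
# Given calibration nonconformity scores and a miscoverage level alpha, the prediction
# set for a new input keeps every label whose score is below the calibration quantile,
# which bounds the error rate by alpha under exchangeability.

def conformal_quantile(calibration_scores: np.ndarray, alpha: float) -> float:
    n = len(calibration_scores)
    # Finite-sample corrected quantile level.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(calibration_scores, min(level, 1.0)))

def prediction_set(test_scores_per_label: np.ndarray, q_hat: float) -> np.ndarray:
    # Keep all labels whose nonconformity score does not exceed the threshold.
    return np.where(test_scores_per_label <= q_hat)[0]

# Toy usage: nonconformity could be 1 minus the softmax probability of the true label.
rng = np.random.default_rng(1)
calibration_scores = rng.uniform(size=500)   # stand-in calibration scores
q_hat = conformal_quantile(calibration_scores, alpha=0.1)
test_scores = rng.uniform(size=10)           # one score per candidate label
print(prediction_set(test_scores, q_hat))    # labels retained at 90% coverage
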