Fail-Safe Execution of Deep Learning based Systems through Uncertainty
Monitoring
- URL: http://arxiv.org/abs/2102.00902v1
- Date: Mon, 1 Feb 2021 15:22:54 GMT
- Title: Fail-Safe Execution of Deep Learning based Systems through Uncertainty
Monitoring
- Authors: Michael Weiss and Paolo Tonella
- Abstract summary: A fail-safe Deep Learning based System (DLS) is equipped to handle DNN faults by means of a supervisor.
We propose an approach to use DNN uncertainty estimators to implement such a supervisor.
We describe our publicly available tool UNCERTAINTY-WIZARD, which allows transparent estimation of uncertainty for regular tf.keras DNNs.
- Score: 4.56877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern software systems rely on Deep Neural Networks (DNN) when processing
complex, unstructured inputs, such as images, videos, natural language texts or
audio signals. Given the intractably large size of such input spaces, the
intrinsic limitations of learning algorithms, and the ambiguity about the
expected predictions for some of the inputs, not only is there no guarantee
that DNN predictions are always correct, but developers must also safely
assume a low, though not negligible, error probability. A fail-safe Deep
Learning based System (DLS) is one equipped to handle DNN faults by means of a
supervisor, capable of recognizing predictions that should not be trusted and
that should activate a healing procedure bringing the DLS to a safe state. In
this paper, we propose an approach to use DNN uncertainty estimators to
implement such a supervisor. We first discuss the advantages and disadvantages
of existing approaches to measure uncertainty for DNNs and propose novel
metrics for the empirical assessment of the supervisor that rely on such
approaches. We then describe our publicly available tool UNCERTAINTY-WIZARD,
which allows transparent estimation of uncertainty for regular tf.keras DNNs.
Lastly, we discuss a large-scale study conducted on four different subjects to
empirically validate the approach, reporting the lessons learned as guidance
for software engineers who intend to monitor uncertainty for fail-safe
execution of DLS.
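To make the supervisor concept concrete, the sketch below shows one way a supervisor could wrap a tf.keras classifier: uncertainty is estimated with Monte-Carlo dropout and predictive entropy, and predictions whose entropy exceeds a threshold are rejected and routed to a fallback ("healing") routine instead of being trusted. This is a minimal illustration under stated assumptions, not the authors' implementation and not the UNCERTAINTY-WIZARD API; the threshold value and the fallback function are hypothetical.
```python
# Minimal sketch of an uncertainty-based supervisor (illustrative, not the
# paper's implementation). Assumes `model` is a tf.keras classifier that
# contains dropout layers, so calling it with training=True keeps dropout
# active (Monte-Carlo dropout).
import numpy as np

def mc_dropout_samples(model, x, n_samples=32):
    """Collect n_samples stochastic softmax outputs per input."""
    return np.stack([model(x, training=True).numpy() for _ in range(n_samples)])

def predictive_entropy(samples):
    """Entropy of the mean softmax output; higher means more uncertain."""
    mean_probs = samples.mean(axis=0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)

def supervised_predict(model, x, threshold=0.5, fallback=None):
    """Return (prediction, trusted) pairs; untrusted inputs go to the fallback."""
    samples = mc_dropout_samples(model, x)
    entropy = predictive_entropy(samples)
    preds = samples.mean(axis=0).argmax(axis=-1)
    results = []
    for pred, ent, xi in zip(preds, entropy, x):
        if ent <= threshold:
            results.append((int(pred), True))
        else:
            # Healing procedure: e.g., hand over to a human operator or a safe default.
            results.append((fallback(xi) if fallback else None, False))
    return results

# Usage (hypothetical): results = supervised_predict(model, x_batch, threshold=0.8)
```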
Related papers
- Uncertainty Calibration with Energy Based Instance-wise Scaling in the Wild Dataset [23.155946032377052]
We introduce a novel instance-wise calibration method based on an energy model.
Our method incorporates energy scores instead of softmax confidence scores, allowing for adaptive consideration of uncertainty (the energy score itself is sketched after this list).
In experiments, we show that the proposed method consistently maintains robust performance across the spectrum.
arXiv Detail & Related papers (2024-07-17T06:14:55Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types: input, system, and output.
We discuss the challenges of uncertainty estimation in NLP and outline potential future directions.
arXiv Detail & Related papers (2023-06-05T06:46:53Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Uncertainty Quantification for Deep Neural Networks: An Empirical Comparison and Usage Guidelines [4.987581730476023]
Deep Neural Networks (DNN) are increasingly used as components of larger software systems that need to process complex data.
The paper compares uncertainty quantification approaches for use in Deep Learning based Systems (DLS) that implement a supervisor by means of uncertainty estimation.
arXiv Detail & Related papers (2022-12-14T09:12:30Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Interval Deep Learning for Uncertainty Quantification in Safety Applications [0.0]
Current deep neural networks (DNNs) do not have an implicit mechanism to quantify and propagate significant input data uncertainty.
We present a DNN optimized with gradient-based methods capable of quantifying input and parameter uncertainty by means of interval analysis.
We show that the Deep Interval Neural Network (DINN) can produce accurate bounded estimates from uncertain input data.
arXiv Detail & Related papers (2021-05-13T17:21:33Z)
- PAC Confidence Predictions for Deep Neural Network Classifiers [28.61937254015157]
A key challenge for deploying deep neural networks (DNNs) in safety-critical settings is the need for rigorous ways to quantify their uncertainty.
We propose an algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees.
arXiv Detail & Related papers (2020-11-02T04:09:17Z)
- A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications [0.0]
A key factor for ensuring safety in Autonomous Vehicles (AVs) is avoiding abnormal behaviors under undesirable and unpredicted circumstances.
Different methods for uncertainty quantification have recently been proposed to measure the inevitable source of errors in data and models.
These methods require a higher computational load and memory footprint and introduce extra latency, which can be prohibitive in safety-critical applications.
arXiv Detail & Related papers (2020-06-26T18:55:10Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
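The energy-based calibration entry above refers to a sketch after this list. The energy score it mentions is conventionally computed from the classifier's raw logits as E(x) = -T * logsumexp(logits / T), with lower energy indicating higher confidence; the snippet below sketches that quantity only, not the instance-wise calibration method of that paper, and the temperature value is an illustrative assumption.
```python
# Minimal sketch of the energy score used as an alternative to max-softmax
# confidence (illustrative; temperature T=1.0 is an assumption).
import numpy as np

def energy_score(logits, temperature=1.0):
    """E(x) = -T * logsumexp(logits / T) for logits of shape (batch, n_classes)."""
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max(axis=-1, keepdims=True)                     # numerical stability
    lse = np.log(np.exp(z - m).sum(axis=-1)) + m.squeeze(-1)
    return -temperature * lse                             # shape: (batch,)
```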