Interval Deep Learning for Uncertainty Quantification in Safety
Applications
- URL: http://arxiv.org/abs/2105.06438v1
- Date: Thu, 13 May 2021 17:21:33 GMT
- Title: Interval Deep Learning for Uncertainty Quantification in Safety
Applications
- Authors: David Betancourt and Rafi Muhanna
- Abstract summary: Current deep neural networks (DNNs) do not have an implicit mechanism to quantify and propagate significant input data uncertainty.
We present a DNN optimized with gradient-based methods, capable of quantifying input and parameter uncertainty by means of interval analysis.
We show that the Deep Interval Neural Network (DINN) can produce accurate bounded estimates from uncertain input data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep neural networks (DNNs) are becoming more prevalent in important
safety-critical applications, where reliability in the prediction is paramount.
Despite their exceptional prediction capabilities, current DNNs do not have an
implicit mechanism to quantify and propagate significant input data uncertainty
-- which is common in safety-critical applications. In many cases, this
uncertainty is epistemic and can arise from multiple sources, such as lack of
knowledge about the data generating process, imprecision, ignorance, and poor
understanding of physics phenomena. Recent approaches have focused on
quantifying parameter uncertainty, but approaches to end-to-end training of
DNNs with epistemic input data uncertainty are more limited and largely
problem-specific. In this work, we present a DNN optimized with gradient-based
methods, capable of quantifying input and parameter uncertainty by means of
interval analysis, which we call the Deep Interval Neural Network (DINN). We
perform experiments on an air pollution dataset with sensor uncertainty and
show that the DINN can produce accurate bounded estimates from uncertain input
data.
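The core of the interval-analysis approach is propagating bounded inputs through the network so the output is itself a guaranteed interval. As a minimal illustration (the function names are ours, and a trained DINN also carries interval parameters, which this sketch omits), an input interval can be pushed through a dense layer with point-valued weights by splitting the weight matrix into its positive and negative parts:

```python
import numpy as np

def interval_dense(x_lo, x_hi, W, b):
    """Propagate the input box [x_lo, x_hi] through y = W @ x + b.

    Positive weights pair x_lo with the lower bound and x_hi with the
    upper bound; negative weights do the opposite. This yields tight
    bounds for a single affine layer.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return y_lo, y_hi

def interval_relu(y_lo, y_hi):
    # ReLU is monotone, so it can be applied endpoint-wise.
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

# y = x1 - 2*x2 with x1 in [0, 1], x2 in [1, 2] gives y in [-4, -1]
lo, hi = interval_dense(np.array([0.0, 1.0]), np.array([1.0, 2.0]),
                        np.array([[1.0, -2.0]]), np.array([0.0]))
```

Stacking such layers propagates sensor uncertainty end to end; the width of the final interval is the bounded estimate the abstract refers to.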
Related papers
- Uncertainty Calibration with Energy Based Instance-wise Scaling in the Wild Dataset [23.155946032377052]
We introduce a novel instance-wise calibration method based on an energy model.
Our method incorporates energy scores instead of softmax confidence scores, allowing for adaptive consideration of uncertainty.
In experiments, we show that the proposed method consistently maintains robust performance across the spectrum.
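The energy score this summary refers to is the standard free-energy statistic from energy-based out-of-distribution detection; the paper's instance-wise scaling builds on it. A minimal sketch of the score itself (the calibration procedure is not reproduced here):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Free energy E(x) = -T * logsumexp(logits / T).

    Lower energy corresponds to higher model confidence. Unlike the
    softmax maximum, the score preserves the overall magnitude of the
    logits, which is what makes it useful for uncertainty estimation.
    """
    z = np.asarray(logits, dtype=float) / T
    m = z.max()  # subtract the max for numerical stability
    return -T * (m + np.log(np.exp(z - m).sum()))
```

Sharply peaked logits yield a much lower energy than flat ones, so thresholding or rescaling the energy gives an adaptive per-instance confidence signal.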
arXiv Detail & Related papers (2024-07-17T06:14:55Z)
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types, including input, system, and output.
We discuss the challenges of uncertainty estimation in NLP and discuss potential future directions.
arXiv Detail & Related papers (2023-06-05T06:46:53Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring [4.56877715768796]
A fail-safe Deep Learning based System (DLS) is equipped to handle DNN faults by means of a supervisor.
We propose an approach to use DNN uncertainty estimators to implement such a supervisor.
We describe our publicly available tool UNCERTAINTY-WIZARD, which allows transparent estimation of uncertainty for regular tf.keras DNNs.
arXiv Detail & Related papers (2021-02-01T15:22:54Z)
- Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning [25.8227937350516]
We show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.
We propose a probabilistic generalization of the popular sample-efficient non-parametric kNN approach.
Our approach enables deep kNN to accurately quantify underlying uncertainties in its prediction.
arXiv Detail & Related papers (2020-07-18T21:36:31Z)
- A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications [0.0]
A key factor in ensuring safety in Autonomous Vehicles (AVs) is avoiding abnormal behavior under undesirable and unpredicted circumstances.
Different methods for uncertainty quantification have recently been proposed to measure the inevitable source of errors in data and models.
These methods require a higher computational load, a higher memory footprint, and introduce extra latency, which can be prohibitive in safety-critical applications.
arXiv Detail & Related papers (2020-06-26T18:55:10Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Interval Neural Networks: Uncertainty Scores [11.74565957328407]
We propose a fast, non-Bayesian method for producing uncertainty scores in the output of pre-trained deep neural networks (DNNs).
This interval neural network (INN) has interval valued parameters and propagates its input using interval arithmetic.
In numerical experiments on an image reconstruction task, we demonstrate the practical utility of INNs as a proxy for the prediction error.
arXiv Detail & Related papers (2020-03-25T18:03:51Z)
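The INN summary above mentions interval-valued parameters, which require full interval arithmetic rather than the simpler case of point weights. As a minimal sketch (the function names are ours, and a real INN would also track dependencies between variables to avoid over-wide bounds), the product of two intervals is the min/max over the four endpoint products:

```python
import numpy as np

def interval_mul(a_lo, a_hi, b_lo, b_hi):
    """Exact elementwise product of two intervals:
    min/max over the four endpoint products."""
    p = np.stack([a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi])
    return p.min(axis=0), p.max(axis=0)

def interval_neuron(x_lo, x_hi, w_lo, w_hi, b_lo, b_hi):
    """One neuron with interval weights AND interval inputs:
    y = sum_i w_i * x_i + b, evaluated in interval arithmetic."""
    p_lo, p_hi = interval_mul(x_lo, x_hi, w_lo, w_hi)
    return p_lo.sum() + b_lo, p_hi.sum() + b_hi
```

The width of the resulting output interval is a natural per-prediction uncertainty score, which is how these works use interval propagation as a proxy for prediction error.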