Exploring Uncertainty in Deep Learning for Construction of Prediction
Intervals
- URL: http://arxiv.org/abs/2104.12953v1
- Date: Tue, 27 Apr 2021 02:58:20 GMT
- Title: Exploring Uncertainty in Deep Learning for Construction of Prediction
Intervals
- Authors: Yuandu Lai, Yucheng Shi, Yahong Han, Yunfeng Shao, Meiyu Qi, Bingshuai
Li
- Abstract summary: We explore the uncertainty in deep learning to construct prediction intervals.
We design a special loss function that enables us to learn uncertainty without uncertainty labels.
Our method ties the construction of prediction intervals to the uncertainty estimates.
- Score: 27.569681578957645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved impressive performance on many tasks in
recent years. However, it is often not enough for deep neural networks to
provide only point estimates; for high-risk tasks, we must assess the
reliability of the model's predictions. This requires quantifying the
uncertainty of model predictions and constructing prediction intervals. In
this paper, we explore the uncertainty in deep learning to construct
prediction intervals. We comprehensively consider two categories of
uncertainty: aleatoric uncertainty and epistemic uncertainty. We design a
special loss function that enables us to learn uncertainty without
uncertainty labels; only the regression task itself needs to be supervised.
Aleatoric uncertainty is learned implicitly from the loss function, while
epistemic uncertainty is accounted for through ensembling. Our method ties
the construction of prediction intervals to the uncertainty estimates.
Results on several publicly available datasets show that the performance of
our method is competitive with other state-of-the-art methods.
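The construction the abstract outlines follows the familiar mean-variance
plus deep-ensemble recipe. Below is a minimal PyTorch sketch of that recipe;
the architecture, layer sizes, and all names are illustrative assumptions,
not the authors' released code.

```python
# Sketch: a regressor with mean and log-variance heads, trained with a
# heteroscedastic Gaussian NLL (no uncertainty labels needed), plus an
# ensemble whose spread supplies the epistemic part of the interval.
import torch
import torch.nn as nn

class MeanVarianceNet(nn.Module):
    """Regressor with two heads: predicted mean and log-variance."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_var_head = nn.Linear(hidden, 1)  # log-variance for stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, y):
    # Heteroscedastic Gaussian NLL: supervises only the regression target y,
    # yet the log_var head learns the (aleatoric) variance implicitly.
    return 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()

def prediction_interval(models, x, z: float = 1.96):
    # Total variance = mean of per-model variances (aleatoric part)
    # + variance of per-model means (epistemic part).
    with torch.no_grad():
        outs = [m(x) for m in models]
    means = torch.stack([mu for mu, _ in outs])       # (M, batch, 1)
    alea = torch.stack([lv.exp() for _, lv in outs])  # (M, batch, 1)
    mu = means.mean(dim=0)
    total_var = alea.mean(dim=0) + means.var(dim=0, unbiased=False)
    half = z * total_var.sqrt()                       # z = 1.96 -> ~95% PI
    return mu - half, mu + half
```

Each ensemble member would be trained independently with gaussian_nll on the
same regression targets, consistent with the claim that no uncertainty labels
are required.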
Related papers
- Fast Uncertainty Estimates in Deep Learning Interatomic Potentials [0.0]
We propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble.
We demonstrate that the quality of the uncertainty estimates matches those obtained from deep ensembles.
arXiv Detail & Related papers (2022-11-17T20:13:39Z)
- Comparison of Uncertainty Quantification with Deep Learning in Time Series Regression [7.6146285961466]
In this paper, different uncertainty estimation methods are compared to forecast meteorological time series data.
Results show how each uncertainty estimation method performs on the forecasting task.
arXiv Detail & Related papers (2022-11-11T14:29:13Z)
- On the Difficulty of Epistemic Uncertainty Quantification in Machine Learning: The Case of Direct Uncertainty Estimation through Loss Minimisation [8.298716599039501]
Uncertainty quantification has received increasing attention in machine learning.
Of the two broad types of uncertainty, aleatoric and epistemic, the latter refers to the learner's (lack of) knowledge and appears to be especially difficult to measure and quantify.
We show that loss minimisation does not work for second-order predictors.
arXiv Detail & Related papers (2022-03-11T17:26:05Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of the out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty; a schematic version follows this entry.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
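The summary reduces to a simple subtraction; rendered schematically below,
with notation of my own choosing rather than DEUP's exact formulation.

```latex
% Schematic of the decomposition described in the summary (illustrative):
% predicted total error minus estimated aleatoric error leaves the epistemic
% part.
\[
  \widehat{\mathrm{EU}}(x) \;=\; \hat{e}(x) \;-\; \hat{a}(x)
\]
% \hat{e}: auxiliary error predictor trained on observed out-of-sample losses
%          of the main model;
% \hat{a}: estimate of the aleatoric (irreducible) error at x.
```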
- Do Not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting [29.196246255389664]
One of the major limitations of deep learning models is that they face catastrophic forgetting in an incremental learning scenario.
We consider a Bayesian formulation to obtain the data and model uncertainties.
We also incorporate a self-attention framework to address the incremental learning problem.
arXiv Detail & Related papers (2021-02-03T06:54:52Z)
- Double-Uncertainty Weighted Method for Semi-supervised Learning [32.484750353853954]
We propose a double-uncertainty weighted method for semi-supervised segmentation based on the teacher-student model.
We train the teacher model using Bayesian deep learning to obtain a double uncertainty, i.e. segmentation uncertainty and feature uncertainty (see the sketch after this entry).
Our method outperforms the state-of-the-art uncertainty-based semi-supervised methods on two public medical datasets.
arXiv Detail & Related papers (2020-10-19T08:20:18Z)
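A minimal sketch, assuming MC dropout as the Bayesian approximation; the
teacher's return signature and every name below are illustrative assumptions
rather than the paper's code.

```python
# Double uncertainty from T stochastic forward passes of a dropout teacher:
# segmentation uncertainty = predictive entropy of the averaged softmax;
# feature uncertainty = variance of an intermediate feature map.
import torch

def double_uncertainty(teacher, x, T: int = 8):
    teacher.train()  # keep dropout active to sample the approximate posterior
    probs, feats = [], []
    with torch.no_grad():
        for _ in range(T):
            p, f = teacher(x)  # assumed signature: (softmax probs, features)
            probs.append(p)
            feats.append(f)
    p_mean = torch.stack(probs).mean(dim=0)                 # (B, C, H, W)
    seg_unc = -(p_mean * (p_mean + 1e-8).log()).sum(dim=1)  # entropy, (B, H, W)
    feat_unc = torch.stack(feats).var(dim=0).mean(dim=1)    # (B, H', W')
    return seg_unc, feat_unc
```

Per the title, these two maps would then weight the student's consistency
loss so that regions where the teacher is uncertain contribute less.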
- Temporal Difference Uncertainties as a Signal for Exploration [76.6341354269013]
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy.
In this paper, we highlight that value estimates are easily biased and temporally inconsistent.
We propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors.
arXiv Detail & Related papers (2020-10-05T18:11:22Z)
- Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The DJ satisfies both requirements for usable uncertainty estimates, namely covering the true prediction targets with high probability and discriminating between high- and low-confidence predictions; it is applicable to a wide range of deep learning models, is easy to implement, and can be applied post hoc without interfering with model training or compromising its accuracy. A schematic of the leave-one-out construction follows this entry.
arXiv Detail & Related papers (2020-06-29T13:36:52Z)
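A schematic of a first-order version of that construction; the paper uses
higher-order influence functions, whose extra terms are omitted in this
sketch.

```latex
% First-order influence-function approximation of leave-one-out parameters
% (standard Koh & Liang-style formula; illustrative, not the DJ paper's
% higher-order version):
\[
  \hat{\theta}_{-i} \;\approx\; \hat{\theta}
    + \tfrac{1}{n}\, H_{\hat{\theta}}^{-1} \nabla_{\theta}\,\ell(z_i, \hat{\theta}),
  \qquad
  H_{\hat{\theta}} \;=\; \tfrac{1}{n}\sum_{j=1}^{n} \nabla_{\theta}^{2}\,\ell(z_j, \hat{\theta}).
\]
% Leave-one-out residuals R_i = |y_i - f(x_i; \hat{\theta}_{-i})| are thus
% obtained without retraining, and a jackknife-style interval for a new input
% x is f(x; \hat{\theta}) \pm Q_{1-\alpha}(\{R_i\}), where Q_{1-\alpha} is the
% empirical (1-\alpha)-quantile of the residuals.
```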