PI3NN: Prediction intervals from three independently trained neural networks
- URL: http://arxiv.org/abs/2108.02327v1
- Date: Thu, 5 Aug 2021 00:55:20 GMT
- Title: PI3NN: Prediction intervals from three independently trained neural networks
- Authors: Siyan Liu, Pei Zhang, Dan Lu, Guannan Zhang
- Abstract summary: We propose a novel prediction interval method to learn prediction mean values, lower and upper bounds of prediction intervals from three independently trained neural networks.
Our method requires no distributional assumption on the data and introduces no unusual hyperparameters into either the neural network models or the loss function.
Numerical experiments on benchmark regression problems show that our method outperforms the state-of-the-art methods with respect to predictive uncertainty quality, robustness, and identification of out-of-distribution samples.
- Score: 4.714371905733244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel prediction interval method to learn prediction mean
values, lower and upper bounds of prediction intervals from three independently
trained neural networks only using the standard mean squared error (MSE) loss,
for uncertainty quantification in regression tasks. Our method requires no
distributional assumption on the data and introduces no unusual hyperparameters
into either the neural network models or the loss function. Moreover, our method
can effectively identify out-of-distribution samples and reasonably quantify
their uncertainty. Numerical experiments on benchmark regression problems show
that our method outperforms the state-of-the-art methods with respect to
predictive uncertainty quality, robustness, and identification of
out-of-distribution samples.
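The three-network recipe described in the abstract can be sketched end to end. The following is a minimal illustration, not the paper's exact procedure: three independently trained networks (tiny NumPy MLPs standing in for the authors' models) fit the prediction mean and the residual magnitudes above and below it, each with plain MSE, and a simple bisection stands in for the paper's root-finding step that scales the bound networks to the target coverage. The synthetic data, network sizes, and the `fit_multiplier` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=32, steps=3000, lr=0.05):
    """One-hidden-layer tanh MLP trained with the plain MSE loss
    by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)
        pred = (H @ W2 + b2).ravel()
        g = (2.0 / n) * (pred - y)[:, None]      # d(MSE)/d(pred)
        gW2, gb2 = H.T @ g, g.sum(axis=0)
        gH = (g @ W2.T) * (1.0 - H ** 2)         # backprop through tanh
        gW1, gb1 = X.T @ gH, gH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

# Synthetic 1-D regression problem (illustrative only).
X = rng.uniform(-3.0, 3.0, (500, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.2, 500)

# Network 1: the prediction mean, plain MSE.
f_mean = train_mlp(X, y)
resid = y - f_mean(X)

# Networks 2 and 3: magnitudes of the residuals above and below the mean,
# each trained independently, again with plain MSE only.
up, lo = resid >= 0, resid < 0
f_up = train_mlp(X[up], resid[up])
f_lo = train_mlp(X[lo], -resid[lo])

def fit_multiplier(f_bound, r, miss_rate):
    """Bisect a scale factor until the one-sided miss rate on the training
    set drops to the target; a stand-in for the paper's root-finding step."""
    bound = np.clip(f_bound(X), 0.0, None)
    a_lo, a_hi = 0.0, 50.0
    for _ in range(50):
        a = 0.5 * (a_lo + a_hi)
        if np.mean(r > a * bound) > miss_rate:
            a_lo = a          # too many misses: widen the interval
        else:
            a_hi = a
    return a_hi

gamma = 0.90                  # target 90% prediction interval
alpha = fit_multiplier(f_up, resid, (1.0 - gamma) / 2)
beta = fit_multiplier(f_lo, -resid, (1.0 - gamma) / 2)

lower = f_mean(X) - beta * np.clip(f_lo(X), 0.0, None)
upper = f_mean(X) + alpha * np.clip(f_up(X), 0.0, None)
cov = np.mean((y >= lower) & (y <= upper))
```

Because the bound networks only ever see MSE targets, no quantile or likelihood-based loss and no extra hyperparameters enter the training itself; all calibration happens in the cheap post-hoc scaling step.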
Related papers
- Uncertainty Quantification for Deep Learning [0.0]
A complete and statistically consistent uncertainty quantification for deep learning is provided.
We demonstrate how each uncertainty source can be systematically quantified.
We also introduce, for the first time, a fast and practical way to incorporate and combine all sources of error.
arXiv Detail & Related papers (2024-05-31T00:20:19Z)
- A General Framework for Uncertainty Quantification via Neural SDE-RNN [0.3314882635954751]
Uncertainty quantification is a critical yet unsolved challenge for deep learning.
We propose a novel framework based on the principles of recurrent neural networks and neural differential equations for reconciling irregularly sampled measurements.
Our experiments on the IEEE 37 bus test system reveal that our framework can outperform state-of-the-art uncertainty quantification approaches for time-series data imputations.
arXiv Detail & Related papers (2023-06-01T22:59:45Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Can a single neuron learn predictive uncertainty? [0.0]
We introduce a novel non-parametric quantile estimation method for continuous random variables based on the simplest neural network architecture with one degree of freedom: a single neuron.
In real-world applications, the method can be used to quantify predictive uncertainty under the split conformal prediction setting.
arXiv Detail & Related papers (2021-06-07T15:12:47Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution for extracting aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs [29.84563789289183]
Uncertainty quantification is crucial for building reliable and trustable machine learning systems.
We propose to estimate uncertainty in recurrent neural networks (RNNs) via discrete state transitions over recurrent timesteps.
The uncertainty of the model can be quantified by running a prediction several times, each time sampling from the recurrent state transition distribution.
arXiv Detail & Related papers (2020-11-24T10:35:28Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
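One entry in the list above asks whether a single neuron can learn predictive uncertainty via nonparametric quantile estimation. The core idea can be illustrated with the standard pinball (quantile) loss and a single free parameter; the data, learning rate, and step count below are illustrative assumptions, not that paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 10_000)

def pinball_quantile(y, q, steps=3000, lr=0.05):
    """Estimate the q-th quantile of y with one free parameter theta by
    subgradient descent on the mean pinball loss. The subgradient of
    mean_i pinball(y_i - theta) w.r.t. theta is F_hat(theta) - q, where
    F_hat is the empirical CDF, so theta converges to the q-th quantile."""
    theta = 0.0
    for _ in range(steps):
        g = np.mean(np.where(y > theta, -q, 1.0 - q))  # = F_hat(theta) - q
        theta -= lr * g
    return theta

theta90 = pinball_quantile(samples, 0.90)  # close to the N(0,1) 90th quantile
```

Because minimizing the pinball loss makes no distributional assumption, the resulting quantile estimates can be wrapped in a split conformal procedure, as the paper's abstract suggests, to obtain calibrated prediction intervals.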
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.