Prediction Intervals: Split Normal Mixture from Quality-Driven Deep Ensembles
- URL: http://arxiv.org/abs/2007.09670v1
- Date: Sun, 19 Jul 2020 13:46:34 GMT
- Title: Prediction Intervals: Split Normal Mixture from Quality-Driven Deep Ensembles
- Authors: Tárik S. Salem, Helge Langseth, Heri Ramampiaro
- Abstract summary: We present a method for generating prediction intervals along with point estimates from an ensemble of neural networks.
We propose a multi-objective loss function fusing quality measures related to prediction intervals and point estimates, and a penalty function, which enforces semantic integrity of the results.
Our results show that both our quality-driven loss function and our aggregation method contribute to well-calibrated prediction intervals and point estimates.
- Score: 4.521131595149397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prediction intervals are a machine- and human-interpretable way to represent
predictive uncertainty in a regression analysis. In this paper, we present a
method for generating prediction intervals along with point estimates from an
ensemble of neural networks. We propose a multi-objective loss function fusing
quality measures related to prediction intervals and point estimates, and a
penalty function, which enforces semantic integrity of the results and
stabilizes the training process of the neural networks. The ensembled
prediction intervals are aggregated as a split normal mixture accounting for
possible multimodality and asymmetry of the posterior predictive
distribution, and resulting in prediction intervals that capture aleatoric and
epistemic uncertainty. Our results show that both our quality-driven loss
function and our aggregation method contribute to well-calibrated prediction
intervals and point estimates.
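The aggregation step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes each ensemble member contributes one split normal component with mode `mu`, left scale `s1`, and right scale `s2` (parameter names are mine), mixes the components with equal weights, and inverts the mixture CDF numerically to obtain central interval endpoints.

```python
import math

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def split_normal_cdf(x, mu, s1, s2):
    """CDF of a split normal: mode mu, left scale s1, right scale s2.
    The two half-normal pieces are rescaled so the density is continuous."""
    if x <= mu:
        return 2.0 * s1 / (s1 + s2) * _phi((x - mu) / s1)
    return (s1 + 2.0 * s2 * (_phi((x - mu) / s2) - 0.5)) / (s1 + s2)

def mixture_quantile(p, components, lo=-1e6, hi=1e6, tol=1e-8):
    """Quantile of an equal-weight split normal mixture via bisection
    on its CDF, which is monotone, so bisection always converges."""
    def cdf(x):
        return sum(split_normal_cdf(x, *c) for c in components) / len(components)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def prediction_interval(components, alpha=0.05):
    """Central (1 - alpha) prediction interval from the mixture."""
    return (mixture_quantile(alpha / 2, components),
            mixture_quantile(1 - alpha / 2, components))
```

With a single symmetric component `(0.0, 1.0, 1.0)` the mixture reduces to a standard normal, so the 95% interval endpoints are approximately ±1.96, which is a quick sanity check of the inversion.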
Related papers
- Semiparametric conformal prediction [79.6147286161434]
Risk-sensitive applications require well-calibrated prediction sets over multiple, potentially correlated target variables.
We treat the scores as random vectors and aim to construct the prediction set accounting for their joint correlation structure.
We report desired coverage and competitive efficiency on a range of real-world regression problems.
arXiv Detail & Related papers (2024-11-04T14:29:02Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- Adaptive Neural Network Ensemble Using Frequency Distribution [3.42658286826597]
Neural network (NN) ensembles can reduce large prediction variance of NN and improve prediction accuracy.
For highly nonlinear problems with an insufficient data set, the prediction accuracy of NN models becomes unstable.
This study proposes a frequency distribution-based ensemble that identifies core prediction values, which are expected to be concentrated near the true prediction value.
arXiv Detail & Related papers (2022-10-19T08:05:35Z)
- Conformal Prediction Intervals for Markov Decision Process Trajectories [10.68332392039368]
This paper provides conformal prediction intervals over the future behavior of an autonomous system executing a fixed control policy on a Markov Decision Process (MDP).
The method is illustrated on MDPs for invasive species management and StarCraft2 battles.
arXiv Detail & Related papers (2022-06-10T03:43:53Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical and computational trade-offs of different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Interpretable Machines: Constructing Valid Prediction Intervals with Random Forests [0.0]
An important issue when applying Machine Learning algorithms in recent research is their lack of interpretability.
A contribution toward closing this gap for the Random Forest Regression Learner is presented here.
Several parametric and non-parametric prediction intervals are provided for Random Forest point predictions.
A thorough Monte-Carlo simulation study is conducted to evaluate the performance of the proposed methods.
arXiv Detail & Related papers (2021-03-09T23:05:55Z)
- Conformal Prediction Intervals for Neural Networks Using Cross Validation [0.0]
Neural networks are among the most powerful nonlinear models used to address supervised learning problems.
We propose the $k$-fold prediction interval method to construct prediction intervals for neural networks based on $k$-fold cross validation.
arXiv Detail & Related papers (2020-06-30T16:23:28Z)
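The family of cross-validation-based interval constructions the entry above belongs to can be sketched generically. This is not the paper's exact $k$-fold method; it is a minimal assumed sketch that pools out-of-fold absolute residuals and uses their empirical quantile as a symmetric interval half-width. `fit` and `predict` are user-supplied placeholders for any regression model.

```python
import math
import random

def kfold_interval(X, y, fit, predict, x_new, k=5, alpha=0.1):
    """Generic k-fold interval: out-of-fold absolute residuals are
    pooled and their conformal-style (1 - alpha) quantile becomes the
    half-width around a prediction from a model fit on all the data."""
    n = len(y)
    idx = list(range(n))
    random.Random(0).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]
    residuals = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([X[i] for i in train], [y[i] for i in train])
        residuals.extend(abs(y[i] - predict(model, X[i])) for i in fold)
    residuals.sort()
    # conformal-style rank: ceil((1 - alpha) * (m + 1))-th smallest residual
    rank = int(math.ceil((1 - alpha) * (len(residuals) + 1))) - 1
    q = residuals[min(len(residuals) - 1, rank)]
    model = fit(X, y)
    center = predict(model, x_new)
    return center - q, center + q
```

As a smoke test, a constant-target data set with a mean predictor yields zero residuals, so the interval collapses to a point at the mean.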
This list is automatically generated from the titles and abstracts of the papers on this site.