PIVEN: A Deep Neural Network for Prediction Intervals with Specific Value Prediction
- URL: http://arxiv.org/abs/2006.05139v3
- Date: Sun, 20 Jun 2021 13:23:00 GMT
- Title: PIVEN: A Deep Neural Network for Prediction Intervals with Specific Value Prediction
- Authors: Eli Simhayev, Gilad Katz, Lior Rokach
- Abstract summary: We present PIVEN, a deep neural network for producing both a PI and a value prediction.
Our approach makes no assumptions regarding data distribution within the PI, making its value prediction more effective for various real-world problems.
- Score: 14.635820704895034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Improving the robustness of neural nets in regression tasks is key to their
application in multiple domains. Deep learning-based approaches aim to achieve
this goal either by improving their prediction of specific values (i.e., point
prediction), or by producing prediction intervals (PIs) that quantify
uncertainty. We present PIVEN, a deep neural network for producing both a PI
and a value prediction. Our loss function expresses the value prediction as a
function of the upper and lower bounds, thus ensuring that it falls within the
interval without increasing model complexity. Moreover, our approach makes no
assumptions regarding data distribution within the PI, making its value
prediction more effective for various real-world problems. Experiments and
ablation tests on known benchmarks show that our approach produces tighter
uncertainty bounds than the current state-of-the-art approaches for producing
PIs, while maintaining comparable performance to the state-of-the-art approach
for value prediction. Additionally, we go beyond previous work and include
large image datasets in our evaluation, where PIVEN is combined with modern
neural nets.
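
The abstract's key mechanism is that the value prediction is expressed as a function of the upper and lower bounds, so it lies inside the interval by construction. Below is a minimal PyTorch sketch of such a PIVEN-style head: it emits an upper bound, a lower bound, and a mixing coefficient v in (0, 1), and the point prediction is a convex combination of the two bounds. The class and function names are ours, and the specific interval term (a Pearce-style quality-driven loss combining coverage and width) and the beta weighting are assumptions about the general formulation, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PIVENHead(nn.Module):
    """Head that outputs a prediction interval and a point value inside it."""

    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        self.bounds = nn.Linear(hidden, 2)  # upper and lower interval bounds
        self.mix = nn.Linear(hidden, 1)     # auxiliary coefficient v in (0, 1)

    def forward(self, x):
        h = self.body(x)
        upper, lower = self.bounds(h).chunk(2, dim=-1)
        v = torch.sigmoid(self.mix(h))
        # Convex combination of the bounds: the point prediction is
        # guaranteed to fall inside the interval by construction.
        point = v * upper + (1.0 - v) * lower
        return upper, lower, point


def piven_style_loss(upper, lower, point, y,
                     alpha=0.05, lam=15.0, beta=0.5, softness=160.0):
    """Quality-driven PI loss plus a point-prediction term (illustrative)."""
    # Soft indicator of whether each target is captured by its interval.
    captured = torch.sigmoid(softness * (upper - y)) * torch.sigmoid(softness * (y - lower))
    picp = captured.mean()                                              # coverage probability
    mpiw = ((upper - lower) * captured).sum() / (captured.sum() + 1e-6)  # width of captured PIs
    n = y.shape[0]
    coverage_penalty = lam * n / (alpha * (1.0 - alpha)) * F.relu((1.0 - alpha) - picp) ** 2
    pi_loss = mpiw + coverage_penalty
    value_loss = F.mse_loss(point, y)
    return beta * pi_loss + (1.0 - beta) * value_loss
```

In this sketch the targets y and the head outputs are assumed to share shape (batch, 1). Nothing hard-constrains upper >= lower here; the coverage and width terms are expected to push the bounds apart during training.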
Related papers
- Scalable Subsampling Inference for Deep Neural Networks [0.0]
A non-asymptotic error bound has been developed to measure the performance of the fully connected DNN estimator.
A non-random subsampling technique, scalable subsampling, is applied to construct a 'subagged' DNN estimator.
The proposed confidence/prediction intervals appear to work well in finite samples.
arXiv Detail & Related papers (2024-05-14T02:11:38Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions are unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Dual Accuracy-Quality-Driven Neural Network for Prediction Interval Generation [0.0]
We present a method to learn prediction intervals for regression-based neural networks automatically.
Our main contribution is the design of a novel loss function for the PI-generation network.
Experiments using a synthetic dataset, eight benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage (standard coverage and interval-width metrics are sketched after this list).
arXiv Detail & Related papers (2022-12-13T05:03:16Z)
- Confidence-Nets: A Step Towards better Prediction Intervals for regression Neural Networks on small datasets [0.0]
We propose an ensemble method that attempts to estimate the uncertainty of predictions, increase their accuracy and provide an interval for the expected variation.
The proposed method is tested on various datasets, and a significant improvement in the performance of the neural network model is seen.
arXiv Detail & Related papers (2022-10-31T06:38:40Z)
- Probabilistic AutoRegressive Neural Networks for Accurate Long-range Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z)
- Robust and Adaptive Temporal-Difference Learning Using An Ensemble of Gaussian Processes [70.80716221080118]
The paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning.
The OS-GPTD approach is developed to estimate the value function for a given policy by observing a sequence of state-reward pairs.
To alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme.
arXiv Detail & Related papers (2021-12-01T23:15:09Z)
- Uncertainty-Aware Time-to-Event Prediction using Deep Kernel Accelerated Failure Time Models [11.171712535005357]
We propose Deep Kernel Accelerated Failure Time models for the time-to-event prediction task.
Our model shows better point estimate performance than recurrent neural network based baselines in experiments on two real-world datasets.
arXiv Detail & Related papers (2021-07-26T14:55:02Z)
- When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z)
- Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectory in crowds.
We learn interpretable rule-based intents, and then utilise the expressibility of neural networks to model the scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
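
The evaluation claims above, tighter uncertainty bounds at comparable coverage in the PIVEN abstract and the nominal probability coverage mentioned in the Dual Accuracy-Quality-Driven entry, are conventionally scored with PI coverage probability (PICP) and mean PI width (MPIW). A minimal NumPy sketch of these standard metrics follows; the function names are ours.

```python
import numpy as np


def picp(lower, upper, y):
    """PI coverage probability: fraction of targets that fall inside their interval."""
    return float(np.mean((y >= lower) & (y <= upper)))


def mpiw(lower, upper):
    """Mean PI width: average interval width (smaller is tighter)."""
    return float(np.mean(upper - lower))
```

A PI method is typically judged by whether PICP reaches the nominal level (e.g. 0.95 for alpha = 0.05) while keeping MPIW as small as possible.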