Estimating Uncertainty Intervals from Collaborating Networks
- URL: http://arxiv.org/abs/2002.05212v3
- Date: Fri, 12 Nov 2021 19:21:53 GMT
- Title: Estimating Uncertainty Intervals from Collaborating Networks
- Authors: Tianhui Zhou, Yitong Li, Yuan Wu, David Carlson
- Abstract summary: We propose a novel method to capture predictive distributions in regression by defining two neural networks with two distinct loss functions.
Specifically, one network approximates the cumulative distribution function, and the second network approximates its inverse.
We benchmark CN against several common approaches on two synthetic and six real-world datasets, including forecasting A1c values in diabetic patients from electronic health records.
- Score: 15.467208581231848
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Effective decision making requires understanding the uncertainty inherent in
a prediction. In regression, this uncertainty can be estimated by a variety of
methods; however, many of these methods are laborious to tune, generate
overconfident uncertainty intervals, or lack sharpness (give imprecise
intervals). We address these challenges by proposing a novel method to capture
predictive distributions in regression by defining two neural networks with two
distinct loss functions. Specifically, one network approximates the cumulative
distribution function, and the second network approximates its inverse. We
refer to this method as Collaborating Networks (CN). Theoretical analysis
demonstrates that a fixed point of the optimization is at the idealized
solution, and that the method is asymptotically consistent to the ground truth
distribution. Empirically, learning is straightforward and robust. We benchmark
CN against several common approaches on two synthetic and six real-world
datasets, including forecasting A1c values in diabetic patients from electronic
health records, where uncertainty is critical. In the synthetic data, the
proposed approach essentially matches ground truth. In the real-world datasets,
CN improves results on many performance metrics, including log-likelihood
estimates, mean absolute errors, coverage estimates, and prediction interval
widths.
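The abstract only states the structure of the method (one network approximating the conditional CDF, a second network approximating its inverse, each trained with its own loss). The sketch below is a minimal PyTorch illustration of that two-network structure with generic stand-in losses, a binary cross-entropy against the indicator 1{y <= t} for the CDF network and the pinball (quantile) loss for the inverse-CDF network; these are not the paper's coupled CN objectives, and all names and hyperparameters are placeholders.

```python
# Minimal sketch of the two-network structure described in the abstract.
# g(y, x) -> [0, 1] approximates the conditional CDF; f(q, x) -> R approximates
# its inverse (the quantile function). The two losses below are generic
# stand-ins, NOT the CN losses from the paper.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

class CDFNet(nn.Module):
    """g(y, x): estimate of the conditional CDF P(Y <= y | x)."""
    def __init__(self, x_dim):
        super().__init__()
        self.net = mlp(x_dim + 1, 1)
    def forward(self, y, x):                      # y: (n, 1), x: (n, x_dim)
        return torch.sigmoid(self.net(torch.cat([y, x], dim=-1)))

class QuantileNet(nn.Module):
    """f(q, x): estimate of the inverse CDF (the q-th conditional quantile)."""
    def __init__(self, x_dim):
        super().__init__()
        self.net = mlp(x_dim + 1, 1)
    def forward(self, q, x):                      # q: (n, 1), x: (n, x_dim)
        return self.net(torch.cat([q, x], dim=-1))

def train_step(g, f, opt_g, opt_f, x, y):
    """One update per network with generic stand-in losses (not the CN objectives)."""
    n = x.shape[0]
    # CDF network: binary cross-entropy against the indicator 1{y <= t}
    # at stand-in thresholds t (here, permuted targets).
    t = y[torch.randperm(n)].detach()
    loss_g = nn.functional.binary_cross_entropy(g(t, x), (y <= t).float())
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # Inverse-CDF network: pinball (quantile) loss at levels q ~ U(0, 1).
    q = torch.rand(n, 1)
    diff = y - f(q, x)
    loss_f = torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    return loss_g.item(), loss_f.item()
```

Given a trained inverse-CDF network, a 95% prediction interval for a new input x can be read off as [f(0.025, x), f(0.975, x)]; in the paper the two losses are coupled so that the networks train collaboratively, which the decoupled stand-ins above do not capture.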
Related papers
- SEF: A Method for Computing Prediction Intervals by Shifting the Error Function in Neural Networks [0.0]
The SEF (Shifting the Error Function) method presented in this paper is a new method for computing prediction intervals with neural networks.
The proposed approach involves training a single neural network three times, thus generating an estimate along with the corresponding upper and lower bounds for a given problem.
This innovative process effectively produces PIs, resulting in a robust and efficient technique for uncertainty quantification.
arXiv Detail & Related papers (2024-09-08T19:46:45Z)
- Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework [8.572441599469597]
We study high-confidence off-policy evaluation in the context of infinite-horizon Markov decision processes.
The objective is to establish a confidence interval (CI) for the target policy value using only offline data pre-collected from unknown behavior policies.
We show that our algorithm is sample-efficient, error-robust, and provably convergent even in non-linear function approximation settings.
arXiv Detail & Related papers (2023-09-23T06:35:44Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
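As a point of reference, below is a minimal NumPy sketch of the Nadaraya-Watson estimate of the conditional label distribution mentioned above; the Gaussian kernel, bandwidth, and use of raw features are illustrative assumptions, and the uncertainty measures that NUQ builds on top of this estimate are not reproduced.

```python
# Nadaraya-Watson estimate of p(y = c | x): kernel-weighted class frequencies
# around the query point. Kernel choice and bandwidth are placeholders.
import numpy as np

def nw_class_probs(x_query, X_train, y_train, n_classes, bandwidth=1.0):
    # Gaussian kernel weights between the query and every training point
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Weighted vote per class, normalised to a probability vector
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    total = probs.sum()
    return probs / total if total > 0 else np.full(n_classes, 1.0 / n_classes)
```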
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
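The following is a generic illustration of the entropy-raising step described above: mixing an overconfident (low-entropy) predictive distribution with the label prior increases its entropy. How the method identifies unjustifiably overconfident regions and chooses the mixing weight is part of the paper and is not shown; `alpha` and the example numbers are placeholders.

```python
# Interpolate a predicted class distribution toward the label prior, which
# raises the entropy of an overconfident prediction. `alpha` is a placeholder.
import numpy as np

def raise_entropy(pred_probs, label_prior, alpha):
    """Mix predicted class probabilities with the label prior (0 <= alpha <= 1)."""
    mixed = (1.0 - alpha) * pred_probs + alpha * label_prior
    return mixed / mixed.sum(axis=-1, keepdims=True)  # renormalise for safety

pred = np.array([0.97, 0.02, 0.01])   # overconfident prediction
prior = np.array([0.5, 0.3, 0.2])     # empirical label prior
print(raise_entropy(pred, prior, alpha=0.5))
```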
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration [2.8935588665357077]
We propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift.
We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions.
arXiv Detail & Related papers (2020-12-20T13:39:29Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network, and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
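For context, a minimal sketch of the parametrization choice studied above is given below: a one-vs-all head uses an independent sigmoid per class instead of a single softmax over classes. The paper's exact formulations and training details are not reproduced; the head and loss here are generic.

```python
# One-vs-all parametrization of class probabilities: an independent sigmoid per
# class, trained as one binary problem per class (contrast with a softmax head).
import torch
import torch.nn as nn

class OneVsAllHead(nn.Module):
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.logits = nn.Linear(feat_dim, n_classes)
    def forward(self, features):
        # Each class gets its own probability in [0, 1]; outputs are not
        # constrained to sum to one, unlike softmax.
        return torch.sigmoid(self.logits(features))

def one_vs_all_loss(probs, targets):
    # Binary cross-entropy against one-hot targets
    one_hot = nn.functional.one_hot(targets, probs.shape[-1]).float()
    return nn.functional.binary_cross_entropy(probs, one_hot)
```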
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
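An illustrative sketch of the general idea behind this last entry follows: class scores come from an RBF kernel between a deterministic embedding and per-class centroids, so uncertainty is available from a single forward pass, and centroids are refreshed from each batch. The kernel, length scale, exponential-moving-average update, and the omitted training loss are assumptions for illustration, not the paper's exact formulation.

```python
# Distance-based uncertainty with class centroids: an RBF kernel between a
# deterministic embedding and per-class centroids gives class scores in one
# forward pass; a low maximum score flags out-of-distribution inputs.
import torch

def rbf_scores(embeddings, centroids, length_scale=0.1):
    # embeddings: (batch, d); centroids: (n_classes, d)
    d2 = torch.cdist(embeddings, centroids) ** 2           # squared distances
    return torch.exp(-d2 / (2.0 * length_scale ** 2))      # (batch, n_classes)

def update_centroids(centroids, embeddings, targets, momentum=0.99):
    # Exponential-moving-average update of each class centroid from its batch members
    new_centroids = centroids.clone()
    for c in torch.unique(targets):
        members = embeddings[targets == c]
        new_centroids[c] = momentum * centroids[c] + (1 - momentum) * members.mean(dim=0)
    return new_centroids

def uncertainty(embeddings, centroids):
    # Confidence is the highest kernel score; uncertainty is its complement
    return 1.0 - rbf_scores(embeddings, centroids).max(dim=1).values
```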