Can a single neuron learn predictive uncertainty?
- URL: http://arxiv.org/abs/2106.03702v3
- Date: Thu, 20 Apr 2023 12:59:45 GMT
- Title: Can a single neuron learn predictive uncertainty?
- Authors: Edgardo Solano-Carrillo
- Abstract summary: We introduce a novel non-parametric quantile estimation method for continuous random variables based on the simplest neural network architecture with one degree of freedom: a single neuron.
In real-world applications, the method can be used to quantify predictive uncertainty under the split conformal prediction setting.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Uncertainty estimation methods based on deep learning struggle to
separate how uncertain the state of the world appears to us through measurement
(the objective end) from the way this gets entangled with the model
specification and training procedure used to predict that state (the subjective
means) -- e.g., the number of neurons, depth, connections, priors (if the model
is Bayesian), weight initialization, etc. This raises the question of the
extent to which one can eliminate the degrees of freedom associated with these
specifications and still capture the objective end. Here, a novel
non-parametric quantile estimation method for continuous random variables is
introduced, based on the simplest neural network architecture with one degree
of freedom: a single neuron. Its advantage is first shown in synthetic
experiments comparing it with the quantile estimation achieved from ranking the
order statistics (specifically for small sample sizes) and with quantile
regression. In real-world applications, the method can be used to quantify
predictive uncertainty in the split conformal prediction setting, whereby
prediction intervals are estimated from the residuals of a pre-trained model on
a held-out validation set and then used to quantify the uncertainty in future
predictions -- the single neuron acting as a structureless "thermometer" that
measures how uncertain the pre-trained model is. Benchmark regression and
classification experiments demonstrate that the method is competitive in
quality and coverage with state-of-the-art solutions, with the added benefit of
being more computationally efficient.
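As a concrete illustration of the pipeline the abstract describes, the sketch below fits the (1 - alpha)-quantile of held-out residuals with a single trainable parameter, using stochastic gradient descent on the standard pinball (quantile) loss, and then uses that quantile as the half-width of split conformal prediction intervals. The pinball loss and update rule are standard choices, not necessarily the paper's exact estimator, so treat this as a minimal sketch.

```python
import numpy as np

def fit_quantile_neuron(residuals, alpha=0.1, lr=0.05, epochs=200, seed=0):
    """Fit a single scalar q to the (1 - alpha)-quantile of `residuals` by
    SGD on the pinball loss L(r, q) = max(tau * (r - q), (tau - 1) * (r - q)),
    tau = 1 - alpha. One degree of freedom: the scalar q."""
    rng = np.random.default_rng(seed)
    tau = 1.0 - alpha
    q = float(np.mean(residuals))            # start at the mean residual
    for epoch in range(epochs):
        step = lr / (1.0 + 0.1 * epoch)      # decaying step size
        for r in rng.permutation(residuals):
            grad = -tau if r > q else (1.0 - tau)   # pinball subgradient
            q -= step * grad
    return q

# Split conformal: residuals of a pre-trained model on a held-out
# validation set calibrate the interval half-width.
rng = np.random.default_rng(1)
y_val = rng.normal(size=500)
y_val_pred = y_val + rng.normal(scale=0.3, size=500)  # stand-in for a model
residuals = np.abs(y_val - y_val_pred)

q_hat = fit_quantile_neuron(residuals, alpha=0.1)     # ~90% target coverage
y_new_pred = 0.42                                     # a future prediction
print(f"interval: [{y_new_pred - q_hat:.3f}, {y_new_pred + q_hat:.3f}]")
```

With exchangeable residuals, intervals of the form prediction +/- q_hat target roughly 1 - alpha coverage, which is the "thermometer" role the abstract assigns to the single neuron.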
Related papers
- Awareness of uncertainty in classification using a multivariate model and multi-views
The proposed model regularizes uncertain predictions and is trained to produce both predictions and their uncertainty estimates.
Given the multi-view predictions together with their uncertainties and confidences, several methods are proposed to calculate the final predictions (one plausible fusion rule is sketched below).
The proposed methodology was tested on the CIFAR-10 dataset with clean and noisy labels.
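The summary does not spell out the fusion rules; a minimal sketch of one plausible rule, weighting each view's class probabilities inversely to its reported uncertainty, follows. This is an illustrative assumption, not necessarily one of the paper's methods.

```python
import numpy as np

def fuse_views(probs, uncertainties):
    """probs: (n_views, n_classes) per-view class probabilities;
    uncertainties: (n_views,) per-view uncertainty estimates.
    Weight each view inversely to its uncertainty, then renormalize.
    One plausible fusion rule among the 'several methods' mentioned."""
    probs = np.asarray(probs, dtype=float)
    w = 1.0 / (np.asarray(uncertainties, dtype=float) + 1e-12)
    fused = (w[:, None] * probs).sum(axis=0)
    return fused / fused.sum()

views = [[0.7, 0.2, 0.1], [0.5, 0.4, 0.1], [0.6, 0.3, 0.1]]
print(fuse_views(views, uncertainties=[0.1, 0.5, 0.2]))
```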
arXiv Detail & Related papers (2024-04-16T06:40:51Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of the calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
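A minimal sketch of the general idea, assuming a soft-binning relaxation: the hard bin membership in the usual expected-calibration-error formula is replaced by softmax weights so the penalty admits gradients and can be added to the training objective. The paper's relaxation targets coverage of posterior credible regions in simulation-based inference, so the version below is only illustrative.

```python
import torch

def soft_ece(confidences, correct, n_bins=10, temperature=0.01):
    """Relaxed expected calibration error: soft (softmax) bin assignments
    replace hard binning, making the term differentiable end to end.
    `correct` is a float tensor of 0/1 outcomes."""
    n = confidences.shape[0]
    centers = torch.linspace(0.5 / n_bins, 1.0 - 0.5 / n_bins, n_bins)
    # Soft assignment of each confidence to each bin center
    logits = -(confidences[:, None] - centers[None, :]) ** 2 / temperature
    w = torch.softmax(logits, dim=1)                      # (n, n_bins)
    mass = w.sum(dim=0) + 1e-12
    bin_conf = (w * confidences[:, None]).sum(dim=0) / mass
    bin_acc = (w * correct[:, None]).sum(dim=0) / mass
    return ((mass / n) * (bin_acc - bin_conf).abs()).sum()

# In training: loss = task_loss + lam * soft_ece(conf, correct.float())
```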
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Selective Nonparametric Regression via Testing
We develop an abstention procedure by testing a hypothesis about the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
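Schematically, the abstention rule can account for the variance predictor's own uncertainty by testing against a confidence bound on the variance rather than its point estimate; the sketch below uses a simple one-sided bound as a stand-in for the paper's hypothesis test.

```python
import numpy as np

def abstain(var_hat, var_std, tau, z=1.645):
    """Abstain when even an optimistic (lower-bound) estimate of the
    conditional variance exceeds the tolerance tau. var_hat: predicted
    variance; var_std: standard error of that estimate (e.g. across an
    ensemble of variance predictors). Roughly a 95% one-sided test; a
    conservative variant would threshold the upper bound instead."""
    lower_bound = var_hat - z * var_std
    return lower_bound > tau        # True -> reject the point (abstain)

print(abstain(var_hat=0.30, var_std=0.05, tau=0.20))  # True: abstain
```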
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Last layer state space model for representation learning and uncertainty quantification
We propose to decompose a classification or regression task into two steps: a representation learning stage that learns low-dimensional states, and a state space model for uncertainty estimation.
We demonstrate how predictive distributions can be estimated on top of an existing and trained neural network, by adding a state space-based last layer.
Our model accounts for the noisy data structure, due to unknown or unavailable variables, and is able to provide confidence intervals on predictions.
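A heavily simplified instance of an uncertainty-aware last layer on a frozen network, as a sketch: treat the last-layer weights as the latent state and update them with Kalman-style recursive Bayesian linear regression over the penultimate features, yielding predictive means and variances for confidence intervals. The paper's state space model is more general; the noise levels below are placeholders.

```python
import numpy as np

class BayesianLastLayer:
    """Recursive Bayesian linear regression over frozen features phi(x).
    State = last-layer weights; Kalman-style updates give a predictive
    mean and variance, hence confidence intervals. Simplified sketch."""
    def __init__(self, dim, prior_var=10.0, noise_var=0.1):
        self.mu = np.zeros(dim)                # posterior mean of weights
        self.Sigma = prior_var * np.eye(dim)   # posterior covariance
        self.noise_var = noise_var             # observation noise

    def update(self, phi, y):
        s = phi @ self.Sigma @ phi + self.noise_var
        k = (self.Sigma @ phi) / s             # Kalman gain
        self.mu += k * (y - phi @ self.mu)
        self.Sigma -= np.outer(k, phi @ self.Sigma)

    def predict(self, phi):
        mean = phi @ self.mu
        var = phi @ self.Sigma @ phi + self.noise_var
        return mean, var                       # interval: mean +/- 1.96*sqrt(var)

# phi = features from a trained, frozen network (hypothetical extractor);
# call update(phi, y) over held-out data, then predict(phi_new).
```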
arXiv Detail & Related papers (2023-07-04T08:37:37Z)
- The Implicit Delta Method
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss in order to assess the uncertainty of a downstream evaluation.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Confidence estimation of classification based on the distribution of the neural network output layer
One of the most common problems preventing the application of prediction models in the real world is a lack of generalization.
We propose novel methods that estimate the uncertainty of particular predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
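A sketch of the general recipe, under illustrative Gaussian assumptions: record the distribution of the winning logit per predicted class on validation data, then score a new prediction by how typical its logit is under that distribution.

```python
import numpy as np

def fit_logit_stats(val_logits, val_preds, n_classes):
    """Per predicted class, record mean/std of the winning logit on
    validation data (Gaussian fit is an illustrative choice; assumes
    every class appears among the validation predictions)."""
    top = val_logits[np.arange(len(val_preds)), val_preds]
    return {c: (top[val_preds == c].mean(), top[val_preds == c].std() + 1e-12)
            for c in range(n_classes)}

def confidence(logits, stats):
    """Map a new sample's winning logit to (0, 1) via its z-score:
    logits far below what the class usually produces -> low confidence."""
    c = int(np.argmax(logits))
    mu, sd = stats[c]
    z = (logits[c] - mu) / sd
    return c, 1.0 / (1.0 + np.exp(-z))
```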
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
- Conformal prediction for the design problem
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks
We show a principled way to measure the uncertainty of a classifier's predictions, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
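The core estimate is classical and easy to sketch: a Nadaraya-Watson kernel estimate of the conditional label distribution, whose spread (here, entropy) serves as an uncertainty score. Kernel and bandwidth below are illustrative.

```python
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted class
    frequencies of the training points around x (RBF kernel)."""
    d2 = ((X_train - x) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))
    p = np.array([k[y_train == c].sum() for c in range(n_classes)])
    return p / (p.sum() + 1e-12)

def predictive_entropy(p):
    """Entropy of the estimated label distribution as an uncertainty score."""
    return float(-(p * np.log(p + 1e-12)).sum())

X = np.random.default_rng(0).normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
p = nw_label_distribution(np.array([0.05, 0.0]), X, y, n_classes=2)
print(p, predictive_entropy(p))   # near the decision boundary -> high entropy
```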
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
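Of the two families, the ensemble-based one is the easiest to sketch: train several independently initialized models and read uncertainty from the spread of their predictions. Models and sizes below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)

# Deep-ensemble style: same data, different random initializations
ensemble = [MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                         random_state=s).fit(X, y) for s in range(5)]

X_test = np.linspace(-4, 4, 9).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in ensemble])   # (5, n_test)
mean, std = preds.mean(axis=0), preds.std(axis=0)
# std grows outside the training range: disagreement flags extrapolation
print(np.round(mean, 2), np.round(std, 2))
```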
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.