Out-of-distribution detection for regression tasks: parameter versus
predictor entropy
- URL: http://arxiv.org/abs/2010.12995v2
- Date: Mon, 11 Sep 2023 20:13:57 GMT
- Title: Out-of-distribution detection for regression tasks: parameter versus
predictor entropy
- Authors: Yann Pequignot, Mathieu Alain, Patrick Dallaire, Alireza Yeganehparast, Pascal Germain, Josée Desharnais and François Laviolette
- Abstract summary: It is crucial to detect when an instance lies too far from the training samples for the machine learning model to be trusted.
For neural networks, one approach to this task consists of learning a diversity of predictors that all can explain the training data.
We propose a new way of estimating the entropy of a distribution on predictors based on nearest neighbors in function space.
- Score: 2.026281591452464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is crucial to detect when an instance lies too far from the
training samples for the machine learning model to be trusted, a challenge
known as out-of-distribution (OOD) detection. For neural networks, one approach
to this task consists of learning a diversity of predictors that all can
explain the training data. This information can be used to estimate the
epistemic uncertainty at a given newly observed instance in terms of a measure
of the disagreement of the predictions. Evaluation and certification of the
ability of a method to detect OOD require specifying instances which are likely
to occur in deployment yet on which no prediction is available. Focusing on
regression tasks, we choose a simple yet insightful model for this OOD
distribution and conduct an empirical evaluation of the ability of various
methods to discriminate OOD samples from the data. Moreover, we exhibit
evidence that a diversity of parameters may fail to translate to a diversity of
predictors. Based on the choice of an OOD distribution, we propose a new way of
estimating the entropy of a distribution on predictors based on nearest
neighbors in function space. This leads to a variational objective which,
combined with the family of distributions given by a generative neural network,
systematically produces a diversity of predictors that provides a robust way to
detect OOD samples.
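To make the function-space idea concrete, here is a minimal sketch (not the authors' code) of a Kozachenko-Leonenko nearest-neighbor entropy estimate for a distribution on predictors: each sampled predictor is embedded as its vector of outputs on a set of probe inputs (which could be drawn from the chosen OOD distribution), and entropy is estimated from nearest-neighbor distances in that space. All names (`functional_entropy`, `probe_x`) are illustrative.
```python
# Sketch: nearest-neighbor entropy of a distribution over predictors,
# measured in function space rather than parameter space.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.spatial import cKDTree

def functional_entropy(predictors, probe_x):
    """predictors: list of callables mapping (n, p) inputs to (n,) outputs."""
    # Embed each predictor as its output vector on the probe inputs.
    F = np.stack([f(probe_x) for f in predictors])          # (M, n)
    M, d = F.shape
    # Distance from each predictor to its nearest neighbor in function space.
    tree = cKDTree(F)
    rho, _ = tree.query(F, k=2)                             # column 0 is the point itself
    rho = rho[:, 1]
    # Kozachenko-Leonenko estimator with k = 1 nearest neighbor.
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of the unit d-ball
    return digamma(M) - digamma(1) + log_vd + d * np.mean(np.log(rho))

# Hypothetical usage: 50 random polynomial predictors probed on 8 inputs.
rng = np.random.default_rng(0)
preds = [lambda x, w=rng.normal(size=3): np.polyval(w, x[:, 0]) for _ in range(50)]
probe = rng.uniform(-1, 1, size=(8, 1))
print(functional_entropy(preds, probe))
```
A low estimate signals that the sampled predictors nearly coincide as functions, which is exactly the failure mode where parameter diversity does not translate into predictor diversity.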
Related papers
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
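The paper's exact sampling strategy is not spelled out in this summary; as a generic illustration of inference-time sampling, the sketch below uses Monte Carlo dropout, one standard post-hoc way to draw multiple plausible outputs from a single trained network.
```python
# Sketch: keep dropout stochastic at test time and draw several forward
# passes; the sample spread is a nonparametric uncertainty summary.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))

def sample_predictions(model, x, n_samples=100):
    model.train()  # leave dropout active during inference
    with torch.no_grad():
        return torch.stack([model(x) for _ in range(n_samples)])

x = torch.randn(8, 4)
samples = sample_predictions(model, x)           # (100, 8, 1)
mean, spread = samples.mean(0), samples.std(0)   # no parametric form assumed
```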
- Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is a lack of generalization.
We propose novel methods that estimate uncertainty of particular predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
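A hedged sketch of the general idea (the paper's estimator may differ in detail): calibrate per-class distributions of the winning logit on held-out validation data, then rate a new prediction by where its logit falls in that distribution. Function names are illustrative.
```python
# Sketch: confidence from the empirical distribution of logit values.
import numpy as np

def fit_logit_reference(val_logits, val_preds, n_classes):
    # Per-class sorted distribution of the winning logit on validation data.
    return [np.sort(val_logits[val_preds == c, c]) for c in range(n_classes)]

def logit_confidence(logits, reference):
    c = int(np.argmax(logits))
    ref = reference[c]
    # Fraction of validation logits below the observed one: a low value
    # means an unusually weak winning logit, hence low confidence.
    return np.searchsorted(ref, logits[c]) / max(len(ref), 1)
```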
- Addressing Randomness in Evaluation Protocols for Out-of-Distribution Detection [1.8047694351309207]
Deep Neural Networks for classification behave unpredictably when confronted with inputs not stemming from the training distribution.
We show that current protocols may fail to provide reliable estimates of the expected performance of OOD methods.
We propose to estimate the performance of OOD methods using a Monte Carlo approach that addresses the randomness.
arXiv Detail & Related papers (2022-03-01T12:06:44Z)
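A minimal sketch of the Monte Carlo evaluation idea: repeat training and OOD scoring across many random seeds and report the spread, rather than a single-run number. `train_detector` and the data arguments are placeholders.
```python
# Sketch: Monte Carlo estimate of expected OOD-detection performance.
import numpy as np
from sklearn.metrics import roc_auc_score

def mc_evaluate(train_detector, id_data, ood_data, n_runs=20):
    aurocs = []
    for seed in range(n_runs):
        detector = train_detector(id_data, seed=seed)  # retrain with fresh randomness
        scores = np.concatenate([detector(id_data), detector(ood_data)])
        labels = np.concatenate([np.zeros(len(id_data)), np.ones(len(ood_data))])
        aurocs.append(roc_auc_score(labels, scores))
    return np.mean(aurocs), np.std(aurocs)  # expected performance and its variability
```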
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
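A toy sketch of the Nadaraya-Watson estimate at the core of this approach (the paper builds a principled uncertainty measure on top of it): kernel-weight the training labels around a query point to estimate the conditional label distribution.
```python
# Sketch: Nadaraya-Watson estimate of p(y|x) and a sparsity signal.
import numpy as np

def nw_label_distribution(x, train_x, train_y, n_classes, bandwidth=1.0):
    d2 = np.sum((train_x - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))          # Gaussian kernel weights
    p = np.array([w[train_y == c].sum() for c in range(n_classes)])
    total = p.sum()
    if total < 1e-12:                               # no nearby training mass
        return np.full(n_classes, 1.0 / n_classes), 0.0
    return p / total, total                         # label distribution, kernel mass
```
The entropy of the estimated label distribution quantifies uncertainty, while a tiny total kernel mass flags queries far from the training data.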
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
Training data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution.
When some of the test samples are drawn from a distribution far away from that of the training samples, the trained neural network tends to make high-confidence predictions on these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
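A simplified illustration of a Wasserstein-based score (WOOD's actual method also modifies training): the distance from the softmax output to its nearest one-hot distribution. Treating class indices as points on a line, so that SciPy's 1-D distance applies, is a simplification made here for brevity.
```python
# Sketch: Wasserstein distance between a softmax output and one-hot targets.
import numpy as np
from scipy.stats import wasserstein_distance

def wood_style_score(softmax_probs):
    k = len(softmax_probs)
    support = np.arange(k)  # simplification: classes as points on a line
    dists = [
        wasserstein_distance(support, support, softmax_probs, np.eye(k)[c])
        for c in range(k)
    ]
    return min(dists)  # far from every one-hot distribution suggests OOD

print(wood_style_score(np.array([0.9, 0.05, 0.05])))   # confident: small score
print(wood_style_score(np.array([0.34, 0.33, 0.33])))  # diffuse: larger score
```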
- Personalized Trajectory Prediction via Distribution Discrimination [78.69458579657189]
Trajectory prediction is confronted with the dilemma of capturing the multi-modal nature of future dynamics.
We present a distribution discrimination (DisDis) method to predict personalized motion patterns.
Our method can be integrated with existing multi-modal predictive models as a plug-and-play module.
arXiv Detail & Related papers (2021-07-29T17:42:12Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
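For contrast, here is the plain likelihood-thresholding baseline that such detection metrics improve upon, with a Gaussian mixture standing in for a deep generative model; the paper's proposed metric is more robust than raw likelihood, which is known to misbehave on OOD data.
```python
# Sketch: baseline OOD detection by thresholding generative log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_likelihood_detector(train_x, quantile=0.01, n_components=10):
    gm = GaussianMixture(n_components=n_components, random_state=0).fit(train_x)
    # Flag anything less likely than the lowest-likelihood training tail.
    threshold = np.quantile(gm.score_samples(train_x), quantile)
    return lambda x: gm.score_samples(x) < threshold  # True means flagged as OOD
```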
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
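One common concrete instance of scoring with the distribution of latent representations (not necessarily this paper's exact estimator) is fitting a Gaussian to penultimate-layer activations and using Mahalanobis distance as the score.
```python
# Sketch: Mahalanobis distance in activation space as an OOD score.
import numpy as np

def fit_activation_gaussian(train_feats):
    """train_feats: (n, d) penultimate-layer activations of the training set."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    prec = np.linalg.inv(cov)
    def score(feats):  # (m, d) -> (m,); larger = farther from training activations
        diff = feats - mu
        return np.einsum("ij,jk,ik->i", diff, prec, diff)
    return score
```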
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts [33.45069308137142]
Posterior Network (PostNet) predicts an individual closed-form posterior distribution over predicted probabilities for any input sample.
PostNet achieves state-of-the-art results in OOD detection and in uncertainty calibration under dataset shifts.
arXiv Detail & Related papers (2020-06-16T15:16:32Z)
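A toy version of density-based pseudo-counts in the spirit of PostNet, with a class-conditional Gaussian standing in for the paper's normalizing flows over learned latents; `dirichlet_posterior` and its arguments are illustrative.
```python
# Sketch: class-conditional density times class count gives pseudo-counts,
# which parameterize a closed-form Dirichlet posterior over class probabilities.
import numpy as np
from scipy.stats import multivariate_normal

def dirichlet_posterior(z, class_latents, prior=1.0):
    alphas = []
    for latents in class_latents:  # one (n_c, d) array of latents per class
        cov = np.cov(latents, rowvar=False) + 1e-6 * np.eye(latents.shape[1])
        density = multivariate_normal(latents.mean(axis=0), cov).pdf(z)
        alphas.append(prior + len(latents) * density)  # pseudo-count N_c * p(z|c)
    alpha = np.asarray(alphas)
    return alpha / alpha.sum()  # Dirichlet mean; alpha.sum() tracks total evidence
```
Far from all class densities, the pseudo-counts vanish and the posterior collapses to the flat prior, which is what makes the scheme useful for OOD detection.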
- Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
arXiv Detail & Related papers (2020-06-08T07:01:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.