pseudo-Bayesian Neural Networks for detecting Out of Distribution Inputs
- URL: http://arxiv.org/abs/2102.01336v1
- Date: Tue, 2 Feb 2021 06:23:04 GMT
- Title: pseudo-Bayesian Neural Networks for detecting Out of Distribution Inputs
- Authors: Gagandeep Singh, Deepak Mishra
- Abstract summary: We propose pseudo-BNNs where instead of learning distributions over weights, we use point estimates and perturb weights at the time of inference.
Overall, this combination results in a principled technique to detect OOD samples at the time of inference.
- Score: 12.429095025814345
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Conventional Bayesian Neural Networks (BNNs) are known to be capable of
providing multiple outputs for a single input, the variations in which can be
utilised to detect Out of Distribution (OOD) inputs. However, BNNs are difficult
to train due to their sensitivity to the choice of priors. To alleviate this
issue, we propose pseudo-BNNs where instead of learning distributions over
weights, we use point estimates and perturb weights at the time of inference.
We modify the cost function of conventional BNNs and use it to learn parameters
that inject the right amount of random perturbation into each of the weights of
a point-estimate neural network. In order to effectively
segregate OOD inputs from In Distribution (ID) inputs using multiple outputs,
we further propose two measures, derived from the index of dispersion and
entropy of probability distributions, and combine them with the proposed
pseudo-BNNs. Overall, this combination results in a principled technique to
detect OOD samples at the time of inference. We evaluate our technique on a
wide variety of neural network architectures and image classification datasets.
We observe that our method achieves state-of-the-art results, outperforming
related previous work on metrics such as FPR at 95% TPR, AUROC, AUPR, and
Detection Error, while using only 2 to 5 samples of weights per input.
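The inference-time procedure described above is straightforward to sketch: perturb the point-estimate weights with the learned per-weight noise scales, run a few forward passes, and score the spread of the resulting softmax outputs. The following PyTorch snippet is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the perturbation scales (sigmas, one tensor per parameter) have already been learned via the modified cost function, and the exact form of the dispersion and entropy scores is a plausible reading of the abstract.

    import torch
    import torch.nn.functional as F

    def perturbed_forward(model, sigmas, x, n_samples=5):
        # Run n_samples forward passes, each time adding Gaussian noise
        # scaled by the learned sigmas to the point-estimate weights.
        params = list(model.parameters())
        originals = [p.detach().clone() for p in params]
        outputs = []
        with torch.no_grad():
            for _ in range(n_samples):
                for p, p0, s in zip(params, originals, sigmas):
                    p.copy_(p0 + s * torch.randn_like(p0))  # perturb in place
                outputs.append(F.softmax(model(x), dim=-1))
            for p, p0 in zip(params, originals):
                p.copy_(p0)  # restore the point estimates afterwards
        return torch.stack(outputs)  # (n_samples, batch, n_classes)

    def ood_scores(probs):
        # Two spread measures over the sampled softmax outputs: a
        # variance-to-mean ratio (index of dispersion) summed over classes,
        # and the predictive entropy of the mean softmax.
        mean = probs.mean(dim=0).clamp_min(1e-12)
        dispersion = (probs.var(dim=0, unbiased=False) / mean).sum(dim=-1)
        entropy = -(mean * mean.log()).sum(dim=-1)
        return dispersion, entropy  # higher values suggest OOD

An input would then be flagged as OOD when a score exceeds a threshold calibrated on ID validation data. The FPR at 95% TPR and AUROC metrics reported here, and across the related papers below, are sketched after the list.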
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857] (arXiv, 2024-10-24)
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective: employing common corruptions in the input space.
- FlowCon: Out-of-Distribution Detection using Flow-Based Contrastive Learning [0.0] (arXiv, 2024-07-03)
We introduce FlowCon, a new density-based OOD detection technique.
Our main innovation lies in efficiently combining the properties of normalizing flow with supervised contrastive learning.
Empirical evaluation shows the enhanced performance of our method across common vision datasets.
- Model-free Test Time Adaptation for Out-Of-Distribution Detection [62.49795078366206] (arXiv, 2023-11-28)
We propose a non-parametric test-time adaptation framework for out-of-distribution detection.
The framework utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions.
We demonstrate its effectiveness through comprehensive experiments on multiple OOD detection benchmarks.
- Distilling the Unknown to Unveil Certainty [66.29929319664167] (arXiv, 2023-11-14)
Out-of-distribution (OOD) detection is essential in identifying test samples that deviate from the in-distribution (ID) data upon which a standard network is trained.
This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available.
- Out-of-distribution Object Detection through Bayesian Uncertainty Estimation [10.985423935142832] (arXiv, 2023-10-29)
We propose a novel, intuitive, and scalable probabilistic object detection method for OOD detection.
Our method is able to distinguish between in-distribution (ID) data and OOD data via weight parameter sampling from proposed Gaussian distributions.
We demonstrate that our Bayesian object detector can achieve satisfactory OOD identification performance by reducing the FPR95 score by up to 8.19% and increasing the AUROC score by up to 13.94% when trained on BDD100k and VOC datasets.
- WeShort: Out-of-distribution Detection With Weak Shortcut structure [0.0] (arXiv, 2022-06-23)
We propose a simple and effective post-hoc technique, WeShort, to reduce the overconfidence of neural networks on OOD data.
Our method is compatible with different OOD detection scores and can generalize well to different architectures of networks.
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005] (arXiv, 2022-03-15)
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613] (arXiv, 2021-06-08)
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss of prediction accuracy and with close to state-of-the-art OOD detection performance on non-manipulated OOD data.
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286] (arXiv, 2020-12-10)
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997] (arXiv, 2020-03-21)
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
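Since nearly every entry above, like the main paper, reports FPR at 95% TPR and AUROC, here is a short scikit-learn sketch of those two metrics. It assumes higher scores mean "more OOD", with labels of 1 for OOD and 0 for ID; the helper names are illustrative.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    def fpr_at_95_tpr(labels, scores):
        # False positive rate at the point where the true positive
        # rate first reaches 95% (tpr from roc_curve is increasing).
        fpr, tpr, _ = roc_curve(labels, scores)
        return float(np.interp(0.95, tpr, fpr))

    def evaluate(labels, scores):
        # labels: 1 for OOD, 0 for ID; scores: higher means "more OOD".
        return {"AUROC": roc_auc_score(labels, scores),
                "FPR@95TPR": fpr_at_95_tpr(labels, scores)}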