Towards In-distribution Compatibility in Out-of-distribution Detection
- URL: http://arxiv.org/abs/2208.13433v1
- Date: Mon, 29 Aug 2022 09:06:15 GMT
- Title: Towards In-distribution Compatibility in Out-of-distribution Detection
- Authors: Boxi Wu, Jie Jiang, Haidong Ren, Zifan Du, Wenxiao Wang, Zhifeng Li,
Deng Cai, Xiaofei He, Binbin Lin, Wei Liu
- Abstract summary: We propose a new out-of-distribution detection method that adapts both the top-design of deep models and the loss function.
Our method not only achieves state-of-the-art out-of-distribution detection performance but also improves in-distribution accuracy.
- Score: 30.49191281345763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks, despite their remarkable capability of discriminating
targeted in-distribution samples, perform poorly at detecting anomalous
out-of-distribution data. To address this defect, state-of-the-art solutions
choose to train deep networks on an auxiliary dataset of outliers. Various
training criteria for these auxiliary outliers are proposed based on heuristic
intuitions. However, we find that these intuitively designed outlier training
criteria can hurt in-distribution learning and eventually lead to inferior
performance. To this end, we identify three causes of the in-distribution
incompatibility: contradictory gradient, false likelihood, and distribution
shift. Based on our new understandings, we propose a new out-of-distribution
detection method by adapting both the top-design of deep models and the loss
function. Our method achieves in-distribution compatibility by pursuing less
interference with the probabilistic characteristic of in-distribution features.
On several benchmarks, our method not only achieves the state-of-the-art
out-of-distribution detection performance but also improves the in-distribution
accuracy.
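For context, a common instance of the "training criteria for auxiliary outliers" the abstract refers to is outlier exposure: the network is trained with standard cross-entropy on in-distribution data plus a term pushing its softmax output on auxiliary outliers toward the uniform distribution. The sketch below illustrates that baseline criterion only, not the paper's proposed method; the weighting `lam` is an illustrative hyperparameter.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Cross-entropy on in-distribution samples plus cross-entropy to the
    uniform distribution on auxiliary outlier samples."""
    p_in = softmax(logits_in)
    n = logits_in.shape[0]
    ce_in = -np.log(p_in[np.arange(n), labels_in]).mean()
    # Cross-entropy to the uniform target: -(1/k) * sum_j log p_j per sample.
    p_out = softmax(logits_out)
    ce_out = (-np.log(p_out)).mean(axis=1).mean()
    return ce_in + lam * ce_out
```

The outlier term is minimized (at log k for k classes) when the model is maximally uncertain on outliers; the abstract's point is that naively adding such terms can interfere with in-distribution learning.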
Related papers
- Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach [14.958884168060097]
We present a novel approach for test-time adaptation via online self-training.
Our approach combines concepts in betting martingales and online learning to form a detection tool capable of reacting to distribution shifts.
Experimental results demonstrate that our approach improves test-time accuracy under distribution shifts while maintaining accuracy and calibration in their absence.
arXiv Detail & Related papers (2024-08-14T12:40:57Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
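The local-linearization idea can be illustrated with Gaussian moment propagation: an affine layer maps N(mu, Sigma) exactly to N(W mu + b, W Sigma W^T), and a ReLU is approximated by its Jacobian at the input mean. This is a generic sketch of the technique, not the paper's exact estimator.

```python
import numpy as np

def propagate_linear(mu, Sigma, W, b):
    """Exact propagation of a Gaussian through an affine layer x -> Wx + b."""
    return W @ mu + b, W @ Sigma @ W.T

def propagate_relu_linearized(mu, Sigma):
    """First-order (local linearization) approximation of ReLU: the Jacobian
    at the mean is diagonal with entries 1[mu_i > 0]."""
    J = np.diag((mu > 0).astype(float))
    return np.maximum(mu, 0.0), J @ Sigma @ J.T
```

Chaining these two functions layer by layer propagates a mean and covariance through a whole ReLU network in closed form.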
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
arXiv Detail & Related papers (2023-10-10T14:06:56Z)
- Distribution Shift Inversion for Out-of-Distribution Prediction [57.22301285120695]
We propose a portable Distribution Shift Inversion algorithm for Out-of-Distribution (OoD) prediction.
We show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
arXiv Detail & Related papers (2023-06-14T08:00:49Z)
- Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data [36.66825830101456]
Deep neural networks for image classification only learn to map in-distribution inputs to their corresponding ground truth labels in training.
This results from the assumption that all samples are independent and identically distributed.
A Cross-class Vicinity Distribution is introduced by assuming that an out-of-distribution sample generated by mixing multiple in-distribution samples does not share the classes of its constituents.
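That mixing construction can be sketched as a convex combination of two in-distribution samples from different classes, with a target that puts zero mass on both constituent classes. This is a hypothetical illustration of the idea in the abstract, not the authors' exact formulation.

```python
import numpy as np

def make_vicinity_outlier(x1, y1, x2, y2, num_classes, alpha=0.5):
    """Mix two in-distribution samples from different classes and assign a
    target distribution that excludes both constituent classes."""
    assert y1 != y2, "constituents must come from different classes"
    x_mix = alpha * x1 + (1 - alpha) * x2
    target = np.ones(num_classes)
    target[[y1, y2]] = 0.0          # the mixture belongs to neither class
    target /= target.sum()          # uniform over the remaining classes
    return x_mix, target
```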
arXiv Detail & Related papers (2022-06-19T12:03:33Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
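Under a plain reading of the abstract, the difference-of-confidences statistic compares the classifier's average maximum softmax confidence on the source distribution with that on the shifted target, using the drop as an estimate of the accuracy change. A minimal sketch of that reading:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def avg_confidence(logits):
    """Mean maximum softmax probability over a batch."""
    return softmax(logits).max(axis=1).mean()

def difference_of_confidences(logits_source, logits_target):
    """DoC: confidence drop from source to shifted target, used as an
    estimate of the performance change under the shift."""
    return avg_confidence(logits_source) - avg_confidence(logits_target)
```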
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that achieves better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
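The standard hinge-form triplet loss underlying such designs is L = max(0, d(a, p) - d(a, n) + m): the positive must be closer to the anchor than the negative by a margin m. A generic sketch; the distance and margin choices here are illustrative, not the paper's exact configuration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss with squared Euclidean distance: penalizes the
    anchor-positive distance exceeding the anchor-negative distance minus margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```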
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.