DOI: Divergence-based Out-of-Distribution Indicators via Deep Generative
Models
- URL: http://arxiv.org/abs/2108.05509v1
- Date: Thu, 12 Aug 2021 02:49:54 GMT
- Title: DOI: Divergence-based Out-of-Distribution Indicators via Deep Generative
Models
- Authors: Wenxiao Chen, Xiaohui Nie, Mingliang Li, Dan Pei
- Abstract summary: OoD (out-of-distribution) indicators based on deep generative models are proposed recently and are shown to work well on small datasets.
We conduct the first large collection of benchmarks (containing 92 dataset pairs) for existing OoD indicators and observe that none perform well.
We propose a novel theoretical framework, DOI, for Divergence-based Out-of-Distribution Indicators (instead of traditional likelihood-based ones) in deep generative models.
- Score: 6.617664042202313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To ensure robust and reliable classification results, OoD (out-of-distribution) indicators based on deep generative models have recently been proposed and shown to work well on small datasets. In this paper, we conduct the first large collection of benchmarks (containing 92 dataset pairs, one order of magnitude larger than previous ones) for existing OoD indicators and observe that none perform well. We therefore advocate that a large collection of benchmarks is mandatory for evaluating OoD indicators. We propose a novel theoretical framework, DOI, for Divergence-based Out-of-Distribution Indicators (instead of traditional likelihood-based ones) in deep generative models. Following this framework, we further propose a simple and effective OoD detection algorithm, Single-shot Fine-tune. It significantly outperforms past work by 5~8 AUROC points, and its performance is close to optimal. The likelihood criterion has recently been shown to be ineffective at detecting OoD. Single-shot Fine-tune instead introduces a fine-tune criterion: a test sample is judged OoD according to whether its likelihood improves after fine-tuning a well-trained model on that sample. The fine-tune criterion is clear and easy to follow, and can move OoD detection into a new stage.
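A minimal sketch of the fine-tune criterion described above, assuming a PyTorch-style deep generative model (e.g., a normalizing flow) that exposes a log_prob method; the number of gradient steps, the optimizer, and the learning rate are illustrative assumptions rather than the authors' settings.

```python
import copy
import torch

def single_shot_finetune_score(model, x, steps=5, lr=1e-4):
    """Score a single test sample x by how much its log-likelihood
    improves after briefly fine-tuning a copy of the trained model on it.
    A large improvement suggests the sample is out-of-distribution."""
    model.eval()
    with torch.no_grad():
        ll_before = model.log_prob(x).sum().item()

    # Fine-tune a throw-away copy so the original model stays untouched.
    tuned = copy.deepcopy(model)
    tuned.train()
    opt = torch.optim.Adam(tuned.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -tuned.log_prob(x).sum()   # maximize likelihood of x
        loss.backward()
        opt.step()

    tuned.eval()
    with torch.no_grad():
        ll_after = tuned.log_prob(x).sum().item()

    # Larger improvement => more likely OoD under the fine-tune criterion.
    return ll_after - ll_before
```

In practice the resulting score would be compared against a threshold calibrated on held-out in-distribution data; those calibration details are not recoverable from the abstract.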
Related papers
- DSDE: Using Proportion Estimation to Improve Model Selection for Out-of-Distribution Detection [15.238164468992148]
Experimental results on CIFAR10 and CIFAR100 demonstrate the effectiveness of our approach in tackling OoD detection challenges.
We name the proposed approach the DOS-Storey-based Detector Ensemble (DSDE).
arXiv Detail & Related papers (2024-11-03T09:01:36Z)
- Improving Out-of-Distribution Generalization of Trajectory Prediction for Autonomous Driving via Polynomial Representations [16.856874154363588]
We present an OoD testing protocol that homogenizes datasets and prediction tasks across two large-scale motion datasets.
With a much smaller model size, training effort, and inference time, we reach near SotA performance for ID testing and significantly improve robustness in OoD testing.
arXiv Detail & Related papers (2024-07-18T12:00:32Z)
- SEE-OoD: Supervised Exploration For Enhanced Out-of-Distribution Detection [11.05254400092658]
We propose a Wasserstein-score-based generative adversarial training scheme to enhance OoD detection accuracy.
Specifically, the generator explores OoD spaces and generates synthetic OoD samples using feedback from the discriminator.
We demonstrate that the proposed method outperforms state-of-the-art techniques on various computer vision datasets.
arXiv Detail & Related papers (2023-10-12T05:20:18Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods, and it can serve as a simple yet strong baseline in this under-developed area.
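For context, a hedged sketch of the standard energy score that energy-based OoD detectors compute from a model's logits, E(x) = -T * logsumexp(f(x)/T), here applied per node to GNN outputs. GNNSafe's energy propagation over the graph and its exact thresholding are not reproduced, and the temperature value is an assumed default.

```python
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Energy score per example/node: E(x) = -T * logsumexp(f(x)/T).
    Lower energy is typically associated with in-distribution inputs,
    higher energy with out-of-distribution ones."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

# Usage (names hypothetical): given node logits of shape [num_nodes, num_classes]
# from a GNN, flag nodes whose energy exceeds a threshold tuned on validation data.
# scores = energy_score(gnn(graph_x, edge_index))
# is_ood = scores > threshold
```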
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto the features of the original data; after watermarking, the model's detection capability is substantially improved.
arXiv Detail & Related papers (2022-10-27T06:12:32Z)
- Raising the Bar on the Evaluation of Out-of-Distribution Detection [88.70479625837152]
We define 2 categories of OoD data using the subtly different concepts of perceptual/visual and semantic similarity to in-distribution (iD) data.
We propose a GAN based framework for generating OoD samples from each of these 2 categories, given an iD dataset.
We show that state-of-the-art OoD detection methods, which perform exceedingly well on conventional benchmarks, are significantly less robust on our proposed benchmark.
arXiv Detail & Related papers (2022-09-24T08:48:36Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or face a dramatic performance drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves near-distribution novelty detection by 6% and surpasses the state of the art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
- Enhancing the Generalization for Intent Classification and Out-of-Domain Detection in SLU [70.44344060176952]
Intent classification is a major task in spoken language understanding (SLU).
Recent works have shown that using extra data and labels can improve the OOD detection performance.
This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection.
arXiv Detail & Related papers (2021-06-28T08:27:38Z)
- Label Smoothed Embedding Hypothesis for Out-of-Distribution Detection [72.35532598131176]
We propose an unsupervised method to detect OOD samples using a k-NN density estimate.
We leverage a recent insight about label smoothing, which we call the Label Smoothed Embedding Hypothesis.
We show that our proposal outperforms many OOD baselines and also provide new finite-sample high-probability statistical results.
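A minimal sketch of the kind of k-NN density-based score this summary refers to: the distance from a test embedding to its k-th nearest in-distribution embedding serves as an inverse density estimate. The feature space, the value of k, and the decision threshold are assumptions, and the paper's label-smoothing ingredient is not reproduced here.

```python
import numpy as np

def knn_ood_scores(train_feats: np.ndarray, test_feats: np.ndarray, k: int = 10) -> np.ndarray:
    """For each test embedding, return the distance to its k-th nearest
    training (in-distribution) embedding. A larger distance means lower
    local density, i.e. more likely out-of-distribution."""
    # Pairwise Euclidean distances, shape [n_test, n_train].
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # k-th smallest distance per test point (index k-1 after sorting).
    return np.sort(d, axis=1)[:, k - 1]
```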
arXiv Detail & Related papers (2021-02-09T21:04:44Z)
- Entropy Maximization and Meta Classification for Out-Of-Distribution Detection in Semantic Segmentation [7.305019142196585]
"Out-of-distribution" (OoD) samples are crucial for many applications such as automated driving.
A natural baseline approach to OoD detection is to threshold on the pixel-wise softmax entropy.
We present a two-step procedure that significantly improves that approach.
arXiv Detail & Related papers (2020-12-09T11:01:06Z)
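As referenced in the entry above, a hedged sketch of the pixel-wise softmax-entropy baseline for semantic segmentation: compute the entropy of each pixel's predicted class distribution and threshold it. The tensor layout and threshold value are illustrative assumptions; the paper's two-step entropy-maximization and meta-classification procedure is not shown.

```python
import torch
import torch.nn.functional as F

def pixelwise_entropy_ood_mask(logits: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    """logits: [batch, num_classes, H, W] from a semantic segmentation model.
    Returns a boolean mask [batch, H, W] marking pixels whose softmax
    entropy exceeds the threshold, i.e. candidate OoD pixels."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)   # [batch, H, W]
    return entropy > threshold
```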
This list is automatically generated from the titles and abstracts of the papers in this site.