VOS: Learning What You Don't Know by Virtual Outlier Synthesis
- URL: http://arxiv.org/abs/2202.01197v3
- Date: Fri, 4 Feb 2022 17:41:29 GMT
- Title: VOS: Learning What You Don't Know by Virtual Outlier Synthesis
- Authors: Xuefeng Du, Zhaoning Wang, Mu Cai, Yixuan Li
- Abstract summary: Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks.
Previous approaches rely on real outlier datasets for model regularization.
We present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers.
- Score: 23.67449949146439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection has received much attention lately due to
its importance in the safe deployment of neural networks. One of the key
challenges is that models lack supervision signals from unknown data, and as a
result, can produce overconfident predictions on OOD data. Previous approaches
rely on real outlier datasets for model regularization, which can be costly and
sometimes infeasible to obtain in practice. In this paper, we present VOS, a
novel framework for OOD detection by adaptively synthesizing virtual outliers
that can meaningfully regularize the model's decision boundary during training.
Specifically, VOS samples virtual outliers from the low-likelihood region of
the class-conditional distribution estimated in the feature space. Alongside,
we introduce a novel unknown-aware training objective, which contrastively
shapes the uncertainty space between the ID data and synthesized outlier data.
VOS achieves state-of-the-art performance on both object detection and image
classification models, reducing the FPR95 by up to 7.87% compared to the
previous best method. Code is available at
https://github.com/deeplearning-wisc/vos.
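A minimal PyTorch sketch of the sampling step described in the abstract: estimate class-conditional Gaussians (here with a tied covariance) in the penultimate feature space, then keep the lowest-likelihood samples as virtual outliers. Function names and hyperparameters are illustrative assumptions, not the official API; see the repository above for the reference implementation.

```python
import torch
from torch.distributions import MultivariateNormal


def fit_class_gaussians(features, labels, num_classes, eps=1e-4):
    """Estimate per-class means and a shared covariance in feature space."""
    dim = features.size(1)
    means, centered = [], []
    for c in range(num_classes):
        fc = features[labels == c]
        mu = fc.mean(dim=0)
        means.append(mu)
        centered.append(fc - mu)
    centered = torch.cat(centered, dim=0)
    # Tied covariance across classes, regularized for numerical stability.
    cov = centered.t() @ centered / centered.size(0) + eps * torch.eye(dim)
    return torch.stack(means), cov


def sample_virtual_outliers(means, cov, num_candidates=10000, keep=100):
    """Sample candidates per class and keep the lowest-likelihood ones."""
    outliers = []
    for mu in means:
        dist = MultivariateNormal(mu, covariance_matrix=cov)
        cand = dist.sample((num_candidates,))
        # The low-likelihood region corresponds to the smallest log-probs.
        idx = dist.log_prob(cand).topk(keep, largest=False).indices
        outliers.append(cand[idx])
    return torch.cat(outliers, dim=0)
```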
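The unknown-aware objective can be sketched in the same spirit. One plausible instantiation, consistent with the abstract but not a line-for-line copy of the paper's regularizer, contrasts the free-energy score (logsumexp of the logits) of ID inputs against that of the virtual outliers:

```python
import torch
import torch.nn.functional as F


def negative_energy(logits):
    # Negative free energy: logsumexp over the class logits.
    return torch.logsumexp(logits, dim=1)


def unknown_aware_loss(id_logits, outlier_logits, id_labels, weight=0.1):
    # Standard classification loss on in-distribution data.
    cls_loss = F.cross_entropy(id_logits, id_labels)
    # Binary logistic loss on the energy: ID samples (label 1) should get a
    # high negative energy, virtual outliers (label 0) a low one.
    scores = torch.cat([negative_energy(id_logits),
                        negative_energy(outlier_logits)])
    targets = torch.cat([
        torch.ones(id_logits.size(0), device=scores.device),
        torch.zeros(outlier_logits.size(0), device=scores.device),
    ])
    return cls_loss + weight * F.binary_cross_entropy_with_logits(scores, targets)
```

Here `outlier_logits` would come from passing the synthesized virtual outliers through the same classification head as the ID features.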
Related papers
- Non-Linear Outlier Synthesis for Out-of-Distribution Detection [5.019613806273252]
We present NCIS, which enhances the quality of synthetic outliers by operating directly in the diffusion model's embedding space.
We demonstrate that these improvements yield new state-of-the-art OOD detection results on standard ImageNet100 and CIFAR100 benchmarks.
arXiv Detail & Related papers (2024-11-20T09:47:29Z)
- Going Beyond Conventional OOD Detection [0.0]
Out-of-distribution (OOD) detection is critical to ensure the safe deployment of deep learning models in critical applications.
We present a unified Approach to Spurious, fine-grained, and Conventional OOD Detection (ASCOOD).
Our approach effectively mitigates the impact of spurious correlations and encourages capturing fine-grained attributes.
arXiv Detail & Related papers (2024-11-16T13:04:52Z)
- Forte: Finding Outliers with Representation Typicality Estimation [0.14061979259370275]
Generative models can now produce synthetic data that is virtually indistinguishable from the real data used to train them.
Recent work on OOD detection has raised doubts that generative model likelihoods are optimal OOD detectors.
We introduce a novel approach that leverages representation learning, and informative summary statistics based on manifold estimation.
arXiv Detail & Related papers (2024-10-02T08:26:37Z)
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been believed to be a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data (a sketch of this overlay appears after this list).
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method uses a mask to identify memorized atypical samples, then fine-tunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Non-Parametric Outlier Synthesis [35.20765580915213]
Out-of-distribution (OOD) detection is indispensable for safely deploying machine learning models in the wild.
We propose a novel framework, Non-Parametric Outlier Synthesis (NPOS), which generates artificial OOD training data.
We show that our synthesis approach can be mathematically interpreted as a rejection sampling framework (a sketch appears after this list).
arXiv Detail & Related papers (2023-03-06T08:51:00Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Out-of-Distribution Detection with Hilbert-Schmidt Independence Optimization [114.43504951058796]
Outlier detection tasks have been playing a critical role in AI safety.
Deep neural network classifiers usually tend to incorrectly classify out-of-distribution (OOD) inputs into in-distribution classes with high confidence.
We propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for the OOD detection tasks.
arXiv Detail & Related papers (2022-09-26T15:59:55Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
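A hypothetical sketch of the overlay idea from the EAT entry above: paste a crop of a context-limited tail-class image onto a context-rich OOD background while keeping the tail-class label. Crop size and placement are assumptions, not the paper's exact recipe.

```python
import torch


def paste_on_ood(tail_img, ood_img, scale=0.5):
    """Overlay a central crop of a tail-class image (C, H, W) onto an OOD image."""
    _, h, w = tail_img.shape
    ph, pw = int(h * scale), int(w * scale)
    top, left = (h - ph) // 2, (w - pw) // 2
    out = ood_img.clone()
    # The pasted foreground keeps its tail-class label; the OOD background
    # supplies the diverse context the tail class lacks.
    out[:, top:top + ph, left:left + pw] = \
        tail_img[:, top:top + ph, left:left + pw]
    return out
```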
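And a hypothetical sketch of the rejection-sampling view from the NPOS entry: propose candidates by perturbing ID features with Gaussian noise, then reject any candidate whose k-NN distance to the ID data is small. The noise scale, k, and the acceptance threshold are illustrative assumptions.

```python
import torch


def knn_distance(queries, reference, k=10):
    # Distance from each query to its k-th nearest reference feature.
    return torch.cdist(queries, reference).kthvalue(k, dim=1).values


def synthesize_outliers(id_features, sigma=0.5, num_candidates=5000, k=10,
                        quantile=0.95):
    # Propose candidates by perturbing randomly chosen ID features.
    idx = torch.randint(0, id_features.size(0), (num_candidates,))
    noise = sigma * torch.randn(num_candidates, id_features.size(1))
    candidates = id_features[idx] + noise
    # Rejection step: keep only candidates farther from the ID data than a
    # high quantile of the ID data's own k-NN distances (k + 1 skips self).
    threshold = knn_distance(id_features, id_features, k + 1).quantile(quantile)
    return candidates[knn_distance(candidates, id_features, k) > threshold]
```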