VOS: Learning What You Don't Know by Virtual Outlier Synthesis
- URL: http://arxiv.org/abs/2202.01197v3
- Date: Fri, 4 Feb 2022 17:41:29 GMT
- Title: VOS: Learning What You Don't Know by Virtual Outlier Synthesis
- Authors: Xuefeng Du, Zhaoning Wang, Mu Cai, Yixuan Li
- Abstract summary: Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks.
Previous approaches rely on real outlier datasets for model regularization.
We present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers.
- Score: 23.67449949146439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection has received much attention lately due to
its importance in the safe deployment of neural networks. One of the key
challenges is that models lack supervision signals from unknown data, and as a
result, can produce overconfident predictions on OOD data. Previous approaches
rely on real outlier datasets for model regularization, which can be costly and
sometimes infeasible to obtain in practice. In this paper, we present VOS, a
novel framework for OOD detection by adaptively synthesizing virtual outliers
that can meaningfully regularize the model's decision boundary during training.
Specifically, VOS samples virtual outliers from the low-likelihood region of
the class-conditional distribution estimated in the feature space. Alongside,
we introduce a novel unknown-aware training objective, which contrastively
shapes the uncertainty space between the ID data and synthesized outlier data.
VOS achieves state-of-the-art performance on both object detection and image
classification models, reducing the FPR95 by up to 7.87% compared to the
previous best method. Code is available at
https://github.com/deeplearning-wisc/vos.
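The sampling step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation (see the linked repository for that): it fits a class-conditional Gaussian to one class's feature vectors, draws candidates from it, and keeps only the lowest-likelihood candidates as virtual outliers. Function name, the shrinkage constant, and the candidate/outlier counts are illustrative assumptions.

```python
import numpy as np

def sample_virtual_outliers(features, num_candidates=10000, num_outliers=100, seed=None):
    """Fit a Gaussian to one class's penultimate-layer features and keep the
    candidate samples with the lowest likelihood as 'virtual outliers'."""
    rng = np.random.default_rng(seed)
    mu = features.mean(axis=0)
    # Small diagonal shrinkage keeps the covariance estimate invertible.
    cov = np.cov(features, rowvar=False) + 1e-4 * np.eye(features.shape[1])
    candidates = rng.multivariate_normal(mu, cov, size=num_candidates)
    # Log-density under the fitted Gaussian, up to an additive constant:
    # -(1/2) * (x - mu)^T cov^{-1} (x - mu), computed row-wise.
    diff = candidates - mu
    inv = np.linalg.inv(cov)
    log_density = -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
    # The lowest-density candidates lie in the low-likelihood region.
    idx = np.argsort(log_density)[:num_outliers]
    return candidates[idx]
```

In the full method this sampling happens per class in the feature space during training, and the synthesized points feed the unknown-aware objective; the sketch above only shows the low-likelihood selection idea.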
Related papers
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]

This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- LS-VOS: Identifying Outliers in 3D Object Detections Using Latent Space Virtual Outlier Synthesis [10.920640666237833]
LiDAR-based 3D object detectors have achieved unprecedented speed and accuracy in autonomous driving applications.
However, they are often biased toward high-confidence predictions, or return detections where no real object is present.
We propose LS-VOS, a framework for identifying outliers in 3D object detections.
arXiv Detail & Related papers (2023-10-02T07:44:26Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Non-Parametric Outlier Synthesis [35.20765580915213]
Out-of-distribution (OOD) detection is indispensable for safely deploying machine learning models in the wild.
We propose a novel framework, Non-Parametric Outlier Synthesis (NPOS), which generates artificial OOD training data.
We show that our synthesis approach can be mathematically interpreted as a rejection sampling framework.
arXiv Detail & Related papers (2023-03-06T08:51:00Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
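The energy score underlying this family of detectors can be sketched in a few lines. This is a generic illustration of the standard energy-based OOD score (a temperature-scaled negative logsumexp of the logits), not GNNSafe's specific propagation scheme; the function name and temperature default are assumptions.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy E(x) = -T * logsumexp(logits / T).
    Confident in-distribution inputs get very negative energy;
    flat, uncertain logits get higher energy, flagging likely OOD inputs."""
    z = np.asarray(logits, dtype=float) / temperature
    # Numerically stable logsumexp: subtract the max before exponentiating.
    m = z.max(axis=-1, keepdims=True)
    logsumexp = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -temperature * logsumexp
```

A threshold on this score then separates in-distribution from OOD inputs at test time.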
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Out-of-Distribution Detection with Hilbert-Schmidt Independence Optimization [114.43504951058796]
Outlier detection tasks have been playing a critical role in AI safety.
Deep neural network classifiers usually tend to incorrectly classify out-of-distribution (OOD) inputs into in-distribution classes with high confidence.
We propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for the OOD detection tasks.
arXiv Detail & Related papers (2022-09-26T15:59:55Z)
- EARLIN: Early Out-of-Distribution Detection for Resource-efficient Collaborative Inference [4.826988182025783]
Collaborative inference enables resource-constrained edge devices to make inferences by uploading inputs to a server.
While this setup works cost-effectively for successful inferences, it severely underperforms when the model faces input samples on which it was not trained.
We propose a novel lightweight OOD detection approach that mines important features from the shallow layers of a pretrained CNN model.
arXiv Detail & Related papers (2021-06-25T18:43:23Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Dense open-set recognition with synthetic outliers generated by Real NVP [1.278093617645299]
We consider an outlier detection approach based on discriminative training with jointly learned synthetic outliers.
We show that this approach can be adapted for simultaneous semantic segmentation and dense outlier detection.
Our models perform competitively with respect to the state of the art despite producing predictions with only one forward pass.
arXiv Detail & Related papers (2020-11-22T19:40:26Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.