SIO: Synthetic In-Distribution Data Benefits Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2303.14531v1
- Date: Sat, 25 Mar 2023 18:34:34 GMT
- Title: SIO: Synthetic In-Distribution Data Benefits Out-of-Distribution Detection
- Authors: Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley,
Yiran Chen, Hai Li
- Abstract summary: We develop a data-driven approach to building reliable Out-of-Distribution (OOD) detectors.
We exploit the internal in-distribution (ID) training set by utilizing generative models to produce additional synthetic ID images.
Our training framework, which is termed SIO, serves as a "plug-and-play" technique that is designed to be compatible with existing and future OOD detection algorithms.
- Score: 34.97444309333315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building up reliable Out-of-Distribution (OOD) detectors is challenging,
often requiring the use of OOD data during training. In this work, we develop a
data-driven approach that is distinct from, and complementary to, existing work:
Instead of using external OOD data, we fully exploit the internal
in-distribution (ID) training set by utilizing generative models to produce
additional synthetic ID images. The classifier is then trained using a novel
objective that computes weighted loss on real and synthetic ID samples
together. Our training framework, which is termed SIO, serves as a
"plug-and-play" technique that is designed to be compatible with existing and
future OOD detection algorithms, including the ones that leverage available OOD
training data. Our experiments on CIFAR-10, CIFAR-100, and ImageNet variants
demonstrate that SIO consistently improves the performance of nearly all
state-of-the-art (SOTA) OOD detection algorithms. For instance, on the
challenging CIFAR-10 vs. CIFAR-100 detection problem, SIO improves the average
OOD detection AUROC of 18 existing methods from 86.25% to 89.04% and achieves
a new SOTA of 92.94% according to the OpenOOD benchmark. Code is available at
https://github.com/zjysteven/SIO.
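To make the weighted objective concrete, here is a minimal sketch in PyTorch. The mixing weight `lam`, the separate real/synthetic batches, and the function name are illustrative assumptions rather than the authors' implementation (see the linked repository for that); synthetic labels `y_syn` are assumed to come from class-conditional generation.

```python
# A minimal sketch of an SIO-style weighted objective (illustrative, not the
# authors' exact implementation; see https://github.com/zjysteven/SIO).
import torch
import torch.nn.functional as F

def sio_loss(model, x_real, y_real, x_syn, y_syn, lam=0.5):
    """Weighted cross-entropy over real and synthetic ID samples."""
    loss_real = F.cross_entropy(model(x_real), y_real)
    loss_syn = F.cross_entropy(model(x_syn), y_syn)
    # Convex combination: `lam` trades off real vs. synthetic ID data.
    return (1.0 - lam) * loss_real + lam * loss_syn
```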
Related papers
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z)
- Logit Scaling for Out-of-Distribution Detection [13.017887434494373]
We propose a simple, post-hoc method that does not require access to the training data distribution.
Our method, Logit Scaling (LTS), scales the logits in a manner that effectively distinguishes between in-distribution (ID) and OOD samples (a generic sketch of this style of post-hoc scoring appears after this list).
We tested our method on benchmarks across various scales, including CIFAR-10, CIFAR-100, ImageNet and OpenOOD.
arXiv Detail & Related papers (2024-09-02T11:10:44Z)
- OAL: Enhancing OOD Detection Using Latent Diffusion [5.357756138014614]
The Outlier Aware Learning (OAL) framework synthesizes OOD training data directly in the latent space.
We introduce a mutual information-based contrastive learning approach that amplifies the distinction between In-Distribution (ID) and collected OOD features.
arXiv Detail & Related papers (2024-06-24T11:01:43Z)
- Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources [73.28967478098107]
Out-of-distribution (OOD) detection discerns OOD data, on which the predictor cannot make valid predictions, from in-distribution (ID) data.
It is typically hard to collect real OOD data for training a predictor capable of discerning OOD patterns.
We propose a data-generation-based learning method named Auxiliary Task-based OOD Learning (ATOL) that mitigates mistaken OOD generation.
arXiv Detail & Related papers (2023-11-06T16:26:52Z)
- Out-of-distribution Object Detection through Bayesian Uncertainty Estimation [10.985423935142832]
We propose a novel, intuitive, and scalable probabilistic object detection method for OOD detection.
Our method is able to distinguish between in-distribution (ID) data and OOD data via weight parameter sampling from proposed Gaussian distributions (a generic Monte-Carlo sketch appears after this list).
We demonstrate that our Bayesian object detector can achieve satisfactory OOD identification performance by reducing the FPR95 score by up to 8.19% and increasing the AUROC score by up to 13.94% when trained on BDD100k and VOC datasets.
arXiv Detail & Related papers (2023-10-29T19:10:52Z)
- AUTO: Adaptive Outlier Optimization for Online Test-Time OOD Detection [81.49353397201887]
Out-of-distribution (OOD) detection is crucial to deploying machine learning models in open-world applications.
We introduce a novel paradigm called test-time OOD detection, which utilizes unlabeled online data directly at test time to improve OOD detection performance.
We propose adaptive outlier optimization (AUTO), which consists of an in-out-aware filter, an ID memory bank, and a semantically-consistent objective.
arXiv Detail & Related papers (2023-03-22T02:28:54Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is a powerful technique for out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well in unseen OOD situations; the standard OE objective such methods build on is sketched after this list.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the underlying model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
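For the Logit Scaling (LTS) entry above, a generic post-hoc logit-scaling score might look as follows. The specific per-sample scale (here, an L2 feature norm) and the energy readout are assumptions for illustration; the paper's actual scaling rule may differ.

```python
# Generic post-hoc logit scaling in the spirit of LTS (illustrative only;
# the scaling rule in the paper may differ).
import torch

@torch.no_grad()
def scaled_energy_score(logits: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
    """Post-hoc OOD score: rescale each sample's logits, then take the energy."""
    # Per-sample scale from penultimate-layer statistics (assumption: L2 norm).
    s = features.norm(p=2, dim=1, keepdim=True)   # shape (B, 1)
    scaled_logits = logits * (s / s.mean())       # per-sample rescaling
    # Energy score: higher values indicate more ID-like inputs.
    return torch.logsumexp(scaled_logits, dim=1)
```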
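For the Bayesian object-detection entry, the weight-sampling idea can be illustrated with a generic Monte-Carlo scheme: draw several weight samples from the learned Gaussians, run the input through each, and use predictive variance as the OOD score. The `sample_model` interface below is a hypothetical stand-in, not the paper's detector.

```python
# Generic Monte-Carlo weight-sampling OOD score (illustrative sketch).
import torch

@torch.no_grad()
def mc_ood_score(sample_model, x: torch.Tensor, n_samples: int = 10) -> torch.Tensor:
    """sample_model() is assumed to return a network whose weights are drawn
    from the learned Gaussian posteriors; higher variance => more OOD-like."""
    probs = torch.stack([sample_model()(x).softmax(dim=1) for _ in range(n_samples)])
    # Predictive variance across weight samples, summed over classes.
    return probs.var(dim=0).sum(dim=1)
```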
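As background for the OE-based entry above, the standard outlier-exposure objective (Hendrycks et al., 2019) trains the classifier normally on ID data while pushing its predictions on auxiliary outliers toward the uniform distribution. The weight `lambda_oe` and the function name are illustrative choices.

```python
# Standard outlier-exposure (OE) objective, shown as background for OE-based
# methods; hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

def oe_loss(model, x_id, y_id, x_ood, lambda_oe: float = 0.5):
    """Cross-entropy on ID data plus a uniformity term on auxiliary outliers."""
    loss_id = F.cross_entropy(model(x_id), y_id)
    # Cross-entropy to the uniform distribution (up to an additive constant):
    # maximize the mean log-softmax over classes on outlier inputs.
    loss_ood = -F.log_softmax(model(x_ood), dim=1).mean(dim=1).mean()
    return loss_id + lambda_oe * loss_ood
```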