DiffGuard: Semantic Mismatch-Guided Out-of-Distribution Detection using
Pre-trained Diffusion Models
- URL: http://arxiv.org/abs/2308.07687v2
- Date: Wed, 16 Aug 2023 05:24:46 GMT
- Title: DiffGuard: Semantic Mismatch-Guided Out-of-Distribution Detection using
Pre-trained Diffusion Models
- Authors: Ruiyuan Gao, Chenchen Zhao, Lanqing Hong, Qiang Xu
- Abstract summary: We use pre-trained diffusion models for semantic mismatch-guided OOD detection, named DiffGuard.
Experiments show that DiffGuard is effective on both CIFAR-10 and hard cases of the large-scale ImageNet.
It can be easily combined with existing OOD detection techniques to achieve state-of-the-art OOD detection results.
- Score: 25.58447344260747
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given a classifier, the inherent property of semantic Out-of-Distribution
(OOD) samples is that their contents differ from all legal classes in terms of
semantics, namely semantic mismatch. A recent work applies this property
directly to OOD detection, employing a conditional Generative Adversarial
Network (cGAN) to enlarge the semantic mismatch in image space. While it
achieves remarkable OOD detection performance on small datasets, it is not
applicable to ImageNet-scale datasets because cGANs are difficult to train with
both input images and labels as conditions. Since diffusion models are much
easier to train and more amenable to various conditions than cGANs, we propose
DiffGuard, which directly uses pre-trained diffusion models for semantic
mismatch-guided OOD detection. Specifically, given an OOD input image and the
predicted label from the classifier, we try to enlarge the semantic difference
between the reconstructed OOD image under these conditions and the original
input image. We also present several test-time techniques to further strengthen
such differences. Experimental results show that DiffGuard is effective on both
CIFAR-10 and hard cases of the large-scale ImageNet, and it can be easily
combined with existing OOD detection techniques to achieve state-of-the-art OOD
detection results.
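To make the pipeline concrete, below is a minimal sketch of the semantic-mismatch score described above. It assumes user-supplied components (a fixed classifier, a label-conditional diffusion reconstruction routine, and a feature extractor); the function names, the cosine-distance metric, and the simple fusion with a maximum-softmax-probability score are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a semantic-mismatch OOD score in the spirit of DiffGuard.
# Assumptions (not from the paper's code): `classifier`, `diffusion_reconstruct`,
# and `feature_extractor` are user-supplied callables; the distance metric and
# the score fusion below are illustrative choices.

import torch
import torch.nn.functional as F


@torch.no_grad()
def semantic_mismatch_score(
    x,                      # input image batch, shape (B, C, H, W)
    classifier,             # maps images -> logits over in-distribution classes
    diffusion_reconstruct,  # (x, labels) -> reconstruction from a pre-trained
                            # label-conditional diffusion model (hypothetical API)
    feature_extractor,      # maps images -> semantic features (e.g., penultimate layer)
):
    """Larger score = larger semantic mismatch = more likely OOD."""
    # 1. The classifier's predicted label serves as the semantic condition.
    y_hat = classifier(x).argmax(dim=1)

    # 2. Reconstruct the input conditioned on both the image and the predicted
    #    label. For an ID image the label matches the content, so the
    #    reconstruction stays close; for an OOD image the conflicting label
    #    condition pushes the reconstruction away from the input semantically.
    x_rec = diffusion_reconstruct(x, y_hat)

    # 3. Measure the semantic difference in feature space (cosine distance is
    #    an illustrative choice, not necessarily the paper's exact metric).
    f_in = F.normalize(feature_extractor(x).flatten(1), dim=1)
    f_rec = F.normalize(feature_extractor(x_rec).flatten(1), dim=1)
    return 1.0 - (f_in * f_rec).sum(dim=1)


@torch.no_grad()
def combined_ood_score(x, classifier, diffusion_reconstruct, feature_extractor):
    """Illustrates combining the mismatch score with an existing baseline
    (negative maximum softmax probability); the additive fusion is an assumption."""
    mismatch = semantic_mismatch_score(x, classifier, diffusion_reconstruct, feature_extractor)
    msp = classifier(x).softmax(dim=1).max(dim=1).values
    return mismatch - msp  # higher = more likely OOD under this simple fusion
```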
Related papers
- Can Your Generative Model Detect Out-of-Distribution Covariate Shift? [2.0144831048903566]
We propose a novel method for detecting Out-of-Distribution (OOD) sensory data using conditional Normalizing Flows (cNFs).
Our results on CIFAR10 vs. CIFAR10-C and ImageNet200 vs. ImageNet200-C demonstrate the effectiveness of the method.
arXiv Detail & Related papers (2024-09-04T19:27:56Z)
- Exploiting Diffusion Prior for Out-of-Distribution Detection [11.11093497717038]
Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models.
We present a novel approach for OOD detection that leverages the generative ability of diffusion models and the powerful feature extraction capabilities of CLIP.
arXiv Detail & Related papers (2024-06-16T23:55:25Z)
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples actually have semantic contents close to the in-distribution (ID) samples, which makes deciding whether a sample is OOD a Sorites Paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z)
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples and OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Semantically Coherent Out-of-Distribution Detection [26.224146828317277]
Current out-of-distribution (OOD) detection benchmarks are commonly built by defining one dataset as in-distribution (ID) and all others as OOD.
We re-design the benchmarks and propose the semantically coherent out-of-distribution detection (SC-OOD) benchmark.
Our approach achieves state-of-the-art performance on SC-OOD benchmarks.
arXiv Detail & Related papers (2021-08-26T17:53:32Z)
- OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to know whether a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
arXiv Detail & Related papers (2020-06-15T17:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.