Exploiting Diffusion Prior for Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2406.11105v2
- Date: Wed, 21 Aug 2024 17:04:18 GMT
- Title: Exploiting Diffusion Prior for Out-of-Distribution Detection
- Authors: Armando Zhu, Jiabei Liu, Keqin Li, Shuying Dai, Bo Hong, Peng Zhao, Changsong Wei
- Abstract summary: Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models.
We present a novel approach for OOD detection that leverages the generative ability of diffusion models and the powerful feature extraction capabilities of CLIP.
- Score: 11.11093497717038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is crucial for deploying robust machine learning models, especially in security-critical areas. However, traditional OOD detection methods often fail to capture the complex distributions of large-scale data. In this paper, we present a novel approach for OOD detection that leverages the generative ability of diffusion models and the powerful feature extraction capabilities of CLIP. By using these features as conditional inputs to a diffusion model, we can reconstruct the images after encoding them with CLIP. The difference between the original and reconstructed images is used as a signal for OOD identification. Unlike many other methods, ours does not require class-specific labeled ID data, which increases its practicality and scalability. Extensive experiments on several benchmark datasets demonstrate the robustness and effectiveness of our method, which significantly improves detection accuracy.
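A minimal sketch of the scoring recipe the abstract describes, with the CLIP encoder and the CLIP-conditioned diffusion reconstructor left as stand-in callables (assumptions here, not the authors' released code); MSE is used for concreteness, since the abstract only specifies that the original/reconstruction difference drives the score:

```python
import torch
import torch.nn.functional as F

def reconstruction_ood_score(x, clip_encoder, diffusion_reconstructor):
    """Score images by CLIP-conditioned diffusion reconstruction error.

    x: batch of images, shape (B, C, H, W), values in [0, 1].
    clip_encoder: callable mapping images to CLIP feature vectors.
    diffusion_reconstructor: callable mapping (images, features) to
        reconstructions -- a stand-in for the conditional diffusion model.
    Returns one score per image; higher means more likely OOD.
    """
    with torch.no_grad():
        feats = clip_encoder(x)                    # conditioning signal
        x_hat = diffusion_reconstructor(x, feats)  # generative reconstruction
        err = F.mse_loss(x_hat, x, reduction="none")
        return err.flatten(1).mean(dim=1)          # per-sample error

# Toy usage with stub components, just to exercise the function:
if __name__ == "__main__":
    x = torch.rand(4, 3, 224, 224)
    clip_stub = lambda imgs: imgs.mean(dim=(2, 3))
    recon_stub = lambda imgs, feats: imgs + 0.01 * torch.randn_like(imgs)
    print(reconstruction_ood_score(x, clip_stub, recon_stub))
```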
Related papers
- Diffusion-based Layer-wise Semantic Reconstruction for Unsupervised Out-of-Distribution Detection (arXiv 2024-11-16) [30.02748131967826]
Unsupervised out-of-distribution (OOD) detection aims to identify out-of-domain data by learning only from unlabeled In-Distribution (ID) training samples.
Current reconstruction-based methods provide a good alternative approach by measuring the reconstruction error between the input and its corresponding generative counterpart in the pixel/feature space.
We propose the diffusion-based layer-wise semantic reconstruction approach for unsupervised OOD detection.
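A rough illustration of layer-wise reconstruction scoring (the uniform aggregation and optional weighting are assumptions; the paper's diffusion-based reconstruction architecture is not shown):

```python
import torch.nn.functional as F

def layerwise_recon_score(features, reconstructions, weights=None):
    """Sum per-layer reconstruction errors into one OOD score.

    features / reconstructions: lists of per-layer tensors, each (B, ...).
    weights: optional per-layer importance. Higher score -> more likely OOD.
    """
    weights = weights or [1.0] * len(features)
    score = 0.0
    for w, f, r in zip(weights, features, reconstructions):
        score = score + w * F.mse_loss(r, f, reduction="none").flatten(1).mean(dim=1)
    return score
```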
- What If the Input is Expanded in OOD Detection? (arXiv 2024-10-24) [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective: applying different common corruptions to the input space.
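A sketch of the expanded-input idea: score the clean image and several corrupted views, then fuse. The corruption choices and mean fusion are assumptions, not the paper's exact protocol:

```python
import torch
import torch.nn.functional as F

def expanded_input_score(x, score_fn, corruptions):
    """Average any per-sample OOD score over clean and corrupted inputs.

    score_fn: (B, C, H, W) -> (B,); corruptions: list of callables.
    """
    scores = [score_fn(x)] + [score_fn(c(x)) for c in corruptions]
    return torch.stack(scores, dim=0).mean(dim=0)

# Example corruptions: additive Gaussian noise and a mild blur.
gaussian = lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0, 1)
blur = lambda x: F.interpolate(F.avg_pool2d(x, 2), scale_factor=2.0)
```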
- Can Your Generative Model Detect Out-of-Distribution Covariate Shift? (arXiv 2024-09-04) [2.0144831048903566]
We propose a novel method for detecting Out-of-Distribution (OOD) sensory data using conditional Normalizing Flows (cNFs).
Our results on CIFAR10 vs. CIFAR10-C and ImageNet200 vs. ImageNet200-C demonstrate the effectiveness of the method.
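The scoring step presumably reduces to a conditional log-likelihood; a sketch assuming a flow object with the log_prob(inputs, context) interface common to flow libraries such as nflows (not the authors' code):

```python
import torch

def cnf_ood_score(x, context, flow):
    """Negative conditional log-likelihood as a covariate-shift score.

    Higher score -> lower likelihood under the ID-trained flow.
    """
    with torch.no_grad():
        return -flow.log_prob(x.flatten(1), context=context)
```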
- From Global to Local: Multi-scale Out-of-distribution Detection (arXiv 2023-08-20) [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework to leverage both global visual information and local region details.
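One plausible reading of the global-plus-local scoring, with nearest-prototype distances standing in for MODE's actual matching scheme (an assumption):

```python
import torch

def multiscale_score(global_feat, local_feats, global_protos, local_protos):
    """Fuse global and region-level distances to ID prototype banks.

    global_feat: (B, D); local_feats: (B, R, D) for R regions;
    global_protos / local_protos: (K, D) built from ID data.
    Higher distance -> more likely OOD.
    """
    g = torch.cdist(global_feat, global_protos).min(dim=1).values
    B = local_feats.size(0)
    l = torch.cdist(local_feats, local_protos.unsqueeze(0).expand(B, -1, -1))
    l = l.min(dim=2).values.mean(dim=1)  # best prototype per region, averaged
    return g + l
```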
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability (arXiv 2023-06-06) [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, then finetunes the model or prunes it with the introduced mask to forget them.
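Only a guess at the flavor of the masking step (the threshold, the loss criterion, and the forgetting procedure are all assumptions; the paper's mask construction differs):

```python
import torch

def forgetting_mask(per_sample_loss, tau):
    """Flag ID training samples whose loss exceeds tau as atypical;
    flagged samples are excluded (forgotten) during finetuning."""
    return per_sample_loss > tau

losses = torch.tensor([0.02, 1.30, 0.05, 2.10])
keep = ~forgetting_mask(losses, tau=1.0)  # finetune only on kept samples
```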
- A Functional Data Perspective and Baseline On Multi-Layer Out-of-Distribution Detection (arXiv 2023-06-06) [30.499548939422194]
Existing methods that exploit multiple layers require either a special architecture or a supervised objective to do so.
This work adopts an original approach based on a functional view of the network that exploits the sample's trajectories through the various layers and their statistical dependencies.
We validate our method and empirically demonstrate its effectiveness in OOD detection compared to strong state-of-the-art baselines on computer vision benchmarks.
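A minimal baseline in this functional spirit: summarize each sample's pass through the layers as a trajectory of scalar statistics, then score its Mahalanobis distance to the ID trajectories (the statistic and the distance are assumptions, not the paper's exact construction):

```python
import torch

def layer_trajectory(activations):
    """One scalar statistic (mean feature norm) per layer -> (B, L)."""
    return torch.stack([a.flatten(1).norm(dim=1) for a in activations], dim=1)

def trajectory_score(traj, id_mean, id_cov_inv):
    """Mahalanobis distance of trajectories to the ID distribution."""
    d = traj - id_mean
    return torch.einsum("bi,ij,bj->b", d, id_cov_inv, d)
```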
- Watermarking for Out-of-distribution Detection (arXiv 2022-10-27) [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking.
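A hedged sketch of learning such a superimposed pattern; the hinge surrogate, the watermark shape, and the optimizer settings are placeholders, not the paper's objective:

```python
import torch

w = torch.zeros(3, 32, 32, requires_grad=True)  # the learnable watermark
opt = torch.optim.SGD([w], lr=0.1)

def watermarked(x):
    return (x + w).clamp(0, 1)

def train_step(x_id, score_fn, margin=1.0):
    """Push the (differentiable) OOD score of watermarked ID data
    below a margin; a placeholder for the paper's actual loss."""
    loss = torch.relu(score_fn(watermarked(x_id)) - margin).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```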
- Out-Of-Distribution Detection With Subspace Techniques And Probabilistic Modeling Of Features (arXiv 2020-12-08) [7.219077740523682]
This paper presents a principled approach for detecting out-of-distribution (OOD) samples in deep neural networks (DNNs).
Modeling probability distributions on deep features has recently emerged as an effective, yet computationally cheap, method to detect OOD samples in DNNs.
We apply linear statistical dimensionality reduction techniques and nonlinear manifold-learning techniques on the high-dimensional features in order to capture the true subspace spanned by the features.
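For the linear case, a PCA residual is the textbook instance of this idea (sketch only; the paper combines several subspace and manifold techniques with probabilistic models on top):

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_feature_subspace(id_features, n_components=64):
    """Fit a linear subspace to ID deep features."""
    return PCA(n_components=n_components).fit(id_features)

def residual_score(pca, features):
    """Norm of the component outside the ID subspace; higher -> OOD."""
    proj = pca.inverse_transform(pca.transform(features))
    return np.linalg.norm(features - proj, axis=1)
```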
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data (arXiv 2020-06-15) [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
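For context, a standard RealNVP-style affine coupling layer; the paper's point concerns what the conditioner network here is allowed to see (masking and architecture), which biases the flow toward pixel-level or semantic structure. This is standard coupling code, not the paper's modified layer:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        # conditioner: sees one half of the input, predicts scale/shift
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                # bounded scales for stability
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)           # log-likelihood contribution
        return torch.cat([x1, y2], dim=1), log_det
```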
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders (arXiv 2020-03-24) [51.691585766702744]
We propose a variant of the Adversarial Autoencoder that uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
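A loose reading of the mirrored critic loss, WGAN-style over (input, reconstruction) pairs; the pairing and sign conventions are assumptions, not the authors' exact formulation:

```python
import torch

def mirrored_critic_loss(D, x, x_hat):
    """Critic compares the pair (x, x) against (x, x_hat); minimizing
    this WGAN-style objective trains the critic, whose scores can then
    replace the raw reconstruction error as the anomaly measure."""
    real = D(torch.cat([x, x], dim=1))      # perfectly mirrored pair
    fake = D(torch.cat([x, x_hat], dim=1))  # input vs. reconstruction
    return fake.mean() - real.mean()
```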