Out-of-Distribution Detection with a Single Unconditional Diffusion Model
- URL: http://arxiv.org/abs/2405.11881v3
- Date: Thu, 24 Oct 2024 02:17:00 GMT
- Title: Out-of-Distribution Detection with a Single Unconditional Diffusion Model
- Authors: Alvin Heng, Alexandre H. Thiery, Harold Soh
- Abstract summary: Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples.
Traditionally, unsupervised methods utilize a deep generative model for OOD detection.
This paper explores whether a single model can perform OOD detection across diverse tasks.
- Score: 54.15132801131365
- Abstract: Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples. Traditionally, unsupervised methods utilize a deep generative model for OOD detection. However, such approaches require a new model to be trained for each inlier dataset. This paper explores whether a single model can perform OOD detection across diverse tasks. To that end, we introduce Diffusion Paths (DiffPath), which uses a single diffusion model originally trained to perform unconditional generation for OOD detection. We introduce a novel technique of measuring the rate-of-change and curvature of the diffusion paths connecting samples to the standard normal. Extensive experiments show that with a single model, DiffPath is competitive with prior work using individual models on a variety of OOD tasks involving different distributions. Our code is publicly available at https://github.com/clear-nus/diffpath.
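The statistic behind DiffPath can be illustrated with a short sketch. The code below is a minimal approximation of the idea, not the released implementation: it pushes a sample toward the standard normal with deterministic DDIM-style steps and accumulates discrete analogues of the path's rate-of-change (first differences) and curvature (second differences). The noise predictor `eps_model`, the toy schedule, and the simple sum used as the final score are illustrative assumptions; see the linked repository for the authors' exact statistics.

```python
import torch

def ddim_path_statistics(x, eps_model, alphas_cumprod):
    """Accumulate rate-of-change and curvature along the deterministic path
    carrying a batch of samples toward the standard normal (hedged sketch)."""
    rate = torch.zeros(x.shape[0])
    curvature = torch.zeros(x.shape[0])
    prev_velocity = None
    for t in range(len(alphas_cumprod) - 1):
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t + 1]
        eps = eps_model(x, t)                                   # predicted noise at step t
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean sample
        x_next = a_next.sqrt() * x0_hat + (1 - a_next).sqrt() * eps  # DDIM step toward noise
        velocity = x_next - x                                   # discrete first derivative of the path
        rate += velocity.flatten(1).norm(dim=1)
        if prev_velocity is not None:
            curvature += (velocity - prev_velocity).flatten(1).norm(dim=1)  # discrete second derivative
        prev_velocity, x = velocity, x_next
    return rate, curvature

if __name__ == "__main__":
    torch.manual_seed(0)
    eps_model = lambda x, t: torch.randn_like(x)        # stand-in for a trained unconditional diffusion model
    alphas_cumprod = torch.linspace(0.999, 1e-3, 50)    # toy noise schedule (data -> noise)
    x = torch.randn(4, 3, 32, 32)                       # batch of test images
    rate, curv = ddim_path_statistics(x, eps_model, alphas_cumprod)
    ood_score = rate + curv                             # one simple way to combine the two statistics
    print(ood_score)
```

Because `eps_model` here would be a single unconditional diffusion model, the same network can be reused for every inlier dataset, which is what removes the need to train one generative model per task.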
Related papers
- Adapted-MoE: Mixture of Experts with Test-Time Adaption for Anomaly Detection [10.12283550685127]
We propose Adapted-MoE to handle multiple distributions of same-category samples via a divide-and-conquer strategy.
Specifically, we propose a routing network based on representation learning to route same-category samples into their respective subclass feature spaces.
We further propose test-time adaptation to eliminate the bias between the representations of unseen test samples and the feature distribution learned by the expert model.
arXiv Detail & Related papers (2024-09-09T13:49:09Z) - Deep Metric Learning-Based Out-of-Distribution Detection with Synthetic Outlier Exposure [0.0]
We propose a label-mixup approach to generate synthetic OOD data using Denoising Diffusion Probabilistic Models (DDPMs).
In the experiments, we found that metric learning-based loss functions perform better than the softmax loss.
Our approach outperforms strong baselines in conventional OOD detection metrics.
arXiv Detail & Related papers (2024-05-01T16:58:22Z) - COFT-AD: COntrastive Fine-Tuning for Few-Shot Anomaly Detection [19.946344683965425]
We propose a novel methodology to address the challenge of FSAD.
We employ a model pre-trained on a large source dataset to initialize model weights.
We evaluate few-shot anomaly detection on 3 controlled AD tasks and 4 real-world AD tasks to demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-02-29T09:48:19Z) - Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models [72.07462371883501]
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z) - Unsupervised Out-of-Distribution Detection by Restoring Lossy Inputs with Variational Autoencoder [3.498694457257263]
We propose a novel VAE-based score called Error Reduction (ER) for OOD detection.
ER is based on a VAE that takes a lossy version of the training set as inputs and the original set as targets.
arXiv Detail & Related papers (2023-09-05T09:42:15Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - Predicting Out-of-Distribution Error with the Projection Norm [87.61489137914693]
Projection Norm predicts a model's performance on out-of-distribution data without access to ground truth labels.
We find that Projection Norm is the only approach that achieves non-trivial detection performance on adversarial examples.
arXiv Detail & Related papers (2022-02-11T18:58:21Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution; a minimal sketch of this idea appears after the list below.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
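As noted in the DAAIN entry above, here is a minimal sketch of the activation-density idea it describes: record a hidden-layer activation on in-distribution data, fit a density estimator to those activations, and flag test inputs whose activations receive low log-density. A multivariate Gaussian stands in for the normalizing flow used in the paper, and the class name, layer choice, and loader format are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ActivationDensityDetector:
    """Hedged sketch of activation-density OOD scoring: a Gaussian stands in
    for the normalizing flow that DAAIN fits over network activations."""

    def __init__(self, model: nn.Module, layer: nn.Module):
        self.model = model
        self.acts = []
        # Record the flattened output of the monitored layer on every forward pass.
        layer.register_forward_hook(lambda m, i, o: self.acts.append(o.detach().flatten(1)))

    def fit(self, in_dist_loader):
        """Collect activations on in-distribution data and fit a Gaussian density."""
        self.acts.clear()
        with torch.no_grad():
            for x, *_ in in_dist_loader:          # assumes the loader yields (images, labels) batches
                self.model(x)
        feats = torch.cat(self.acts)
        mean = feats.mean(0)
        cov = torch.cov(feats.T) + 1e-4 * torch.eye(feats.shape[1])  # regularized covariance
        self.dist = torch.distributions.MultivariateNormal(mean, cov)

    def score(self, x):
        """Higher score (lower log-density of the activations) suggests OOD or adversarial input."""
        self.acts.clear()
        with torch.no_grad():
            self.model(x)
        return -self.dist.log_prob(self.acts[-1])
```

In DAAIN a normalizing flow replaces the Gaussian so that multi-modal activation distributions can be modeled; the detector only needs activations from an already-trained network, which is consistent with the single-GPU training noted in the entry.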
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.