Adversarial Denoising Diffusion Model for Unsupervised Anomaly Detection
- URL: http://arxiv.org/abs/2312.04382v1
- Date: Thu, 7 Dec 2023 15:51:19 GMT
- Title: Adversarial Denoising Diffusion Model for Unsupervised Anomaly Detection
- Authors: Jongmin Yu, Hyeontaek Oh, and Jinhong Yang
- Abstract summary: ADDM is based on the Denoising Diffusion Probabilistic Model (DDPM) but complementarily trained by adversarial learning.
We apply ADDM to unsupervised anomaly detection in MRI images.
- Score: 4.936226952764696
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose the Adversarial Denoising Diffusion Model (ADDM).
The ADDM is based on the Denoising Diffusion Probabilistic Model (DDPM) but
complementarily trained by adversarial learning. The proposed adversarial
learning is achieved by classifying model-based denoised samples against
samples to which random Gaussian noise has been added at a specific sampling
step. With this explicit adversarial learning on data samples, ADDM learns the
semantic characteristics of the data more robustly during training and achieves
data sampling performance similar to DDPM with far fewer sampling steps. We
apply ADDM to unsupervised anomaly detection in MRI images.
Experimental results show that the proposed ADDM outperformed existing
generative model-based unsupervised anomaly detection methods. In particular,
compared to other DDPM-based anomaly detection methods, the proposed ADDM shows
better performance with the same number of sampling steps and similar
performance with 50% fewer sampling steps.
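As a concrete illustration, here is a minimal sketch of one plausible reading of this objective (not the authors' released code): a discriminator classifies real samples noised to a chosen sampling step against the model's denoised estimates at that step. All module names (`denoiser`, `discriminator`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def addm_adversarial_step(denoiser, discriminator, x0, alpha_bar, t):
    """One adversarial training step at a fixed sampling step t (sketch)."""
    # Standard DDPM forward noising: x_t = sqrt(a_t)*x0 + sqrt(1 - a_t)*eps.
    a_t = alpha_bar[t]
    eps = torch.randn_like(x0)
    x_t = a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * eps

    x_denoised = denoiser(x_t, t)      # model-based denoised sample

    # Discriminator separates noised real samples from denoised ones.
    real_logits = discriminator(x_t)
    fake_logits = discriminator(x_denoised.detach())
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) \
           + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))

    # Denoiser is trained adversarially to fool the discriminator.
    g_logits = discriminator(x_denoised)
    g_loss = F.binary_cross_entropy_with_logits(g_logits, torch.ones_like(g_logits))
    return d_loss, g_loss
```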
Related papers
- Angel or Devil: Discriminating Hard Samples and Anomaly Contaminations for Unsupervised Time Series Anomaly Detection [4.767887707515356]
Training in unsupervised time series anomaly detection is constantly plagued by the discrimination between harmful 'anomaly contaminations' and beneficial 'hard normal samples'.
arXiv Detail & Related papers (2024-10-26T13:59:23Z)
- Beyond Perceptual Distances: Rethinking Disparity Assessment for Out-of-Distribution Detection with Diffusion Models [28.96695036746856]
Out-of-Distribution (OoD) detection aims to determine whether a given sample comes from the training distribution of the classifier-under-protection.
DM-based methods bring fresh insights to the field, yet remain under-explored.
Our work demonstrates state-of-the-art detection performance among DM-based methods in extensive experiments.
arXiv Detail & Related papers (2024-09-16T08:50:47Z)
- AdjointDPM: Adjoint Sensitivity Method for Gradient Backpropagation of Diffusion Probabilistic Models [103.41269503488546]
Existing customization methods require access to multiple reference examples to align pre-trained diffusion probabilistic models with user-provided concepts.
This paper aims to address the challenge of DPM customization when the only available supervision is a differentiable metric defined on the generated contents.
We propose a novel method AdjointDPM, which first generates new samples from diffusion models by solving the corresponding probability-flow ODEs.
It then uses the adjoint sensitivity method to backpropagate the gradients of the loss to the models' parameters.
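As an illustration of the adjoint idea, the sketch below uses the third-party `torchdiffeq` package with a toy vector field standing in for the probability-flow ODE; it is not the paper's implementation. `odeint_adjoint` recovers parameter gradients by solving a backward ODE rather than storing the forward trajectory.

```python
import torch
from torchdiffeq import odeint_adjoint as odeint  # pip install torchdiffeq

class ToyFlow(torch.nn.Module):
    """Toy stand-in for a diffusion model's probability-flow field."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = torch.nn.Linear(dim, dim)

    def forward(self, t, x):        # dx/dt = f_theta(t, x)
        return self.net(x)

flow = ToyFlow()
x0 = torch.randn(16, 2)            # initial noise
ts = torch.linspace(0.0, 1.0, 2)   # integrate from t=0 to t=1

x1 = odeint(flow, x0, ts)[-1]      # "generated" samples at t=1
loss = (x1 ** 2).mean()            # differentiable metric on the output
loss.backward()                    # adjoint pass fills the parameters' .grad
```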
arXiv Detail & Related papers (2023-07-20T09:06:21Z)
- Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z)
- On Diffusion Modeling for Anomaly Detection [14.542411354617983]
Diffusion models are attractive candidates for density-based anomaly detection.
We show that diffusion-based anomaly detection methods perform competitively in both semi-supervised and unsupervised settings.
These results establish diffusion-based anomaly detection as a scalable alternative to traditional methods.
arXiv Detail & Related papers (2023-05-29T20:19:45Z)
- Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score [62.54911162109439]
Adversarial detection aims to determine whether a given sample is an adversarial one based on the discrepancy between natural and adversarial distributions.
We propose a new statistic called expected perturbation score (EPS), which is essentially the expected score of a sample after various perturbations.
We develop EPS-based maximum mean discrepancy (MMD) as a metric to measure the discrepancy between the test sample and natural samples.
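A minimal sketch of the EPS idea under stated assumptions: `score_model` is a hypothetical stand-in for a pre-trained score network, the perturbations are Gaussian, and the MMD uses a simple biased RBF-kernel estimate.

```python
import torch

def eps(score_model, x, n_perturb=8, sigma=0.1):
    """Expected perturbation score: average the score over random perturbations."""
    scores = []
    for _ in range(n_perturb):
        x_pert = x + sigma * torch.randn_like(x)
        scores.append(score_model(x_pert))
    return torch.stack(scores).mean(dim=0)     # expected score per sample

def gaussian_mmd(a, b, bandwidth=1.0):
    """Biased MMD^2 estimate between two (batch, feature) EPS statistics."""
    def k(u, v):
        d2 = torch.cdist(u, v).pow(2)
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()
```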
arXiv Detail & Related papers (2023-05-25T13:14:58Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for annotation when an unlabeled sample is believed to incur a high loss.
Our approach achieves superior performance to state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- Denoising diffusion models for out-of-distribution detection [2.113925122479677]
We exploit the view of denoising diffusion probabilistic models (DDPMs) as denoising autoencoders.
We use DDPMs to reconstruct an input that has been noised to a range of noise levels, and use the resulting multi-dimensional reconstruction error to classify out-of-distribution inputs.
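A minimal sketch of this multi-noise-level reconstruction test, assuming a hypothetical `ddpm_reconstruct(x_t, t)` that returns an estimate of the clean input:

```python
import torch

def reconstruction_profile(ddpm_reconstruct, x0, alpha_bar, timesteps):
    """Per-noise-level reconstruction errors as an OoD feature vector."""
    errors = []
    for t in timesteps:
        a_t = alpha_bar[t]
        eps = torch.randn_like(x0)
        x_t = a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * eps   # q(x_t | x_0)
        x_hat = ddpm_reconstruct(x_t, t)
        errors.append((x_hat - x0).pow(2).flatten(1).mean(dim=1))
    # One error per noise level; an OoD score can be any classifier or
    # simple aggregate over this multi-dimensional profile.
    return torch.stack(errors, dim=1)          # shape: (batch, len(timesteps))
```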
arXiv Detail & Related papers (2022-11-14T20:35:11Z)
- Accelerating Diffusion Models via Early Stop of the Diffusion Process [114.48426684994179]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved impressive performance on various generation tasks.
In practice, DDPMs often need hundreds or even thousands of denoising steps to obtain a high-quality sample.
We propose a principled acceleration strategy, referred to as Early-Stopped DDPM (ES-DDPM), for DDPMs.
arXiv Detail & Related papers (2022-05-25T06:40:09Z)
- Denoising Diffusion Implicit Models [117.03720513930335]
We present denoising diffusion implicit models (DDIMs), a class of iterative implicit probabilistic models with the same training procedure as DDPMs.
DDIMs can produce high quality samples $10\times$ to $50\times$ faster in terms of wall-clock time compared to DDPMs.
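For reference, the deterministic (eta = 0) DDIM update that enables this speed-up can be sketched as follows; `eps_model` is a hypothetical noise-prediction network trained as in DDPM.

```python
import torch

@torch.no_grad()
def ddim_step(eps_model, x_t, t, t_prev, alpha_bar):
    """One deterministic DDIM update from timestep t to t_prev (eta = 0)."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    # Predict x_0 from the current noisy sample, then jump to t_prev.
    x0_pred = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps
```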
arXiv Detail & Related papers (2020-10-06T06:15:51Z)