Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models
- URL: http://arxiv.org/abs/2312.02615v1
- Date: Tue, 5 Dec 2023 09:44:47 GMT
- Title: Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models
- Authors: Sungik Choi, Hankook Lee, Honglak Lee, Moontae Lee
- Abstract summary: We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
- Score: 72.07462371883501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Novelty detection is a fundamental task of machine learning which
aims to detect abnormal (i.e., out-of-distribution (OOD)) samples. Since
diffusion models have recently emerged as the de facto standard generative
framework with impressive generation results, novelty detection via diffusion
models has also gained much attention. Recent methods have mainly utilized the
reconstruction property of in-distribution samples. However, they often
struggle to detect OOD samples that share similar background information with
the in-distribution data. Based on our observation that diffusion models can
project any sample to an in-distribution sample with similar background
information, we propose Projection Regret (PR), an efficient novelty detection
method that mitigates the bias of non-semantic information. Specifically, PR
computes the perceptual distance between the test image and its
diffusion-based projection to detect abnormality. Since the perceptual
distance often fails to capture semantic changes when the background
information is dominant, we cancel out the background bias by comparing it
against recursive projections. Extensive experiments demonstrate that PR
outperforms prior generative-model-based novelty detection methods by a
significant margin.
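A minimal sketch of the PR score as described above, assuming a stochastic `project` routine (noising an image to an intermediate timestep and denoising it back with a diffusion model) and a perceptual metric such as LPIPS; both are hypothetical stand-ins here, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def projection_regret(x, project, perceptual_dist, n_samples=4):
    """Sketch of the Projection Regret score described in the abstract.

    project:         stand-in for the diffusion-based projection, i.e. noising
                     an image to an intermediate timestep and denoising it back.
    perceptual_dist: stand-in for a perceptual metric (e.g. LPIPS).
    """
    score = 0.0
    for _ in range(n_samples):  # projections are stochastic; average a few
        x_proj = project(x)          # in-distribution image, similar background
        x_reproj = project(x_proj)   # recursive projection of the projection
        # The background-dominated part of the distance appears in both terms,
        # so subtracting the recursive term cancels the background bias.
        score += perceptual_dist(x, x_proj) - perceptual_dist(x_proj, x_reproj)
    return score / n_samples

# Toy usage with placeholder components (a real setup would plug in a trained
# diffusion model and LPIPS):
if __name__ == "__main__":
    project = lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1)
    perceptual_dist = lambda a, b: F.mse_loss(a, b)
    x = torch.rand(1, 3, 32, 32)
    print(float(projection_regret(x, project, perceptual_dist)))
```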
Related papers
- Out-of-Distribution Detection with a Single Unconditional Diffusion Model [54.15132801131365]
Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples.
Traditionally, unsupervised methods utilize a deep generative model for OOD detection.
This paper explores whether a single model can perform OOD detection across diverse tasks.
arXiv Detail & Related papers (2024-05-20T08:54:03Z)
- Improved off-policy training of diffusion samplers [93.66433483772055]
We study the problem of training diffusion models to sample from a distribution with an unnormalized density or energy function.
We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods.
Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work.
arXiv Detail & Related papers (2024-02-07T18:51:49Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps rather than the instantaneous input-output relationships assumed in previous settings.
We present Diffusion-TracIn, which incorporates these temporal dynamics, and observe that samples' loss gradient norms are highly dependent on the timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
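As a rough illustration of the re-normalization idea (our reading of the summary; the paper's exact form may differ), a TracIn-style influence score can be divided by the training gradient's norm so that large-gradient timesteps do not dominate retrieval:

```python
import torch

def tracin_influence(grad_train, grad_test, renormalize=True):
    """Sketch of a TracIn-style influence score with a re-normalization.

    grad_train, grad_test: flattened loss gradients w.r.t. model parameters,
    taken at the same checkpoint (and, for diffusion models, at matched
    timesteps, since gradient norms vary strongly with the timestep).
    """
    score = torch.dot(grad_train, grad_test)
    if renormalize:
        # Dividing by the training gradient's norm keeps large-norm timesteps
        # from dominating the retrieved training samples.
        score = score / (grad_train.norm() + 1e-12)
    return score
```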
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Likelihood-based Out-of-Distribution Detection with Denoising Diffusion Probabilistic Models [6.554019613111897]
We show that likelihood-based Out-of-Distribution detection can be extended to diffusion models.
We propose a new likelihood ratio for Out-of-Distribution detection with Deep Denoising Diffusion Models.
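The summary does not specify the ratio's exact form; as a hedged sketch, a generic likelihood-ratio OOD score contrasts a per-sample likelihood bound under the in-distribution DDPM with one under a background model:

```python
def likelihood_ratio_score(x, elbo_in, elbo_bg):
    """Generic likelihood-ratio OOD score (a common scheme; the paper's exact
    ratio may differ). elbo_in / elbo_bg: callables returning a per-sample
    ELBO (log-likelihood lower bound) under the in-distribution DDPM and a
    background/reference model, respectively.
    """
    # In-distribution samples should gain more likelihood from the
    # in-distribution model than from the background model.
    return elbo_in(x) - elbo_bg(x)
```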
arXiv Detail & Related papers (2023-10-26T14:40:30Z)
- Denoising diffusion models for out-of-distribution detection [2.113925122479677]
We exploit the view of denoising diffusion probabilistic models (DDPMs) as denoising autoencoders.
We use DDPMs to reconstruct an input that has been noised to a range of noise levels, and use the resulting multi-dimensional reconstruction error to classify out-of-distribution inputs.
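A minimal sketch of this multi-level reconstruction feature, assuming a hypothetical `noise_and_denoise(x, t)` routine standing in for the DDPM's forward noising and reverse reconstruction:

```python
import torch

def multi_level_recon_errors(x, noise_and_denoise, noise_levels):
    """Sketch of the multi-level reconstruction feature from the summary.

    noise_and_denoise(x, t): stand-in for noising x to level t with the DDPM's
    forward process and reconstructing it with the reverse process.
    Returns one reconstruction error per noise level; the resulting vector can
    be fed to a simple classifier or scoring rule to flag OOD inputs.
    """
    errors = []
    for t in noise_levels:
        x_hat = noise_and_denoise(x, t)
        errors.append((x - x_hat).flatten(1).pow(2).mean(dim=1))
    return torch.stack(errors, dim=1)  # shape: (batch, num_noise_levels)
```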
arXiv Detail & Related papers (2022-11-14T20:35:11Z)
- Anomaly Detection with Test Time Augmentation and Consistency Evaluation [13.709281244889691]
We propose a simple yet effective anomaly detection algorithm named Test Time Augmentation Anomaly Detection (TTA-AD).
We observe that, on a trained network, in-distribution data enjoy more consistent predictions between their original and augmented versions than out-of-distribution data.
Experiments on various high-resolution image benchmark datasets demonstrate that TTA-AD achieves comparable or better detection performance.
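A hedged sketch of such a consistency score, using KL divergence between predictions on original and augmented inputs (the specific augmentations and distance are stand-ins, not necessarily the paper's choices):

```python
import torch
import torch.nn.functional as F

def tta_consistency_score(model, x, augmentations):
    """Sketch of a TTA-AD-style consistency score: in-distribution inputs
    should yield similar predictions for the original and augmented versions.
    `augmentations` is a list of callables (e.g. flips, crops).
    """
    with torch.no_grad():
        p_orig = F.softmax(model(x), dim=1)
        divergences = []
        for aug in augmentations:
            p_aug = F.softmax(model(aug(x)), dim=1)
            # KL divergence between original and augmented predictions;
            # a larger value suggests an out-of-distribution input.
            divergences.append(F.kl_div(p_aug.log(), p_orig, reduction="batchmean"))
    return torch.stack(divergences).mean()
```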
arXiv Detail & Related papers (2022-06-06T04:27:06Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or suffer a dramatic performance drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves near-distribution novelty detection by 6% and surpasses the state of the art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
- Unsupervised Lesion Detection via Image Restoration with a Normative Prior [6.495883501989547]
We propose a probabilistic model that uses a network-based prior as the normative distribution and detects lesions pixel-wise using MAP estimation.
Experiments with gliomas and stroke lesions in brain MRI show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin.
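A generic sketch of MAP-based restoration with a normative prior (the paper's exact probabilistic model may differ), where `log_prior` stands in for a network-based log-density over healthy images:

```python
import torch

def map_restore(y, log_prior, noise_sigma=0.1, steps=200, lr=0.05):
    """Sketch of MAP-based restoration with a normative prior.

    log_prior: stand-in for a network-based log-density over healthy images,
    returning a scalar (summed over the batch).
    """
    x = y.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        # Maximize log p(y|x) + log p(x): the Gaussian likelihood keeps the
        # restoration close to the input; the prior pulls it toward the
        # normative (lesion-free) distribution.
        loss = ((y - x) ** 2).sum() / (2 * noise_sigma ** 2) - log_prior(x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()

# Pixel-wise anomaly map: lesions are where restoration deviates from input.
# anomaly = (y - map_restore(y, log_prior)).abs()
```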
arXiv Detail & Related papers (2020-04-30T18:03:18Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.