OMASGAN: Out-of-Distribution Minimum Anomaly Score GAN for Sample
Generation on the Boundary
- URL: http://arxiv.org/abs/2110.15273v1
- Date: Thu, 28 Oct 2021 16:35:30 GMT
- Title: OMASGAN: Out-of-Distribution Minimum Anomaly Score GAN for Sample
Generation on the Boundary
- Authors: Nikolaos Dionelis
- Abstract summary: Generative models trained in an unsupervised manner may assign high likelihood and low reconstruction loss to Out-of-Distribution (OoD) samples.
OMASGAN generates, in a negative data augmentation manner, anomalous samples on the estimated distribution boundary.
OMASGAN performs retraining by including the abnormal minimum-anomaly-score OoD samples generated on the distribution boundary.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models trained in an unsupervised manner may set high likelihood
and low reconstruction loss to Out-of-Distribution (OoD) samples. This
increases Type II errors and leads to missed anomalies, overall decreasing
Anomaly Detection (AD) performance. In addition, AD models underperform due to
the rarity of anomalies. To address these limitations, we propose the OoD
Minimum Anomaly Score GAN (OMASGAN). OMASGAN generates, in a negative data
augmentation manner, anomalous samples on the estimated distribution boundary.
These samples are then used to refine an AD model, leading to more accurate
estimation of the underlying data distribution including multimodal supports
with disconnected modes. OMASGAN performs retraining by including the abnormal
minimum-anomaly-score OoD samples generated on the distribution boundary in a
self-supervised learning manner. For inference, for AD, we devise a
discriminator trained on negative and positive samples, where the negatives are
generated and the positives are either generated or real. OMASGAN addresses the
rarity of anomalies by generating strong and adversarial OoD samples on the
distribution boundary using only normal class data, effectively addressing mode
collapse. A key characteristic of our model is that it uses any f-divergence
distribution metric in its variational representation, not requiring
invertibility. OMASGAN does not use feature engineering and makes no
assumptions about the data distribution. The evaluation of OMASGAN on image
data using the leave-one-out methodology shows that it achieves an improvement
of at least 0.24 and 0.07 points in AUROC on average on the MNIST and CIFAR-10
datasets, respectively, over other benchmark and state-of-the-art models for
AD.
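The core idea of generating minimum-anomaly-score OoD samples on the distribution boundary can be illustrated with a toy sketch. This is not the paper's GAN: the learned anomaly score is replaced by a Mahalanobis distance under a fitted Gaussian, and the generator by gradient descent from far-away points, stopping at the estimated boundary level set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" data: a 2-D Gaussian blob standing in for the training set.
X = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(2000, 2))

# Anomaly score: squared Mahalanobis distance under a fitted Gaussian.
mu = X.mean(axis=0)
prec = np.linalg.inv(np.cov(X, rowvar=False))

def score(x):
    d = x - mu
    return np.einsum("...i,ij,...j->...", d, prec, d)

# Estimated boundary = level set at a high quantile of in-distribution scores.
tau = np.quantile(score(X), 0.99)

def boundary_samples(n, steps=200, lr=0.05):
    """Start from clearly-OoD points and descend the anomaly score,
    freezing each point once it reaches the tau level set: these are
    minimum-anomaly-score OoD samples on the estimated boundary."""
    u = rng.normal(size=(n, 2))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    x = mu + 20.0 * u                      # start far outside the support
    for _ in range(steps):
        outside = score(x) > tau
        if not outside.any():
            break
        grad = 2.0 * (x - mu) @ prec       # gradient of the score
        x = np.where(outside[:, None], x - lr * grad, x)
    return x

B = boundary_samples(256)
```

In OMASGAN these boundary samples would serve as negative data augmentation for retraining the AD model; here they simply end up with scores just at or below the `tau` threshold, i.e. on the boundary of the estimated support.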
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Leveraging Latent Diffusion Models for Training-Free In-Distribution Data Augmentation for Surface Defect Detection [9.784793380119806]
We introduce DIAG, a training-free Diffusion-based In-distribution Anomaly Generation pipeline for data augmentation.
Unlike conventional image generation techniques, we implement a human-in-the-loop pipeline, where domain experts provide multimodal guidance to the model.
We demonstrate the efficacy and versatility of DIAG with respect to state-of-the-art data augmentation approaches on the challenging KSDD2 dataset.
arXiv Detail & Related papers (2024-07-04T14:28:52Z)
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images from inputs with certain noise added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- COFT-AD: COntrastive Fine-Tuning for Few-Shot Anomaly Detection [19.946344683965425]
We propose a novel methodology to address the challenge of FSAD.
We employ a model pre-trained on a large source dataset to initialize model weights.
We evaluate few-shot anomaly detection on 3 controlled AD tasks and 4 real-world AD tasks to demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-02-29T09:48:19Z)
- Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective [6.845698872290768]
Anomaly detection (AD) is the machine learning task of identifying highly discrepant abnormal samples.
Under the constraints of a distribution shift, the assumption that training samples and test samples are drawn from the same distribution breaks down.
We attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
arXiv Detail & Related papers (2023-12-21T23:20:47Z)
- Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution [67.9215891673174]
We propose score entropy as a novel loss that naturally extends score matching to discrete spaces.
We test our Score Entropy Discrete Diffusion models on standard language modeling tasks.
arXiv Detail & Related papers (2023-10-25T17:59:12Z)
- MSFlow: Multi-Scale Flow-based Framework for Unsupervised Anomaly Detection [124.52227588930543]
Unsupervised anomaly detection (UAD) attracts a lot of research interest and drives widespread applications.
An inconspicuous yet powerful statistics model, the normalizing flows, is appropriate for anomaly detection and localization in an unsupervised fashion.
We propose a novel Multi-Scale Flow-based framework dubbed MSFlow composed of asymmetrical parallel flows followed by a fusion flow.
Our MSFlow achieves a new state-of-the-art with a detection AUROC score of up to 99.7%, a localization AUROC score of 98.8%, and a PRO score of 97.1%.
arXiv Detail & Related papers (2023-08-29T13:38:35Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or suffer a dramatic performance drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves the near-distribution novelty detection by 6% and passes the state-of-the-art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
- UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs [9.496524884855559]
We present an approach to quantifying uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs).
Instead of shielding the entire in-distribution data with GAN generated OoD examples, we shield each class separately with out-of-class examples generated by a conditional GAN.
In particular, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers.
arXiv Detail & Related papers (2022-01-31T14:42:35Z)
- Tail of Distribution GAN (TailGAN): Generative-Adversarial-Network-Based Boundary Formation [0.0]
We create a GAN-based tail formation model for anomaly detection, the Tail of distribution GAN (TailGAN).
Using TailGAN, we leverage GANs for anomaly detection and use maximum entropy regularization.
We evaluate TailGAN for identifying Out-of-Distribution (OoD) data; its performance on MNIST, CIFAR-10, Baggage X-Ray, and OoD data is competitive with methods from the literature.
arXiv Detail & Related papers (2021-07-24T17:29:21Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.