Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm
- URL: http://arxiv.org/abs/2004.07657v4
- Date: Fri, 19 Jun 2020 08:06:34 GMT
- Title: Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm
- Authors: Muhammad Zaigham Zaheer, Jin-ha Lee, Marcella Astrid, Seung-Ik Lee
- Abstract summary: A popular method for anomaly detection is to use the generator of an adversarial network to formulate anomaly scores.
We propose a framework that effectively generates stable results across a wide range of training steps.
Our model achieves a frame-level AUC of 98.1%, surpassing recent state-of-the-art methods.
- Score: 15.898383112569237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A popular method for anomaly detection is to use the generator of an
adversarial network to formulate anomaly scores over reconstruction loss of
input. Due to the rare occurrence of anomalies, optimizing such networks can be
a cumbersome task. Another possible approach is to use both the generator and
the discriminator for anomaly detection. However, owing to the adversarial
training involved, such a model is often unstable, with its performance
fluctuating drastically from one training step to the next. In this study, we
propose a framework that effectively generates stable results across a wide
range of training steps and allows us to use both the generator and the
discriminator of an adversarial model for efficient and robust anomaly
detection. Our approach transforms the fundamental role of a discriminator from
identifying real and fake data to distinguishing between good and bad quality
reconstructions. To this end, we prepare training examples of good-quality
reconstructions using the current generator, whereas poor-quality examples are
obtained from an old state of the same generator. This
way, the discriminator learns to detect subtle distortions that often appear in
reconstructions of the anomaly inputs. Extensive experiments performed on
Caltech-256 and MNIST image datasets for novelty detection show superior
results. Furthermore, on UCSD Ped2 video dataset for anomaly detection, our
model achieves a frame-level AUC of 98.1%, surpassing recent state-of-the-art
methods.
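
The core training paradigm can be summarized in a short sketch. The PyTorch-style code below is a minimal illustration under stated assumptions, not the authors' implementation: the network modules, optimizer, labeling convention (1 for good-quality, 0 for low-quality reconstructions), and the way reconstruction error and discriminator output are fused into an anomaly score are all assumptions for readability. What it shows is the essential idea: the discriminator is retrained on normal data only, with good reconstructions coming from the current generator and poor, pseudo-anomalous reconstructions coming from a frozen old checkpoint of the same generator.

```python
# Minimal sketch (assumption-laden, not the authors' code) of retraining a
# discriminator D to judge reconstruction *quality* instead of real vs. fake.
import copy

import torch
import torch.nn.functional as F


def snapshot_old_generator(G):
    """Freeze an earlier state of the generator as a source of
    low-quality ('pseudo-anomalous') reconstructions."""
    G_old = copy.deepcopy(G)
    for p in G_old.parameters():
        p.requires_grad_(False)
    return G_old


def retrain_discriminator(G, G_old, D, loader, opt_D, device="cpu"):
    """One pass over normal training data only."""
    G.eval()       # current generator: good reconstructions
    G_old.eval()   # old checkpoint: poor reconstructions
    D.train()
    for x in loader:                      # assumes loader yields image batches
        x = x.to(device)
        with torch.no_grad():
            good = G(x)                   # high-quality reconstruction
            bad = G_old(x)                # low-quality reconstruction
        logits_good, logits_bad = D(good), D(bad)
        # Assumed label convention: 1 = good quality, 0 = bad quality.
        loss = F.binary_cross_entropy_with_logits(
            logits_good, torch.ones_like(logits_good)
        ) + F.binary_cross_entropy_with_logits(
            logits_bad, torch.zeros_like(logits_bad)
        )
        opt_D.zero_grad()
        loss.backward()
        opt_D.step()


@torch.no_grad()
def anomaly_score(G, D, x):
    """Score a test batch with both networks (the fusion below is an
    assumption, not the paper's exact formulation)."""
    recon = G(x)
    recon_err = F.mse_loss(recon, x, reduction="none").reshape(x.size(0), -1).mean(1)
    quality = torch.sigmoid(D(recon)).reshape(x.size(0), -1).mean(1)
    return recon_err + (1.0 - quality)
```

At test time both networks contribute to the score: an anomalous input tends to produce a high reconstruction error and a reconstruction that the retrained discriminator rates as low quality, so both terms push the score up.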
Related papers
- ToCoAD: Two-Stage Contrastive Learning for Industrial Anomaly Detection [10.241033980055695]
This paper presents a two-stage training strategy, called ToCoAD.
In the first stage, a discriminative network is trained by using synthetic anomalies in a self-supervised learning manner.
This network is then utilized in the second stage to provide a negative feature guide, aiding in the training of the feature extractor through bootstrap contrastive learning.
arXiv Detail & Related papers (2024-07-01T14:19:36Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PAs generation and reconstruction based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z) - Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN [4.5123329001179275]
This study presents an adversarial method for anomaly detection in real-world applications, leveraging the power of generative adversarial neural networks (GANs).
Previous methods suffer from high variance in class-wise accuracy, which prevents them from being applicable to all types of anomalies.
The proposed method named RCALAD tries to solve this problem by introducing a novel discriminator to the structure, which results in a more efficient training process.
arXiv Detail & Related papers (2023-04-16T13:05:39Z) - Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task since anomalies are rare due to highly non-linear temporal correlations.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns to reconstruct the input signal in a self-supervised manner.
arXiv Detail & Related papers (2022-11-16T21:31:39Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Stabilizing Adversarially Learned One-Class Novelty Detection Using Pseudo Anomalies [22.48845887819345]
Anomaly scores have been formulated using the reconstruction loss of adversarially learned generators and/or the classification loss of discriminators.
Unavailability of anomaly examples in the training data makes optimization of such networks challenging.
We propose a robust anomaly detection framework that overcomes such instability by transforming the fundamental role of the discriminator from identifying real vs. fake data to distinguishing good vs. bad quality reconstructions.
arXiv Detail & Related papers (2022-03-25T15:37:52Z) - TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs).
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one.
arXiv Detail & Related papers (2020-09-16T15:52:04Z) - G2D: Generate to Detect Anomaly [10.977404378308817]
We learn two deep neural networks (generator and discriminator) in a GAN-style setting on merely the normal samples.
In the training phase, when the generator fails to produce normal data, it can be considered an irregularity generator.
We train a binary classifier on the generated anomalous samples along with the normal instances in order to be capable of detecting irregularities.
arXiv Detail & Related papers (2020-06-20T18:02:50Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z) - Regularized Cycle Consistent Generative Adversarial Network for Anomaly Detection [5.457279006229213]
We propose a new Regularized Cycle Consistent Generative Adversarial Network (RCGAN) in which deep neural networks are adversarially trained to better recognize anomalous samples.
Experimental results on both real-world and synthetic data show that our model leads to significant and consistent improvements on previous anomaly detection benchmarks.
arXiv Detail & Related papers (2020-01-18T03:35:05Z)