Anomaly Detection with Adversarially Learned Perturbations of Latent
Space
- URL: http://arxiv.org/abs/2207.01106v1
- Date: Sun, 3 Jul 2022 19:32:00 GMT
- Title: Anomaly Detection with Adversarially Learned Perturbations of Latent
Space
- Authors: Vahid Reza Khazaie and Anthony Wong and John Taylor Jewell and Yalda
Mohsenzadeh
- Abstract summary: Anomaly detection aims to identify samples that do not conform to the distribution of the normal data.
In this work, we have designed an adversarial framework consisting of two competing components, an Adversarial Distorter and an Autoencoder.
The proposed method outperforms the existing state-of-the-art methods in anomaly detection on image and video datasets.
- Score: 9.473040033926264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anomaly detection aims to identify samples that do not conform to
the distribution of the normal data. Because anomalous data are typically
unavailable, training a supervised deep neural network is impractical, so
unsupervised methods are the common approach to this task. Deep autoencoders
have been broadly adopted as the basis of many unsupervised anomaly detection
methods. However, a notable shortcoming of deep autoencoders is that they
provide insufficient representations for anomaly detection because they
generalize well enough to reconstruct outliers. In this work, we design an
adversarial framework consisting of two competing components: an Adversarial
Distorter and an Autoencoder. The Adversarial Distorter is a convolutional
encoder that learns to produce effective perturbations, and the autoencoder is
a deep convolutional neural network that reconstructs images from the perturbed
latent feature space. The two networks are trained with opposing goals: the
Adversarial Distorter applies perturbations to the encoder's latent feature
space to maximize the reconstruction error, while the autoencoder tries to
neutralize the effect of these perturbations in order to minimize it. Applied
to anomaly detection, the proposed method learns semantically richer
representations because of the perturbations imposed on the feature space. The
proposed method outperforms existing state-of-the-art methods for anomaly
detection on image and video datasets.
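
Below is a minimal PyTorch-style sketch of the training dynamic described in the
abstract. It is an illustration under stated assumptions, not the authors'
implementation: the architectures, the additive form of the perturbation, the
optimizer settings, and the test-time score are guesses consistent with the
text. The Adversarial Distorter performs gradient ascent on the reconstruction
error of the perturbed latents, the autoencoder performs gradient descent on
the same error, and at test time the reconstruction error is used as the
anomaly score.

# Illustrative sketch only: architectures, perturbation form, optimizers, and the
# test-time score are assumptions, not the paper's exact implementation.
# Assumes 3-channel images whose height and width are divisible by 4.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an image to a latent feature map."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, 4, 2, 1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs an image from a (possibly perturbed) latent feature map."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class Distorter(nn.Module):
    """Adversarial Distorter: a convolutional encoder that outputs a latent-shaped perturbation."""
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

enc, dec, dist = Encoder(), Decoder(), Distorter()
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_dist = torch.optim.Adam(dist.parameters(), lr=1e-4)

def train_step(x):
    # Distorter update: gradient ascent on the reconstruction error of the
    # perturbed latent features (only the distorter's parameters are stepped).
    opt_dist.zero_grad()
    err = F.mse_loss(dec(enc(x) + dist(x)), x)
    (-err).backward()
    opt_dist.step()

    # Autoencoder update: minimize the same error with the perturbation frozen.
    opt_ae.zero_grad()
    with torch.no_grad():
        p = dist(x)
    loss = F.mse_loss(dec(enc(x) + p), x)
    loss.backward()
    opt_ae.step()
    return loss.item()

def anomaly_score(x):
    # Higher reconstruction error suggests the sample is anomalous.
    with torch.no_grad():
        return F.mse_loss(dec(enc(x)), x, reduction="none").flatten(1).mean(dim=1)

A full training loop would iterate train_step over mini-batches of normal data
only and threshold anomaly_score on held-out data.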
Related papers
- Abnormal Event Detection In Videos Using Deep Embedding [0.0]
Abnormal event detection, or anomaly detection, in surveillance videos remains challenging because of the diversity of possible events.
We propose an unsupervised approach for video anomaly detection that jointly optimizes the objectives of the deep neural network.
arXiv Detail & Related papers (2024-09-15T17:44:51Z)
- GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features [68.14842693208465]
GeneralAD is an anomaly detection framework designed to operate in semantic, near-distribution, and industrial settings.
We propose a novel self-supervised anomaly generation module that applies straightforward operations, such as noise addition and shuffling, to patch features (a minimal illustrative sketch of this kind of feature perturbation appears after this list).
We extensively evaluated our approach on ten datasets, achieving state-of-the-art results on six and on-par performance on the remaining ones.
arXiv Detail & Related papers (2024-07-17T09:27:41Z)
- A Hierarchically Feature Reconstructed Autoencoder for Unsupervised Anomaly Detection [8.512184778338806]
It consists of a well-pretrained encoder that extracts hierarchical feature representations and a decoder that reconstructs these intermediate features from the encoder.
Anomalies are detected when the decoder fails to reconstruct the features well, and the hierarchical feature reconstruction errors are aggregated into an anomaly map for anomaly localization (see the sketch after this list).
Experimental results show that the proposed method outperforms state-of-the-art methods on the MNIST, Fashion-MNIST, CIFAR-10, and MVTec Anomaly Detection datasets.
arXiv Detail & Related papers (2024-05-15T07:20:27Z)
- Targeted collapse regularized autoencoder for anomaly detection: black hole at the center [3.924781781769534]
Autoencoders can generalize beyond the normal class and achieve a small reconstruction error on some anomalous samples.
We propose a remarkably straightforward alternative: instead of adding neural network components, involved computations, and cumbersome training, we complement the reconstruction loss with a computationally light term.
This mitigates the black-box nature of autoencoder-based anomaly detection algorithms and offers an avenue for further investigation of advantages, fail cases, and potential new directions.
arXiv Detail & Related papers (2023-06-22T01:33:47Z)
- Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that integrates reconstruction-based functionality at the core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, and a novel self-supervised objective based on the Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Representation Learning for Content-Sensitive Anomaly Detection in Industrial Networks [0.0]
This thesis proposes a framework to learn spatial-temporal aspects of raw network traffic in an unsupervised and protocol-agnostic manner.
The learned representations are then used to measure their effect on the results of a subsequent anomaly detection step.
arXiv Detail & Related papers (2022-04-20T09:22:41Z)
- Feature Encoding with AutoEncoders for Weakly-supervised Anomaly Detection [46.76220474310698]
Weakly-supervised anomaly detection aims at learning an anomaly detector from a limited amount of labeled data and abundant unlabeled data.
Recent works build deep neural networks for anomaly detection by discriminatively mapping the normal samples and abnormal samples to different regions in the feature space or fitting different distributions.
This paper proposes a novel strategy to transform the input data into a more meaningful representation that could be used for anomaly detection.
arXiv Detail & Related papers (2021-05-22T16:23:05Z)
- ESAD: End-to-end Deep Semi-supervised Anomaly Detection [85.81138474858197]
We propose a new objective function that measures the KL-divergence between normal and anomalous data.
The proposed method significantly outperforms several state-of-the-art methods on multiple benchmark datasets.
arXiv Detail & Related papers (2020-12-09T08:16:35Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
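
For the self-supervised anomaly generation module mentioned in the GeneralAD
entry above, the following is a hypothetical sketch of perturbing patch
features by noise addition and partial shuffling. The function name, tensor
layout, and hyperparameters are assumptions for illustration, not the paper's
code.

# Hypothetical sketch (not the GeneralAD code): create pseudo-anomalous patch
# features by adding Gaussian noise and shuffling a random subset of patches.
import torch

def perturb_patch_features(feats, noise_std=0.1, shuffle_frac=0.25):
    """feats: (B, N, D) patch embeddings, e.g. from a ViT backbone (assumed layout)."""
    out = feats.detach() + noise_std * torch.randn_like(feats)  # noise addition
    b, n, _ = out.shape
    k = max(1, int(shuffle_frac * n))
    for i in range(b):
        idx = torch.randperm(n)[:k]                   # patches to shuffle
        out[i, idx] = out[i, idx[torch.randperm(k)]]  # permute the selected patches
    return out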
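
For the hierarchically feature-reconstructed autoencoder entry above, this is a
hedged sketch of how per-level feature reconstruction errors could be
aggregated into an anomaly map for localization; the shapes, interpolation
mode, and sum aggregation are illustrative assumptions rather than that paper's
implementation.

# Hypothetical sketch: aggregate hierarchical feature reconstruction errors
# into a single anomaly map at image resolution.
import torch
import torch.nn.functional as F

def anomaly_map(feats, recons, out_size=(256, 256)):
    """feats/recons: lists of (B, C_l, H_l, W_l) feature maps, one pair per encoder level."""
    maps = []
    for f, r in zip(feats, recons):
        err = ((f - r) ** 2).mean(dim=1, keepdim=True)  # per-location error, (B, 1, H_l, W_l)
        maps.append(F.interpolate(err, size=out_size, mode="bilinear", align_corners=False))
    return torch.stack(maps, dim=0).sum(dim=0)          # (B, 1, H, W) anomaly map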