Anomaly Detection Based on Multiple-Hypothesis Autoencoder
- URL: http://arxiv.org/abs/2107.08790v1
- Date: Wed, 7 Jul 2021 05:09:03 GMT
- Title: Anomaly Detection Based on Multiple-Hypothesis Autoencoder
- Authors: JoonSung Lee, YeongHyeon Park
- Abstract summary: A model trained with normal data generates a larger restoration error for abnormal data.
The restoration area for the input data of an AE is limited in the latent space.
We propose a Multiple-Hypothesis Autoencoder (MH-AE) model composed of several decoders.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Autoencoder (AE)-based models have been widely used in the field
of anomaly detection. A model trained with normal data generates a larger
restoration error for abnormal data, so whether data is abnormal is determined
by observing the restoration error. Obtaining abnormal data in the industrial
field takes a lot of cost and time; therefore, the model is trained only on
normal data and detects abnormal data in the inference phase. However, the
restoration area for the input data of an AE is limited in the latent space. To
solve this problem, we propose a Multiple-Hypothesis Autoencoder (MH-AE) model
composed of several decoders. The MH-AE model increases the restoration area
through contention between the decoders. The proposed method improves anomaly
detection performance compared to the traditional AE on various input datasets.
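The abstract describes the architecture only at a high level, so the following is a minimal PyTorch-style sketch of the multiple-decoder idea, assuming one shared encoder, several independent decoders, a winner-takes-all training rule to create contention between decoders, and an anomaly score taken as the smallest reconstruction error over all hypotheses. Layer sizes and the exact training rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MHAE(nn.Module):
    """Sketch of a multiple-hypothesis autoencoder: one encoder, several decoders."""
    def __init__(self, in_dim=784, latent_dim=32, num_decoders=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Each decoder produces its own reconstruction "hypothesis".
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, in_dim),
            )
            for _ in range(num_decoders)
        ])

    def forward(self, x):
        z = self.encoder(x)
        return torch.stack([dec(z) for dec in self.decoders])   # (K, batch, in_dim)

def training_step(model, optimizer, x):
    """Winner-takes-all update on normal data: only the decoder with the lowest
    reconstruction error on each sample receives the gradient, which creates
    contention between the decoders."""
    recons = model(x)
    errors = ((recons - x.unsqueeze(0)) ** 2).mean(dim=-1)      # (K, batch)
    loss = errors.min(dim=0).values.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def anomaly_score(model, x):
    """Per-sample score: the smallest reconstruction error over all decoders.
    A sample whose best hypothesis still reconstructs poorly is flagged as anomalous."""
    with torch.no_grad():
        recons = model(x)
        errors = ((recons - x.unsqueeze(0)) ** 2).mean(dim=-1)
        return errors.min(dim=0).values                          # (batch,)
```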
Related papers
- Constricting Normal Latent Space for Anomaly Detection with Normal-only Training Data [11.237938539765825]
An Autoencoder (AE) is typically trained to reconstruct the data.
During test time, since the AE is not trained using real anomalies, it is expected to reconstruct the anomalous data poorly.
We propose to limit the reconstruction capability of the AE by introducing a novel latent constriction loss.
arXiv Detail & Related papers (2024-03-24T19:22:15Z)
- AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacture.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PA generation and reconstruction-based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo-anomaly samples are synthesized from normal data only, by embedding random mask tokens without extra data processing (see the sketch after this entry).
We also propose a normalcy consistency training strategy that encourages the AE to better learn regular knowledge from normal data and the corresponding pseudo-anomaly data.
arXiv Detail & Related papers (2023-03-09T08:33:38Z)
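A rough sketch of the mask-token synthesis summarized in the entry above: pseudo-anomalous samples are created from normal samples by overwriting a random subset of patch tokens with a learnable mask token. The tensor shapes, masking ratio, and the helper name make_pseudo_anomaly are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def make_pseudo_anomaly(tokens, mask_token, ratio=0.3):
    """Replace a random subset of patch tokens with a mask token.

    tokens:     (batch, num_tokens, dim) patch embeddings of a normal sample
    mask_token: (dim,) learnable embedding used as the filler for masked positions
    """
    b, n, d = tokens.shape
    keep = (torch.rand(b, n, device=tokens.device) > ratio).unsqueeze(-1)  # (b, n, 1)
    return torch.where(keep, tokens, mask_token.expand(b, n, d))

# Pseudo anomalies are derived from normal data only, with no extra data processing.
mask_token = nn.Parameter(torch.zeros(128))
normal_tokens = torch.randn(4, 196, 128)
pseudo_tokens = make_pseudo_anomaly(normal_tokens, mask_token)
```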
- Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task, since anomalies are rare and temporal correlations are highly non-linear.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns to reconstruct the input signal in a self-supervised manner.
arXiv Detail & Related papers (2022-11-16T21:31:39Z)
- Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection [16.436293069942312]
Autoencoders (AEs) often start reconstructing anomalies as well, which degrades their anomaly detection performance.
We propose a temporal pseudo-anomaly synthesizer that generates fake anomalies using only normal data.
An AE is then trained to maximize the reconstruction loss on pseudo anomalies while minimizing this loss on normal data, as sketched after this entry.
arXiv Detail & Related papers (2021-10-19T07:08:44Z)
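A rough sketch of the objective summarized in the entry above, assuming a hinge formulation so that "maximizing" the loss on pseudo anomalies stays bounded; the weight lam, the margin, and the use of MSE are assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def pseudo_anomaly_objective(ae, normal_x, pseudo_x, margin=1.0, lam=0.1):
    """Minimize reconstruction error on normal data while pushing the error on
    synthesized pseudo anomalies up to at least `margin`."""
    normal_err = F.mse_loss(ae(normal_x), normal_x)
    pseudo_err = F.mse_loss(ae(pseudo_x), pseudo_x)
    return normal_err + lam * torch.clamp(margin - pseudo_err, min=0.0)
```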
- Learning Not to Reconstruct Anomalies [14.632592282260363]
An Autoencoder (AE) is trained to reconstruct the input with a training set consisting only of normal data.
The AE is then expected to reconstruct the normal data well while reconstructing the anomalous data poorly.
We propose a novel methodology to train AEs with the objective of reconstructing only normal data, regardless of the input (a sketch follows this entry).
arXiv Detail & Related papers (2021-10-19T05:22:38Z)
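A rough sketch of the "reconstruct only normal data, regardless of the input" idea summarized in the entry above: when the input is a pseudo anomaly derived from a normal sample, the reconstruction target remains the original normal sample. The use of MSE and the function name are illustrative assumptions.

```python
import torch.nn.functional as F

def reconstruct_only_normal_loss(ae, normal_x, pseudo_x):
    """Regardless of the input, the reconstruction target is the normal sample."""
    loss_normal = F.mse_loss(ae(normal_x), normal_x)   # normal input -> normal target
    loss_pseudo = F.mse_loss(ae(pseudo_x), normal_x)   # pseudo input -> normal target
    return loss_normal + loss_pseudo
```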
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It uses a regular LSTM-based auto-encoder as its baseline but with several decoders, each receiving data from a specific flight phase (see the sketch after this entry).
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
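A rough sketch of the multiple-decoder routing summarized in the entry above, assuming a shared LSTM encoder and one decoder per flight phase, with each sample reconstructed by the decoder assigned to its phase. Layer sizes, the integer phase index, and the class name are illustrative assumptions, not the DAE authors' implementation.

```python
import torch
import torch.nn as nn

class PhaseRoutedAE(nn.Module):
    """Shared LSTM encoder with one decoder per flight phase."""
    def __init__(self, in_dim=8, hidden=64, num_phases=3):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.decoders = nn.ModuleList(
            [nn.LSTM(hidden, in_dim, batch_first=True) for _ in range(num_phases)]
        )

    def forward(self, x, phase):
        """x: (batch, time, in_dim) multivariate time series; phase: flight-phase index."""
        h, _ = self.encoder(x)                 # (batch, time, hidden)
        recon, _ = self.decoders[phase](h)     # phase-specific decoder
        return recon

# Each batch holds data from a single flight phase and is routed to its decoder.
model = PhaseRoutedAE()
climb_batch = torch.randn(16, 50, 8)
recon = model(climb_batch, phase=1)
score = ((recon - climb_batch) ** 2).mean(dim=(1, 2))   # per-sequence anomaly score
```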
- Discriminative-Generative Dual Memory Video Anomaly Detection [81.09977516403411]
Recently, researchers have tried to use a few anomalies for video anomaly detection (VAD), instead of only normal data, during the training process.
We propose a DiscRiminative-gEnerative duAl Memory (DREAM) anomaly detection model to take advantage of a few anomalies and solve data imbalance.
arXiv Detail & Related papers (2021-04-29T15:49:01Z)
- Anomaly Detection with SDAE [2.9447568514391067]
Simple, Deep, and Supervised Deep Autoencoders were trained and compared for anomaly detection on the ASHRAE building energy dataset.
The Deep Autoencoder performs the best; however, the Supervised Deep Autoencoder outperforms the other models in the total number of anomalies detected.
arXiv Detail & Related papers (2020-04-09T07:22:08Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.