Constricting Normal Latent Space for Anomaly Detection with Normal-only Training Data
- URL: http://arxiv.org/abs/2403.16270v1
- Date: Sun, 24 Mar 2024 19:22:15 GMT
- Title: Constricting Normal Latent Space for Anomaly Detection with Normal-only Training Data
- Authors: Marcella Astrid, Muhammad Zaigham Zaheer, Seung-Ik Lee
- Abstract summary: An autoencoder (AE) is typically trained to reconstruct the data.
At test time, since the AE is not trained using real anomalies, it is expected to poorly reconstruct anomalous data.
We propose to limit the reconstruction capability of the AE by introducing a novel latent constriction loss.
- Score: 11.237938539765825
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In order to devise an anomaly detection model using only normal training data, an autoencoder (AE) is typically trained to reconstruct the data. As a result, the AE can extract normal representations in its latent space. At test time, since the AE is not trained using real anomalies, it is expected to poorly reconstruct anomalous data. However, several researchers have observed that this is not the case. In this work, we propose to limit the reconstruction capability of the AE by introducing a novel latent constriction loss, which is added to the existing reconstruction loss. With our method, no extra computational cost is added to the AE during test time. Evaluations on three video anomaly detection benchmark datasets, i.e., Ped2, Avenue, and ShanghaiTech, demonstrate the effectiveness of our method in limiting the reconstruction capability of the AE, which leads to a better anomaly detection model.
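The exact form of the constriction loss is not given in this summary. As a hedged illustration only, the following PyTorch sketch adds a hypothetical norm-ball penalty on the latent code on top of the usual reconstruction loss; the radius `delta`, weight `lam`, and layer sizes are all assumptions, not the paper's values:

```python
import torch.nn as nn
import torch.nn.functional as F

class AE(nn.Module):
    def __init__(self, in_dim=1024, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def training_loss(model, x, lam=0.1, delta=1.0):
    recon, z = model(x)
    rec_loss = F.mse_loss(recon, x)
    # Hypothetical constriction term: penalize latents outside a ball of
    # radius `delta`, shrinking the region of latent space the AE decodes from.
    constriction = F.relu(z.norm(dim=1) - delta).pow(2).mean()
    return rec_loss + lam * constriction
```

Because the extra term only shapes the latent space during training, scoring at test time remains a plain reconstruction-error pass, consistent with the abstract's claim of no added test-time cost.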
Related papers
- Exploiting Autoencoder's Weakness to Generate Pseudo Anomalies [17.342474659784823]
A typical approach to anomaly detection is to train an autoencoder (AE) with normal data only so that it learns the patterns or representations of the normal data.
We propose creating pseudo anomalies from learned adaptive noise by exploiting the weakness of AE, i.e., reconstructing anomalies too well.
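As a hedged sketch of this idea (not the paper's implementation; the generator architecture, shapes, and loss pairing are assumptions), a small noise network can perturb normal samples into pseudo anomalies that the AE is trained to "undo":

```python
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical shapes; `ae` stands in for any autoencoder.
ae = nn.Sequential(nn.Linear(1024, 64), nn.ReLU(), nn.Linear(64, 1024))
noise_gen = nn.Sequential(nn.Linear(1024, 1024), nn.Tanh())  # learned adaptive noise

def ae_step(x):
    # Pseudo anomaly: normal input plus learned adaptive noise.
    x_pseudo = x + noise_gen(x).detach()
    # The AE must recover the *normal* input even from the pseudo anomaly,
    # countering its tendency to reconstruct anomalies too well.
    return F.mse_loss(ae(x), x) + F.mse_loss(ae(x_pseudo), x)
```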
arXiv Detail & Related papers (2024-05-09T16:22:24Z)
- DMAD: Dual Memory Bank for Real-World Anomaly Detection [90.97573828481832]
We propose a new framework named Dual Memory bank enhanced representation learning for Anomaly Detection (DMAD).
DMAD employs a dual memory bank to calculate feature distance and feature attention between normal and abnormal patterns.
We evaluate DMAD on the MVTec-AD and VisA datasets.
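A minimal sketch of the dual-memory-bank idea (the bank contents, sizes, and score combination are assumptions; DMAD's actual attention mechanism is not reproduced here):

```python
import torch
import torch.nn.functional as F

# Hypothetical banks of L2-normalized features (rows are memory items).
normal_bank = F.normalize(torch.randn(500, 128), dim=1)
abnormal_bank = F.normalize(torch.randn(100, 128), dim=1)

def dual_bank_score(feat):
    feat = F.normalize(feat, dim=1)
    d_norm = torch.cdist(feat, normal_bank).min(dim=1).values   # nearest normal pattern
    d_abn = torch.cdist(feat, abnormal_bank).min(dim=1).values  # nearest abnormal pattern
    # Far from normal memory and close to abnormal memory -> more anomalous.
    return d_norm - d_abn
```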
arXiv Detail & Related papers (2024-03-19T02:16:32Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation : A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PAs generation and reconstruction based methods under the OCC setting.
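A hedged sketch of the masking step (the patch size and the noise fill are assumptions; the actual inpainting network that completes the pseudo anomaly is not shown):

```python
import torch

def make_spatiotemporal_pseudo_anomaly(clip, patch=32):
    """clip: (T, C, H, W) normal video clip. Masks a random spatial region
    across time; an inpainting model would then fill it to form the pseudo
    anomaly. Random noise is used here as a stand-in fill."""
    T, C, H, W = clip.shape
    y = torch.randint(0, H - patch + 1, (1,)).item()
    x = torch.randint(0, W - patch + 1, (1,)).item()
    out = clip.clone()
    out[:, :, y:y+patch, x:x+patch] = torch.randn(T, C, patch, patch)
    return out
```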
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- LARA: A Light and Anti-overfitting Retraining Approach for Unsupervised Time Series Anomaly Detection [49.52429991848581]
We propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder based time series anomaly detection methods (VAEs).
This work makes three novel contributions: 1) the retraining process is formulated as a convex problem, so it converges quickly and prevents overfitting; 2) a ruminate block is designed that leverages historical data without needing to store it; and 3) it is proven mathematically that, when fine-tuning the latent vector and the reconstructed data, linear formulations achieve the least adjusting errors between the ground truths and the fine-tuned values.
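Contribution 3) suggests a plain least-squares reading. The sketch below (the shapes and the use of a single linear map are assumptions, not LARA's actual formulation) shows why such a linear adjustment is a convex problem with a closed-form optimum:

```python
import torch

def linear_finetune(Z, X_target):
    """Hypothetical reading of LARA's linear fine-tuning: find a linear map W
    (a convex least-squares problem, hence fast and well-behaved) that adjusts
    stale latent vectors Z (N x d) toward targets X_target (N x k)."""
    W = torch.linalg.lstsq(Z, X_target).solution  # minimizes ||Z @ W - X_target||^2
    return Z @ W
```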
arXiv Detail & Related papers (2023-10-09T12:36:16Z)
- Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo anomaly samples are synthesized from only normal data by embedding random mask tokens without extra data processing.
We also propose a normalcy consistency training strategy that encourages the AEs to better learn the regular knowledge from normal and corresponding pseudo anomaly data.
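A minimal sketch of synthesizing pseudo anomalies by embedding random mask tokens (the masking ratio and tensor shapes are assumptions):

```python
import torch

def embed_mask_tokens(tokens, mask_token, ratio=0.5):
    """tokens: (N, L, D) patch embeddings of *normal* frames.
    Replacing a random subset with a mask token synthesizes pseudo-anomalous
    inputs without any extra data processing."""
    N, L, D = tokens.shape
    keep = torch.rand(N, L, device=tokens.device) > ratio
    return torch.where(keep.unsqueeze(-1), tokens, mask_token.expand(N, L, D))
```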
arXiv Detail & Related papers (2023-03-09T08:33:38Z)
- A Subspace Projection Approach to Autoencoder-based Anomaly Detection [45.37038692092683]
An autoencoder (AE) is a neural network architecture trained to reconstruct its input at its output.
We propose a novel framework of AE-based anomaly detection, coined HFR-AE, by projecting new inputs into a subspace.
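HFR-AE's exact subspace construction is not given in this summary. As an illustration only, a PCA-style subspace fitted on representations of normal training data can play the same role (the choice of SVD and of k are assumptions):

```python
import torch

def fit_subspace(H_normal, k=16):
    # H_normal: (N, D) representations of normal training data.
    _, _, Vh = torch.linalg.svd(H_normal - H_normal.mean(0), full_matrices=False)
    return Vh[:k]  # top-k principal directions (k x D)

def subspace_score(x, recon, V):
    # Score the reconstruction residual inside the subspace, where the
    # normal/anomalous error gap is assumed to be easier to separate.
    return ((x - recon) @ V.T).pow(2).sum(dim=1)
```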
arXiv Detail & Related papers (2023-02-15T13:23:09Z)
- Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task: anomalies are rare, and temporal correlations are highly non-linear.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns to reconstruct the input signal in a self-supervised manner.
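One common construction behind hyperbolic uncertainty, shown here as a hedged sketch only (HypAD's actual estimator may differ): map an embedding onto the Poincaré ball and read certainty off its radius.

```python
import torch

def to_poincare_ball(v, eps=1e-6):
    # Exponential map at the origin of the Poincaré ball (curvature -1).
    n = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(n) * v / n

def certainty(v):
    # Embeddings near the ball's boundary (norm -> 1) are treated as
    # confident; embeddings near the origin as uncertain.
    return to_poincare_ball(v).norm(dim=-1)
```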
arXiv Detail & Related papers (2022-11-16T21:31:39Z)
- Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection [16.436293069942312]
Autoencoders (AEs) often start reconstructing anomalies as well, which degrades their anomaly detection performance.
We propose a temporal pseudo anomaly synthesizer that generates fake anomalies using only normal data.
An AE is then trained to maximize the reconstruction loss on pseudo anomalies while minimizing this loss on normal data.
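A minimal sketch of this two-sided objective (the margin that bounds the maximization term and the weight `lam` are assumptions, not the paper's values):

```python
import torch.nn.functional as F

def objective(ae, x_normal, x_pseudo, margin=1.0, lam=0.5):
    """Minimize reconstruction error on normal clips while pushing it up
    (capped by a margin, to keep the loss bounded) on synthesized pseudo
    anomalies."""
    l_norm = F.mse_loss(ae(x_normal), x_normal)
    l_pseudo = F.mse_loss(ae(x_pseudo), x_pseudo)
    return l_norm + lam * F.relu(margin - l_pseudo)
```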
arXiv Detail & Related papers (2021-10-19T07:08:44Z)
- Learning Not to Reconstruct Anomalies [14.632592282260363]
An autoencoder (AE) is trained to reconstruct the input using a training set consisting only of normal data.
The AE is then expected to reconstruct normal data well while reconstructing anomalous data poorly.
We propose a novel methodology to train AEs with the objective of reconstructing only normal data, regardless of the input.
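As a hedged sketch (the pseudo-anomaly synthesizer `to_pseudo` is a hypothetical stand-in, e.g. frame skipping), the training target is always the normal data, whatever the input:

```python
import torch.nn.functional as F

def not_reconstruct_anomalies_loss(ae, x_normal, to_pseudo):
    """`to_pseudo` turns a normal clip into a pseudo-anomalous one. Because
    the target is the normal clip in both terms, the AE learns to output
    normal data regardless of the input."""
    x_pseudo = to_pseudo(x_normal)
    return F.mse_loss(ae(x_normal), x_normal) + F.mse_loss(ae(x_pseudo), x_normal)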
arXiv Detail & Related papers (2021-10-19T05:22:38Z)
- ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection [6.992807725367106]
Autoencoders (AE) have been widely employed to approach the novelty detection problem.
We propose a novel AE that can learn more semantically meaningful features.
We show that despite using a much simpler architecture, the proposed AE outperforms or is competitive to state-of-the-art on three benchmark datasets.
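A generic adversarially robust training step, given as an approximation only (ARAE's actual crafting of adversarial examples is more specific than this FGSM-style sketch; `eps` is an assumption):

```python
import torch
import torch.nn.functional as F

def arae_style_step(ae, x, eps=0.05):
    """Perturb the input to maximally hurt reconstruction, then train the
    AE to still recover the clean x from the perturbed input."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.mse_loss(ae(x_adv), x)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()  # FGSM step
    return F.mse_loss(ae(x_adv), x)           # robust reconstruction loss
```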
arXiv Detail & Related papers (2020-03-12T09:06:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.