Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection
- URL: http://arxiv.org/abs/2110.09768v1
- Date: Tue, 19 Oct 2021 07:08:44 GMT
- Title: Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection
- Authors: Marcella Astrid, Muhammad Zaigham Zaheer, Seung-Ik Lee
- Abstract summary: Autoencoders (AEs) often start reconstructing anomalies as well, which degrades their anomaly detection performance.
We propose a temporal pseudo anomaly synthesizer that generates fake-anomalies using only normal data.
An AE is then trained to maximize the reconstruction loss on pseudo anomalies while minimizing this loss on normal data.
- Score: 16.436293069942312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the limited availability of anomaly examples, video anomaly detection
is often seen as a one-class classification (OCC) problem. A popular way to
tackle this problem is by utilizing an autoencoder (AE) trained only on normal
data. At test time, the AE is then expected to reconstruct the normal input
well while reconstructing the anomalies poorly. However, several studies show
that, even when trained only on normal data, AEs can often start reconstructing
anomalies as well, which degrades their anomaly detection performance. To
mitigate this, we propose a temporal pseudo anomaly synthesizer that generates
fake-anomalies using only normal data. An AE is then trained to maximize the
reconstruction loss on pseudo anomalies while minimizing this loss on normal
data. This way, the AE is encouraged to produce distinguishable reconstructions
for normal and anomalous frames. Extensive experiments and analysis on three
challenging video anomaly datasets demonstrate the effectiveness of our
approach in improving basic AEs, achieving superior performance over several
existing state-of-the-art models.
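As a rough illustration of the approach described in the abstract, the sketch below generates a temporal pseudo anomaly by skipping frames of a normal clip (simulating abnormally fast motion) and combines the two reconstruction terms into a single training objective. The stride value, the tiling used to keep the clip length, and the loss weight `lam` are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def synthesize_temporal_pseudo_anomaly(clip: np.ndarray, stride: int = 3) -> np.ndarray:
    """Create a pseudo-anomalous clip from a normal one by skipping frames.

    Sampling every `stride`-th frame simulates abnormally fast motion using
    only normal data. `clip` has shape (T, H, W, C); the output keeps the
    original temporal length by tiling the strided frames (illustrative).
    """
    T = clip.shape[0]
    skipped = clip[::stride]                      # frame skipping = fast motion
    reps = int(np.ceil(T / skipped.shape[0]))
    return np.concatenate([skipped] * reps, axis=0)[:T]

def training_loss(recon_normal, normal, recon_pseudo, pseudo, lam=0.1):
    """Minimize reconstruction error on normal clips while maximizing it on
    pseudo-anomalous clips (sign-flipped term, weighted by `lam`)."""
    l_normal = np.mean((recon_normal - normal) ** 2)
    l_pseudo = np.mean((recon_pseudo - pseudo) ** 2)
    return l_normal - lam * l_pseudo
```

Training on this combined objective pushes the AE toward reconstructions that are accurate for normal frames but deliberately poor for pseudo-anomalous ones, which is what makes the reconstruction error usable as an anomaly score at test time.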
Related papers
- Exploiting Autoencoder's Weakness to Generate Pseudo Anomalies [17.342474659784823]
A typical approach to anomaly detection is to train an autoencoder (AE) with normal data only so that it learns the patterns or representations of the normal data.
We propose creating pseudo anomalies from learned adaptive noise by exploiting the weakness of AE, i.e., reconstructing anomalies too well.
arXiv Detail & Related papers (2024-05-09T16:22:24Z)
- Constricting Normal Latent Space for Anomaly Detection with Normal-only Training Data [11.237938539765825]
Autoencoder (AE) is typically trained to reconstruct the data.
During test time, since AE is not trained using real anomalies, it is expected to poorly reconstruct the anomalous data.
We propose to limit the reconstruction capability of AE by introducing a novel latent constriction loss.
arXiv Detail & Related papers (2024-03-24T19:22:15Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal PAs by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PAs generation and reconstruction based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z)
- PseudoBound: Limiting the anomaly reconstruction capability of one-class classifiers using pseudo anomalies [13.14903445595385]
Autoencoder (AE) is trained to reconstruct the normal-only training data with the expectation that, at test time, it poorly reconstructs the anomalous data.
We propose to limit the anomaly reconstruction capability of AEs by incorporating pseudo anomalies during the training of an AE.
We demonstrate the effectiveness of our proposed pseudo anomaly based training approach against several existing state-of-the-art (SOTA) methods on three benchmark video anomaly datasets.
arXiv Detail & Related papers (2023-03-19T16:19:13Z)
- Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo anomaly samples are synthesized from only normal data by embedding random mask tokens without extra data processing.
We also propose a normalcy consistency training strategy that encourages the AEs to better learn the regular knowledge from normal and corresponding pseudo anomaly data.
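The mask-token synthesis summarized above could be sketched as follows: a random subset of normal token embeddings is replaced with a mask token, yielding a pseudo anomaly without any extra data. The token dimensionality, the mask ratio, and the use of a fixed zero mask token (rather than a learnable one) are assumptions for illustration.

```python
import numpy as np

def embed_mask_tokens(tokens: np.ndarray, mask_ratio: float = 0.3,
                      mask_token=None, rng=None) -> np.ndarray:
    """Synthesize a pseudo anomaly from normal token embeddings by replacing
    a random subset of tokens with a mask token.

    `tokens` has shape (N, D). The mask token defaults to zeros here; in a
    masked-autoencoder setup it would typically be a learnable vector.
    """
    rng = rng or np.random.default_rng(0)
    n, d = tokens.shape
    if mask_token is None:
        mask_token = np.zeros(d)
    out = tokens.copy()                     # keep the normal sample intact
    n_mask = max(1, int(n * mask_ratio))
    idx = rng.choice(n, size=n_mask, replace=False)
    out[idx] = mask_token                   # embed mask tokens at random positions
    return out
```

Because the masked positions break the regular structure of the normal embedding, the AE's reconstruction of such a clip can serve as the pseudo-anomalous counterpart in training.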
arXiv Detail & Related papers (2023-03-09T08:33:38Z)
- Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task since anomalies are rare due to highly non-linear temporal correlations.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns to reconstruct the input signal in a self-supervised manner.
arXiv Detail & Related papers (2022-11-16T21:31:39Z)
- Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
Those anomalies seen during training often do not illustrate every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z)
- Learning Not to Reconstruct Anomalies [14.632592282260363]
Autoencoder (AE) is trained to reconstruct the input with a training set consisting only of normal data.
The AE is then expected to reconstruct the normal data well while reconstructing the anomalous data poorly.
We propose a novel methodology to train AEs with the objective of reconstructing only normal data, regardless of the input.
arXiv Detail & Related papers (2021-10-19T05:22:38Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.