Anomaly Detection with Prototype-Guided Discriminative Latent Embeddings
- URL: http://arxiv.org/abs/2104.14945v1
- Date: Fri, 30 Apr 2021 12:16:52 GMT
- Title: Anomaly Detection with Prototype-Guided Discriminative Latent Embeddings
- Authors: Yuandu Lai, Yahong Han
- Abstract summary: We present a novel approach for anomaly detection, which utilizes discriminative prototypes of normal data to reconstruct video frames.
In this way, the model will favor the reconstruction of normal events and distort the reconstruction of abnormal events.
We evaluate the effectiveness of our method on three benchmark datasets, and the experimental results demonstrate that the proposed method outperforms the state of the art.
- Score: 29.93983580779689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent efforts towards video anomaly detection try to learn a deep
autoencoder to describe normal event patterns with small reconstruction errors.
Video inputs with large reconstruction errors are then regarded as anomalies at
test time. However, these methods sometimes reconstruct abnormal inputs well
because of the powerful generalization ability of deep autoencoders. To
address this problem, we present a novel approach for anomaly detection, which
utilizes discriminative prototypes of normal data to reconstruct video frames.
In this way, the model will favor the reconstruction of normal events and
distort the reconstruction of abnormal events. Specifically, we use a
prototype-guided memory module to perform discriminative latent embedding. We
introduce a new discriminative criterion for the memory module, along with a
corresponding loss function, which encourages memory items to record
representative embeddings of normal data, i.e., prototypes. In addition, we design a
novel two-branch autoencoder, which is composed of a future frame prediction
network and an RGB difference generation network that share the same encoder.
The stacked RGB difference contains motion information just like optical flow,
so our model can learn temporal regularity. We evaluate the effectiveness of
our method on three benchmark datasets, and the experimental results demonstrate
that the proposed method outperforms the state of the art.
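To make the ideas in the abstract concrete, below is a minimal, PyTorch-style sketch of a prototype-guided memory read, a stacked RGB difference input, and a reconstruction-error anomaly score. All names (MemoryModule, stacked_rgb_difference, anomaly_score, the number of memory items, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed PyTorch) of the concepts described in the abstract;
# module names, sizes, and the exact memory read are assumptions, not the
# authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    """Holds M learnable prototype items; each query latent is re-expressed as a
    weighted sum of prototypes, biasing reconstructions toward normal patterns."""
    def __init__(self, num_items: int = 10, dim: int = 128):
        super().__init__()
        self.items = nn.Parameter(torch.randn(num_items, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (B, dim) latent queries from the shared encoder
        attn = F.softmax(z @ self.items.t(), dim=-1)   # (B, M) addressing weights
        return attn @ self.items                       # (B, dim) prototype-guided latents

def stacked_rgb_difference(clip: torch.Tensor) -> torch.Tensor:
    # clip: (B, T, C, H, W) consecutive frames; frame-to-frame differences carry
    # motion information, playing a role similar to optical flow.
    return clip[:, 1:] - clip[:, :-1]                  # (B, T-1, C, H, W)

def anomaly_score(pred_frame: torch.Tensor, true_frame: torch.Tensor) -> torch.Tensor:
    # Larger prediction/reconstruction error -> higher anomaly score.
    err = F.mse_loss(pred_frame, true_frame, reduction="none")
    return err.flatten(1).mean(dim=1)                  # (B,) per-frame scores
```

At test time, frames whose score exceeds a chosen threshold would be flagged as anomalous.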
Related papers
- A brief introduction to a framework named Multilevel Guidance-Exploration Network [23.794585834150983]
We propose a novel framework called the Multilevel Guidance-Exploration Network(MGENet), which detects anomalies through the difference in high-level representation between the Guidance and Exploration network.
Specifically, we first utilize the pre-trained Normalizing Flow that takes skeletal keypoints as input to guide an RGB encoder, which takes unmasked RGB frames as input, to explore motion latent features.
Our proposed method achieves state-of-the-art performance on ShanghaiTech and UBnormal datasets.
arXiv Detail & Related papers (2023-12-07T08:20:07Z)
- Multi-level Memory-augmented Appearance-Motion Correspondence Framework for Video Anomaly Detection [1.9511777443446219]
We propose a multi-level memory-augmented appearance-motion correspondence framework.
The latent correspondence between appearance and motion is explored via appearance-motion semantics alignment and semantics replacement training.
Our framework outperforms state-of-the-art methods, achieving AUCs of 99.6%, 93.8%, and 76.3% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech datasets, respectively.
arXiv Detail & Related papers (2023-03-09T08:43:06Z)
- Making Reconstruction-based Method Great Again for Video Anomaly Detection [64.19326819088563]
Anomaly detection in videos is a significant yet challenging problem.
Existing reconstruction-based methods rely on old-fashioned convolutional autoencoders.
We propose a new autoencoder model for enhanced consecutive frame reconstruction.
arXiv Detail & Related papers (2023-01-28T01:57:57Z)
- Visual Anomaly Detection Via Partition Memory Bank Module and Error Estimation [28.100204573591505]
Reconstruction methods based on memory modules for visual anomaly detection attempt to narrow the reconstruction error for normal samples while enlarging it for anomalous samples.
This work proposes a new unsupervised visual anomaly detection method to jointly learn effective normal features and eliminate unfavorable reconstruction errors.
To evaluate the effectiveness of the proposed method for anomaly detection and localization, extensive experiments are conducted on three widely-used anomaly detection datasets.
arXiv Detail & Related papers (2022-09-26T06:15:47Z)
- Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Object-centric and memory-guided normality reconstruction for video anomaly detection [56.64792194894702]
This paper addresses the anomaly detection problem for video surveillance.
Due to the inherent rarity and heterogeneity of abnormal events, the problem is addressed via a normality modeling strategy.
Our model learns object-centric normal patterns without seeing anomalous samples during training.
arXiv Detail & Related papers (2022-03-07T19:28:39Z)
- Discriminative-Generative Dual Memory Video Anomaly Detection [81.09977516403411]
Recent work has tried to train video anomaly detection (VAD) models with a few anomalies rather than only normal data.
We propose a DiscRiminative-gEnerative duAl Memory (DREAM) anomaly detection model to take advantage of a few anomalies and solve data imbalance.
arXiv Detail & Related papers (2021-04-29T15:49:01Z)
- Improving unsupervised anomaly localization by applying multi-scale memories to autoencoders [14.075973859711567]
The proposed MMAE updates memory slots at the corresponding resolution scale as prototype features during unsupervised learning.
For anomaly detection, anomalies are removed by replacing the original encoded image features at each scale with the most relevant prototype features.
Experimental results show that MMAE successfully removes anomalies at different scales and performs favorably on several datasets.
arXiv Detail & Related papers (2020-12-21T04:44:40Z)
- Learning Memory-guided Normality for Anomaly Detection [33.77435699029528]
We present an unsupervised learning approach to anomaly detection that considers the diversity of normal patterns explicitly.
We also present novel feature compactness and separateness losses to train the memory, boosting the discriminative power of both memory items and deeply learned features from normal data (a minimal sketch of such losses appears after this list).
arXiv Detail & Related papers (2020-03-30T05:30:09Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
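Several of the memory-based entries above (e.g., Learning Memory-guided Normality for Anomaly Detection) train memory items with feature compactness and separateness losses. The sketch below illustrates one common formulation, where compactness pulls each query toward its nearest memory item and separateness is a triplet-style margin between the nearest and second-nearest items; the function name, margin value, and distance choice are assumptions rather than any paper's exact code.

```python
# Illustrative sketch (assumed PyTorch) of memory "compactness" and "separateness"
# losses as commonly formulated for memory-augmented anomaly detection; names and
# the margin are assumptions, not taken from any specific paper's code.
import torch
import torch.nn.functional as F

def memory_losses(queries: torch.Tensor, items: torch.Tensor, margin: float = 1.0):
    # queries: (N, dim) encoder features; items: (M, dim) learnable memory items
    dists = torch.cdist(queries, items)                      # (N, M) pairwise L2 distances
    near2 = dists.topk(2, dim=1, largest=False).values       # nearest and 2nd-nearest distances
    compactness = near2[:, 0].pow(2).mean()                  # pull queries toward nearest item
    separateness = F.relu(near2[:, 0] - near2[:, 1] + margin).mean()  # keep items distinct
    return compactness, separateness
```

In training, such terms would typically be added to the reconstruction or prediction loss with small weights.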
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences.