A Background-Agnostic Framework with Adversarial Training for Abnormal
Event Detection in Video
- URL: http://arxiv.org/abs/2008.12328v5
- Date: Thu, 6 Apr 2023 15:49:54 GMT
- Title: A Background-Agnostic Framework with Adversarial Training for Abnormal
Event Detection in Video
- Authors: Mariana-Iuliana Georgescu, Radu Tudor Ionescu, Fahad Shahbaz Khan,
Marius Popescu and Mubarak Shah
- Abstract summary: Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years.
We propose a background-agnostic framework that learns from training videos containing only normal events.
- Score: 120.18562044084678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abnormal event detection in video is a complex computer vision problem that
has attracted significant attention in recent years. The complexity of the task
arises from the commonly-adopted definition of an abnormal event, that is, a
rarely occurring event that typically depends on the surrounding context.
Following the standard formulation of abnormal event detection as outlier
detection, we propose a background-agnostic framework that learns from training
videos containing only normal events. Our framework is composed of an object
detector, a set of appearance and motion auto-encoders, and a set of
classifiers. Since our framework only looks at object detections, it can be
applied to different scenes, provided that normal events are defined
identically across scenes and that the single main factor of variation is the
background. To overcome the lack of abnormal data during training, we propose
an adversarial learning strategy for the auto-encoders. We create a
scene-agnostic set of out-of-domain pseudo-abnormal examples, which are
correctly reconstructed by the auto-encoders before applying gradient ascent on
the pseudo-abnormal examples. We further utilize the pseudo-abnormal examples
to serve as abnormal examples when training appearance-based and motion-based
binary classifiers to discriminate between normal and abnormal latent features
and reconstructions. We compare our framework with the state-of-the-art methods
on four benchmark data sets, using various evaluation metrics. Compared to
existing methods, the empirical results indicate that our approach achieves
favorable performance on all data sets. In addition, we provide region-based
and track-based annotations for two large-scale abnormal event detection data
sets from the literature, namely ShanghaiTech and Subway.
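The adversarial strategy in the abstract, descending on the reconstruction error of normal data while applying gradient ascent on the error of pseudo-abnormal data, can be illustrated with a deliberately tiny sketch. This is not the authors' code: the scalar "auto-encoder" x_hat = w * x, the lambda_adv weight, and the margin that bounds the ascent are illustrative assumptions made here to keep the toy training loop stable and transparent.

```python
def recon_error(w, xs):
    """Mean squared reconstruction error of the toy auto-encoder x_hat = w * x."""
    return sum((x - w * x) ** 2 for x in xs) / len(xs)

def grad(w, xs):
    """Derivative of recon_error with respect to w."""
    return sum(-2.0 * (x - w * x) * x for x in xs) / len(xs)

def train_step(w, normal, pseudo, lr=0.01, lambda_adv=0.2, margin=1.0):
    """Descend on the normal error; ascend on the pseudo-abnormal error,
    but only while that error is still below the margin (keeps the toy stable)."""
    g = grad(w, normal)
    if recon_error(w, pseudo) < margin:
        g -= lambda_adv * grad(w, pseudo)  # subtracting the gradient = ascent
    return w - lr * g

w = 0.0
normal = [1.0, 2.0, 3.0]   # stands in for features of normal objects
pseudo = [10.0, -8.0]      # stands in for out-of-domain pseudo-abnormal features
for _ in range(200):
    w = train_step(w, normal, pseudo)

# Normal data ends up reconstructed far better than pseudo-abnormal data, so
# reconstruction error can serve as an abnormality score at test time.
print(recon_error(w, normal) < recon_error(w, pseudo))  # True
```

The sign trick is the whole point: one scalar loss, descent on the normal term and ascent on the pseudo-abnormal term, which pushes the auto-encoder to reconstruct normal data well while failing on anything out-of-domain.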
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance by utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z)
- AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection [30.679012320439625]
AnomalyCLIP learns object-agnostic text prompts to capture generic normality and abnormality in an image.
It achieves superior zero-shot performance of detecting and segmenting anomalies in datasets of highly diverse class semantics.
arXiv Detail & Related papers (2023-10-29T10:03:49Z)
- Object-centric and memory-guided normality reconstruction for video anomaly detection [56.64792194894702]
This paper addresses the anomaly detection problem for video surveillance.
Due to the inherent rarity and heterogeneity of abnormal events, the problem is approached as a normality modeling task.
Our model learns object-centric normal patterns without seeing anomalous samples during training.
arXiv Detail & Related papers (2022-03-07T19:28:39Z)
- Anomaly Crossing: A New Method for Video Anomaly Detection as Cross-domain Few-shot Learning [32.0713939637202]
Video anomaly detection aims to identify abnormal events that occurred in videos.
Most previous approaches learn only from normal videos using unsupervised or semi-supervised methods.
We propose a new learning paradigm by making full use of both normal and abnormal videos for video anomaly detection.
arXiv Detail & Related papers (2021-12-12T20:49:38Z)
- UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection [103.06327681038304]
We propose a supervised open-set benchmark composed of multiple virtual scenes for video anomaly detection.
Unlike existing data sets, we introduce abnormal events annotated at the pixel level at training time.
We show that UBnormal can enhance the performance of a state-of-the-art anomaly detection framework.
arXiv Detail & Related papers (2021-11-16T17:28:46Z)
- Reliable Shot Identification for Complex Event Detection via Visual-Semantic Embedding [72.9370352430965]
We propose a visual-semantic guided loss method for event detection in videos.
Motivated by curriculum learning, we introduce a negative elastic regularization term to start training the classifier with instances of high reliability.
An alternating optimization algorithm is developed to solve the resulting challenging non-convex regularization problem.
arXiv Detail & Related papers (2021-10-12T11:46:56Z)
- Unsupervised Video Anomaly Detection via Normalizing Flows with Implicit Latent Features [8.407188666535506]
Most existing methods use an autoencoder to learn to reconstruct normal videos.
We propose an implicit two-path AE (ITAE), a structure in which two encoders implicitly model appearance and motion features.
For the complex distribution of normal scenes, we suggest normal density estimation of ITAE features.
Normalizing flow (NF) models further improve ITAE performance by learning normality through the implicitly learned features.
arXiv Detail & Related papers (2020-10-15T05:02:02Z)
- Learning Memory-guided Normality for Anomaly Detection [33.77435699029528]
We present an unsupervised learning approach to anomaly detection that considers the diversity of normal patterns explicitly.
We also present novel feature compactness and separateness losses to train the memory, boosting the discriminative power of both memory items and deeply learned features from normal data.
arXiv Detail & Related papers (2020-03-30T05:30:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.