Self-trained Deep Ordinal Regression for End-to-End Video Anomaly
Detection
- URL: http://arxiv.org/abs/2003.06780v1
- Date: Sun, 15 Mar 2020 08:44:55 GMT
- Title: Self-trained Deep Ordinal Regression for End-to-End Video Anomaly
Detection
- Authors: Guansong Pang, Cheng Yan, Chunhua Shen, Anton van den Hengel, Xiao Bai
- Abstract summary: We show that applying self-trained deep ordinal regression to video anomaly detection overcomes two key limitations of existing methods.
We devise an end-to-end trainable video anomaly detection approach that enables joint representation learning and anomaly scoring without manually labeled normal/abnormal data.
- Score: 114.9714355807607
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Video anomaly detection is of critical practical importance to a variety of
real applications because it allows human attention to be focused on events
that are likely to be of interest, in spite of an otherwise overwhelming volume
of video. We show that applying self-trained deep ordinal regression to video
anomaly detection overcomes two key limitations of existing methods, namely, 1)
being highly dependent on manually labeled normal training data; and 2)
sub-optimal feature learning. By formulating a surrogate two-class ordinal
regression task we devise an end-to-end trainable video anomaly detection
approach that enables joint representation learning and anomaly scoring without
manually labeled normal/abnormal data. Experiments on eight real-world video
scenes show that our proposed method outperforms state-of-the-art methods that
require no labeled training data by a substantial margin, and enables easy and
accurate localization of the identified anomalies. Furthermore, we demonstrate
that our method offers effective human-in-the-loop anomaly detection which can
be critical in applications where anomalies are rare and the false-negative
cost is high.
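As a rough illustration of the approach described in the abstract, below is a minimal sketch of such a self-training loop: seed pseudo labels from an off-the-shelf detector's scores, regress frame scores toward two ordered targets (the surrogate two-class ordinal regression task), then re-score all frames and refresh the pseudo labels. All names here (ScoringNet, initial_scores, the quantile thresholds) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class ScoringNet(nn.Module):
    """Maps frame features to a scalar anomaly score (illustrative head only)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats):
        return self.head(feats).squeeze(-1)

def self_train(feats, initial_scores, iters=5, epochs=3, lr=1e-3,
               normal_q=0.5, anomal_q=0.95, c_normal=0.0, c_anomaly=1.0):
    """feats: (N, D) frame features; initial_scores: (N,) scores from any
    off-the-shelf detector, used only to seed the pseudo labels."""
    model = ScoringNet(feats.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    scores = initial_scores.clone()
    for _ in range(iters):
        # 1) Pseudo-label the most confident frames at both ends of the ranking.
        lo, hi = torch.quantile(scores, normal_q), torch.quantile(scores, anomal_q)
        normal_idx = (scores <= lo).nonzero(as_tuple=True)[0]
        anomal_idx = (scores >= hi).nonzero(as_tuple=True)[0]
        idx = torch.cat([normal_idx, anomal_idx])
        # 2) Two ordered targets: pseudo-normal frames -> c_normal,
        #    pseudo-anomalous frames -> c_anomaly.
        targets = torch.cat([torch.full((len(normal_idx),), c_normal),
                             torch.full((len(anomal_idx),), c_anomaly)])
        for _ in range(epochs):
            opt.zero_grad()
            pred = model(feats[idx])
            # Ordinal regression cast as regression toward the two ordered targets.
            loss = torch.abs(pred - targets).mean()
            loss.backward()
            opt.step()
        # 3) Re-score every frame and refresh the pseudo labels for the next round.
        with torch.no_grad():
            scores = model(feats)
    return model, scores
```

In the paper the representation is learned end to end (the backbone is updated jointly with the scoring head); the sketch keeps a fixed feature extractor only to stay short.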
Related papers
- Dynamic Distinction Learning: Adaptive Pseudo Anomalies for Video Anomaly Detection [8.957579200590985]
We introduce Dynamic Distinction Learning (DDL) for Video Anomaly Detection.
DDL combines pseudo-anomalies, dynamic anomaly weighting, and a distinction loss function to improve detection accuracy.
Our approach adapts to the variability of normal and anomalous behaviors without fixed anomaly thresholds.
arXiv Detail & Related papers (2024-04-07T15:06:48Z)
- Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z)
- SaliencyCut: Augmenting Plausible Anomalies for Anomaly Detection [24.43321988051129]
We propose a novel saliency-guided data augmentation method, SaliencyCut, to produce pseudo but more common anomalies.
We then design a novel patch-wise residual module in the anomaly learning head to extract and assess the fine-grained anomaly features from each sample.
arXiv Detail & Related papers (2023-06-14T08:55:36Z)
- Object-centric and memory-guided normality reconstruction for video anomaly detection [56.64792194894702]
This paper addresses the anomaly detection problem for video surveillance.
Due to the inherent rarity and heterogeneity of abnormal events, the problem is treated as one of normality modeling.
Our model learns object-centric normal patterns without seeing anomalous samples during training.
arXiv Detail & Related papers (2022-03-07T19:28:39Z)
- Anomaly Crossing: A New Method for Video Anomaly Detection as Cross-domain Few-shot Learning [32.0713939637202]
Video anomaly detection aims to identify abnormal events that occur in videos.
Most previous approaches learn only from normal videos using unsupervised or semi-supervised methods.
We propose a new learning paradigm by making full use of both normal and abnormal videos for video anomaly detection.
arXiv Detail & Related papers (2021-12-12T20:49:38Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
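The "labeled anomalies and a prior probability" mentioned above refer to a deviation-style loss. Below is a compact sketch of that idea: normal scores are pulled toward the mean of a Gaussian score prior while labeled anomalies are pushed at least a margin of deviations above it. The Gaussian-prior form follows the earlier deviation-network work; the function and parameter names here are illustrative, not the paper's code.

```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_prior=5000):
    """scores: (B,) predicted anomaly scores; labels: (B,) float tensor,
    1 for labeled anomalies, 0 for samples assumed normal."""
    prior = torch.randn(n_prior)                     # prior anomaly scores ~ N(0, 1)
    dev = (scores - prior.mean()) / prior.std()      # z-score deviation from the prior
    inlier_loss = torch.abs(dev)                     # keep normal scores near the prior mean
    outlier_loss = torch.clamp(margin - dev, min=0)  # push anomaly scores >= margin deviations away
    return ((1 - labels) * inlier_loss + labels * outlier_loss).mean()
```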
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Weakly Supervised Video Anomaly Detection via Center-guided Discriminative Learning [25.787860059872106]
Anomaly detection in surveillance videos is a challenging task due to the diversity of anomalous video content and duration.
We propose an anomaly detection framework, called Anomaly Regression Net (AR-Net), which requires only video-level labels during training.
Our method yields a new state-of-the-art result for video anomaly detection on the ShanghaiTech dataset.
arXiv Detail & Related papers (2021-04-15T06:41:23Z)
- Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction [61.17654438176999]
We propose a novel and robust unsupervised video anomaly detection method by frame prediction with proper design.
Our proposed method obtains the frame-level AUROC score of 88.3% on the CUHK Avenue dataset.
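Prediction-based detectors of this kind typically turn frame prediction error into an anomaly score. The snippet below is a minimal sketch of the common PSNR-based scoring step (the paper's specific multi-path design is not reproduced); function names are illustrative.

```python
import torch

def psnr(pred, target, max_val=1.0, eps=1e-8):
    """Peak signal-to-noise ratio between a predicted and an observed frame."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / (mse + eps))

def frame_anomaly_scores(pred_frames, true_frames):
    """Poorly predicted frames (low PSNR) get high anomaly scores,
    min-max normalised over the video."""
    p = torch.stack([psnr(p_, t_) for p_, t_ in zip(pred_frames, true_frames)])
    return 1.0 - (p - p.min()) / (p.max() - p.min() + 1e-8)
```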
arXiv Detail & Related papers (2020-11-05T11:34:12Z)
- Unsupervised Video Anomaly Detection via Normalizing Flows with Implicit Latent Features [8.407188666535506]
Most existing methods use an autoencoder to learn to reconstruct normal videos.
We propose an implicit two-path AE (ITAE), a structure in which two encoders implicitly model appearance and motion features.
To capture the complex distribution of normal scenes, we estimate the density of normal ITAE features with normalizing flow (NF) models.
The NF models improve ITAE performance by learning normality through the implicitly learned features.
arXiv Detail & Related papers (2020-10-15T05:02:02Z)
- A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video [120.18562044084678]
Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years.
We propose a background-agnostic framework that learns from training videos containing only normal events.
arXiv Detail & Related papers (2020-08-27T18:39:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.