Updated version: A Video Anomaly Detection Framework based on
Appearance-Motion Semantics Representation Consistency
- URL: http://arxiv.org/abs/2303.05109v1
- Date: Thu, 9 Mar 2023 08:28:34 GMT
- Title: Updated version: A Video Anomaly Detection Framework based on
Appearance-Motion Semantics Representation Consistency
- Authors: Xiangyu Huang, Caidan Zhao and Zhiqiang Wu
- Abstract summary: We propose a framework of Appearance-Motion Semantics Representation Consistency.
A two-stream structure is designed to encode the appearance and motion information of normal samples.
A novel consistency loss is proposed to enhance the consistency of feature semantics so that anomalies with low consistency can be identified.
- Score: 2.395616571632115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video anomaly detection is an essential but challenging task. Prevalent
methods mainly investigate the reconstruction difference between normal and
abnormal patterns but ignore the semantic consistency between the appearance and
motion information of behavior patterns, making the results highly dependent on
the local context of frame sequences and lacking an understanding of behavior
semantics. To address this issue, we propose an Appearance-Motion Semantics
Representation Consistency framework that exploits the gap in appearance-motion
semantic representation consistency between normal and abnormal data. A
two-stream structure is designed to encode the appearance and motion
information of normal samples, and a novel consistency loss is proposed to
enhance the consistency of feature semantics so that anomalies with low
consistency can be identified. Moreover, the low-consistency features of
anomalies can be used to degrade the quality of the predicted frame, making
anomalies easier to spot. Experimental results demonstrate the effectiveness of
the proposed method.
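The following is a minimal PyTorch-style sketch of the kind of two-stream design and consistency loss described above. The branch architecture, feature dimension, two-channel optical-flow input, and the choice of cosine similarity as the consistency measure are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamEncoder(nn.Module):
    """Illustrative two-stream encoder: one branch for appearance (RGB frames),
    one for motion (assumed here to be 2-channel optical flow)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        def conv_branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
        self.appearance = conv_branch(in_ch=3)  # RGB frame
        self.motion = conv_branch(in_ch=2)      # optical flow (assumption)

    def forward(self, frame, flow):
        z_app = self.appearance(frame)
        z_mot = self.motion(flow)
        return z_app, z_mot


def consistency_loss(z_app, z_mot):
    """Encourage appearance and motion semantics of normal samples to agree.
    Cosine similarity is an assumed choice of consistency measure."""
    cos = F.cosine_similarity(z_app, z_mot, dim=1)
    return (1.0 - cos).mean()


# Toy usage: during training on normal clips, this loss would be added to a
# frame-prediction objective so that normal appearance/motion features align.
encoder = TwoStreamEncoder()
frame = torch.randn(4, 3, 64, 64)
flow = torch.randn(4, 2, 64, 64)
z_app, z_mot = encoder(frame, flow)
loss = consistency_loss(z_app, z_mot)
```

At test time, anomalous behavior is expected to yield both a poorly predicted frame and a low appearance-motion consistency, which is the signal the framework exploits.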
Related papers
- Enhancing Anomaly Detection via Generating Diversified and Hard-to-distinguish Synthetic Anomalies [7.021105583098609]
Recent approaches have focused on leveraging domain-specific transformations or perturbations to generate synthetic anomalies from normal samples.
We introduce a novel domain-agnostic method that employs a set of conditional perturbators and a discriminator.
We demonstrate the superiority of our method over state-of-the-art benchmarks.
arXiv Detail & Related papers (2024-09-16T08:15:23Z)
- Generating and Reweighting Dense Contrastive Patterns for Unsupervised Anomaly Detection [59.34318192698142]
We introduce a prior-less anomaly generation paradigm and develop an innovative unsupervised anomaly detection framework named GRAD.
The proposed PatchDiff effectively exposes various types of anomaly patterns.
Experiments on both the MVTec AD and MVTec LOCO datasets also support this observation.
arXiv Detail & Related papers (2023-12-26T07:08:06Z)
- Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z)
- Spatio-temporal predictive tasks for abnormal event detection in videos [60.02503434201552]
We propose new constrained pretext tasks to learn object level normality patterns.
Our approach consists in learning a mapping between down-scaled visual queries and their corresponding normal appearance and motion characteristics.
Experiments on several benchmark datasets demonstrate the effectiveness of our approach to localize and track anomalies.
arXiv Detail & Related papers (2022-10-27T19:45:12Z)
- A Video Anomaly Detection Framework based on Appearance-Motion Semantics Representation Consistency [18.06814233420315]
We propose a framework that uses normal data's appearance and motion semantic representation consistency to handle anomaly detection.
We design a two-stream encoder to encode the appearance and motion information representations of normal samples.
The lower consistency of appearance and motion features for anomalous samples can be used to generate predicted frames with larger reconstruction errors (a scoring sketch based on this idea appears after this list).
arXiv Detail & Related papers (2022-04-08T15:59:57Z)
- Object-centric and memory-guided normality reconstruction for video anomaly detection [56.64792194894702]
This paper addresses the anomaly detection problem for video surveillance.
Due to the inherent rarity and heterogeneity of abnormal events, the problem is approached as a normality modeling task.
Our model learns object-centric normal patterns without seeing anomalous samples during training.
arXiv Detail & Related papers (2022-03-07T19:28:39Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Video Anomaly Detection By The Duality Of Normality-Granted Optical Flow [1.8065361710947974]
We propose to discriminate anomalies from normal ones by the duality of normality-granted optical flow.
We extend the appearance-motion correspondence scheme from frame reconstruction to prediction.
arXiv Detail & Related papers (2021-05-10T12:25:00Z)
- A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video [120.18562044084678]
Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years.
We propose a background-agnostic framework that learns from training videos containing only normal events.
arXiv Detail & Related papers (2020-08-27T18:39:24Z)
- Localizing Anomalies from Weakly-Labeled Videos [45.58643708315132]
We propose a Weakly-Supervised Anomaly Localization (WSAL) method focusing on temporally localizing anomalous segments within anomalous videos.
Inspired by the appearance difference in anomalous videos, the evolution of adjacent temporal segments is evaluated for the localization of anomalous segments.
Our proposed method achieves new state-of-the-art performance on the UCF-Crime and TAD datasets.
arXiv Detail & Related papers (2020-08-20T12:58:03Z)
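As a companion to the framework entries above (the abstract and the earlier "A Video Anomaly Detection Framework based on Appearance-Motion Semantics Representation Consistency" entry), here is a hedged sketch of how per-frame anomaly scores could combine frame-prediction quality with appearance-motion feature consistency at test time. The PSNR-based prediction term, the min-max normalization, and the alpha weighting are assumptions for illustration, not the papers' exact scoring function.

```python
import torch
import torch.nn.functional as F

def psnr(pred, target, eps=1e-8):
    """Peak signal-to-noise ratio of a predicted frame; assumes pixels in [0, 1]."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(1.0 / (mse + eps))

def minmax(x, eps=1e-8):
    """Normalize a 1-D tensor of per-frame values to [0, 1] over one test video."""
    return (x - x.min()) / (x.max() - x.min() + eps)

def anomaly_scores(pred_frames, true_frames, z_app, z_mot, alpha=0.5):
    """Per-frame anomaly scores for one test video.
    pred_frames/true_frames: (T, C, H, W); z_app/z_mot: (T, D) feature pairs.
    Higher score = more anomalous. alpha and the score form are assumptions."""
    quality = torch.stack([psnr(p, t) for p, t in zip(pred_frames, true_frames)])
    consistency = F.cosine_similarity(z_app, z_mot, dim=1)
    # Poor prediction quality and low appearance-motion consistency both raise the score.
    return alpha * (1.0 - minmax(quality)) + (1.0 - alpha) * (1.0 - minmax(consistency))
```

The intent is that anomalous frames are penalized twice: their predicted frames reconstruct poorly, and their appearance and motion features disagree, so either term can expose them.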