Hierarchical Semantic Contrast for Scene-aware Video Anomaly Detection
- URL: http://arxiv.org/abs/2303.13051v1
- Date: Thu, 23 Mar 2023 05:53:34 GMT
- Title: Hierarchical Semantic Contrast for Scene-aware Video Anomaly Detection
- Authors: Shengyang Sun, Xiaojin Gong
- Abstract summary: We propose a hierarchical semantic contrast (HSC) method to learn a scene-aware VAD model from normal videos.
This hierarchical semantic contrast strategy helps to deal with the diversity of normal patterns and also increases their discrimination ability.
- Score: 14.721615285883423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing scene-awareness is a key challenge in video anomaly detection
(VAD). In this work, we propose a hierarchical semantic contrast (HSC) method
to learn a scene-aware VAD model from normal videos. We first incorporate
foreground object and background scene features with high-level semantics by
taking advantage of pre-trained video parsing models. Then, building upon the
autoencoder-based reconstruction framework, we introduce both scene-level and
object-level contrastive learning to enforce the encoded latent features to be
compact within the same semantic classes while being separable across different
classes. This hierarchical semantic contrast strategy helps to deal with the
diversity of normal patterns and also increases their discrimination ability.
Moreover, for the sake of tackling rare normal activities, we design a
skeleton-based motion augmentation to increase samples and refine the model
further. Extensive experiments on three public datasets and scene-dependent
mixture datasets validate the effectiveness of our proposed method.
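The hierarchical contrast described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering of the idea rather than the authors' implementation: an autoencoder reconstruction loss combined with a supervised-contrastive-style term applied twice, once over scene-class labels and once over object-class labels, so that latent codes are compact within a semantic class and separable across classes. All names and values here (class_contrastive_loss, hsc_style_loss, scene_feats, object_feats, the temperature and weights) are assumptions for illustration.

```python
# Illustrative sketch only; the paper's actual formulation may differ.
import torch
import torch.nn.functional as F


def class_contrastive_loss(latents, labels, temperature=0.1):
    """Pull latents of the same semantic class together, push other classes apart.

    latents: (N, D) encoded features; labels: (N,) integer class ids.
    """
    z = F.normalize(latents, dim=1)                      # unit-norm features
    sim = z @ z.t() / temperature                        # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))      # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability of same-class (positive) pairs per anchor
    pos_log = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob)).sum(dim=1)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    valid = pos_mask.any(dim=1)                          # anchors with >= 1 positive
    return -(pos_log / pos_count)[valid].mean()


def hsc_style_loss(recon, target,
                   scene_feats, scene_labels,
                   object_feats, object_labels,
                   w_scene=1.0, w_object=1.0):
    """Reconstruction error plus scene-level and object-level contrast terms."""
    l_rec = F.mse_loss(recon, target)
    l_scene = class_contrastive_loss(scene_feats, scene_labels)
    l_obj = class_contrastive_loss(object_feats, object_labels)
    return l_rec + w_scene * l_scene + w_object * l_obj
```

In this reading, the two contrastive terms share the same machinery and differ only in the label granularity (scene class vs. object class), which is one plausible way the "hierarchical" structure of the loss could be realized.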
Related papers
- SIGMA:Sinkhorn-Guided Masked Video Modeling [69.31715194419091]
Sinkhorn-guided Masked Video Modeling (SIGMA) is a novel video pretraining method.
We distribute features of space-time tubes evenly across a limited number of learnable clusters.
Experimental results on ten datasets validate the effectiveness of SIGMA in learning more performant, temporally-aware, and robust video representations.
arXiv Detail & Related papers (2024-07-22T08:04:09Z)
- Object-level Scene Deocclusion [92.39886029550286]
We present a new self-supervised PArallel visible-to-COmplete diffusion framework, named PACO, for object-level scene deocclusion.
To train PACO, we create a large-scale dataset with 500k samples to enable self-supervised learning.
Experiments on COCOA and various real-world scenes demonstrate the superior capability of PACO for scene deocclusion, surpassing the state of the arts by a large margin.
arXiv Detail & Related papers (2024-06-11T20:34:10Z)
- Building a Strong Pre-Training Baseline for Universal 3D Large-Scale Perception [41.77153804695413]
An effective pre-training framework with universal 3D representations is extremely desired in perceiving large-scale dynamic scenes.
We propose a CSC framework that puts a scene-level semantic consistency in the heart, bridging the connection of the similar semantic segments across various scenes.
arXiv Detail & Related papers (2024-05-12T07:58:52Z)
- Bilevel Fast Scene Adaptation for Low-Light Image Enhancement [50.639332885989255]
Enhancing images in low-light scenes is a challenging yet widely studied task in computer vision.
The main obstacle lies in modeling the distribution discrepancy across different scenes.
We introduce the bilevel paradigm to model the above latent correspondence.
A bilevel learning framework is constructed to endow the scene-irrelevant generality of the encoder towards diverse scenes.
arXiv Detail & Related papers (2023-06-02T08:16:21Z)
- Spatio-Temporal Relation Learning for Video Anomaly Detection [35.59510027883497]
Anomaly identification is highly dependent on the relationship between the object and the scene.
In this paper, we propose a Spatial-Temporal Relation Learning framework to tackle the video anomaly detection task.
Experiments are conducted on three public datasets, and the superior performance over the state-of-the-art methods demonstrates the effectiveness of our method.
arXiv Detail & Related papers (2022-09-27T02:19:31Z)
- Rethinking Multi-Modal Alignment in Video Question Answering from Feature and Sample Perspectives [30.666823939595627]
This paper reconsiders the multi-modal alignment problem in VideoQA from feature and sample perspectives.
We adopt a heterogeneous graph architecture and design a hierarchical framework to align both trajectory-level and frame-level visual features with language features.
Our method outperforms all the state-of-the-art models on the challenging NExT-QA benchmark.
arXiv Detail & Related papers (2022-04-25T10:42:07Z)
- In-N-Out Generative Learning for Dense Unsupervised Video Segmentation [89.21483504654282]
In this paper, we focus on the unsupervised Video Object Segmentation (VOS) task, which learns visual correspondence from unlabeled videos.
We propose the In-aNd-Out (INO) generative learning from a purely generative perspective, which captures both high-level and fine-grained semantics.
Our INO outperforms previous state-of-the-art methods by significant margins.
arXiv Detail & Related papers (2022-03-29T07:56:21Z)
- Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327]
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner.
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
arXiv Detail & Related papers (2021-07-30T19:24:07Z)
- Learning to Associate Every Segment for Video Panoptic Segmentation [123.03617367709303]
We learn coarse segment-level matching and fine pixel-level matching together.
We show that our per-frame computation model can achieve new state-of-the-art results on Cityscapes-VPS and VIPER datasets.
arXiv Detail & Related papers (2021-06-17T13:06:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.