A Hybrid Video Anomaly Detection Framework via Memory-Augmented Flow
Reconstruction and Flow-Guided Frame Prediction
- URL: http://arxiv.org/abs/2108.06852v1
- Date: Mon, 16 Aug 2021 01:37:29 GMT
- Title: A Hybrid Video Anomaly Detection Framework via Memory-Augmented Flow
Reconstruction and Flow-Guided Frame Prediction
- Authors: Zhian Liu, Yongwei Nie, Chengjiang Long, Qing Zhang, Guiqing Li
- Abstract summary: $\text{HF}^2$-VAD is a Hybrid framework that integrates Flow reconstruction and Frame prediction.
We design the network of ML-MemAE-SC to memorize normal patterns for optical flow reconstruction.
We then employ a Conditional Variational Autoencoder to predict the next frame given several previous frames.
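The memory modules at the heart of ML-MemAE-SC rewrite each latent encoding as a weighted combination of stored normal-pattern slots, so only patterns seen during normal training can be reproduced well. The sketch below is a generic, illustrative memory read in NumPy; the `memory_read` name, the cosine-similarity addressing, and the softmax weighting are common conventions for memory-augmented autoencoders, not the paper's exact layer:

```python
import numpy as np

def memory_read(z, memory, eps=1e-12):
    """Reconstruct an encoding z as a convex combination of memory slots.

    z:      (d,) query encoding from the autoencoder bottleneck
    memory: (n, d) bank of learned normal-pattern prototypes
    Returns the memory-filtered encoding z_hat and the addressing weights w.
    """
    # Cosine similarity between the query and every memory slot.
    sim = memory @ z / (np.linalg.norm(memory, axis=1) * np.linalg.norm(z) + eps)
    # Softmax turns similarities into addressing weights that sum to 1.
    w = np.exp(sim - sim.max())
    w /= w.sum()
    # The output lies in the span of stored normal patterns, so anomalous
    # encodings cannot be reproduced faithfully by the decoder.
    z_hat = w @ memory
    return z_hat, w
```

Because the decoder only ever sees such memory-filtered encodings, abnormal optical flows reconstruct poorly, which is exactly what inflates the reconstruction error used for detection.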
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose $\text{HF}^2$-VAD, a Hybrid framework that
integrates Flow reconstruction and Frame prediction seamlessly to handle Video
Anomaly Detection. Firstly, we design the network of ML-MemAE-SC (Multi-Level
Memory modules in an Autoencoder with Skip Connections) to memorize normal
patterns for optical flow reconstruction so that abnormal events can be
sensitively identified with larger flow reconstruction errors. More
importantly, conditioned on the reconstructed flows, we then employ a
Conditional Variational Autoencoder (CVAE), which captures the high correlation
between video frame and optical flow, to predict the next frame given several
previous frames. By CVAE, the quality of flow reconstruction essentially
influences that of frame prediction. Therefore, poorly reconstructed optical
flows of abnormal events further deteriorate the quality of the final predicted
future frame, making the anomalies more detectable. Experimental results
demonstrate the effectiveness of the proposed method. Code is available at
\href{https://github.com/LiUzHiAn/hf2vad}{https://github.com/LiUzHiAn/hf2vad}.
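Since poorly reconstructed flows degrade the predicted future frame, a natural per-frame detection signal combines the flow reconstruction error with the frame prediction error. A minimal sketch of such score fusion follows; the `fuse_scores` helper, the scalar weights, and the per-clip min-max normalization are illustrative assumptions, not the paper's exact scoring formula:

```python
import numpy as np

def fuse_scores(flow_errors, frame_errors, w_flow=1.0, w_frame=1.0, eps=1e-12):
    """Combine per-frame flow-reconstruction and frame-prediction errors.

    Each error sequence is min-max normalized over the clip so the two
    cues live on a comparable scale, then summed with scalar weights.
    Higher scores indicate frames more likely to be anomalous.
    """
    def normalize(e):
        e = np.asarray(e, dtype=float)
        return (e - e.min()) / (e.max() - e.min() + eps)

    return w_flow * normalize(flow_errors) + w_frame * normalize(frame_errors)
```

In this convention, a frame whose flow both reconstructs badly and leads to a bad prediction scores highest, matching the paper's intuition that the two errors compound for abnormal events.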
Related papers
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside the optical flows between them.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- Enhanced Event-Based Video Reconstruction with Motion Compensation [26.03328887451797]
We propose warping the input intensity frames and sparse codes to enhance reconstruction quality.
A CISTA-Flow network is constructed by integrating a flow network with CISTA-LSTC for motion compensation.
Results demonstrate that our approach achieves state-of-the-art reconstruction accuracy and simultaneously provides reliable dense flow estimation.
arXiv Detail & Related papers (2024-03-18T16:58:23Z)
- E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning [53.63364311738552]
Bio-inspired event cameras or dynamic vision sensors are capable of capturing per-pixel brightness changes (called event-streams) in high temporal resolution and high dynamic range.
It calls for events-to-video (E2V) solutions which take event-streams as input and generate high quality video frames for intuitive visualization.
We propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events.
arXiv Detail & Related papers (2024-01-16T05:10:50Z)
- AccFlow: Backward Accumulation for Long-Range Optical Flow [70.4251045372285]
This paper proposes a novel recurrent framework called AccFlow for long-range optical flow estimation.
We demonstrate the superiority of backward accumulation over conventional forward accumulation.
Experiments validate the effectiveness of AccFlow in handling long-range optical flow estimation.
arXiv Detail & Related papers (2023-08-25T01:51:26Z)
- Making Reconstruction-based Method Great Again for Video Anomaly Detection [64.19326819088563]
Anomaly detection in videos is a significant yet challenging problem.
Existing reconstruction-based methods rely on old-fashioned convolutional autoencoders.
We propose a new autoencoder model for enhanced consecutive frame reconstruction.
arXiv Detail & Related papers (2023-01-28T01:57:57Z)
- Unsupervised Flow-Aligned Sequence-to-Sequence Learning for Video Restoration [85.3323211054274]
How to properly model the inter-frame relation within the video sequence is an important but unsolved challenge for video restoration (VR).
In this work, we propose an unsupervised flow-aligned sequence-to-sequence model (S2SVR) to address this problem.
S2SVR shows superior performance in multiple VR tasks, including video deblurring, video super-resolution, and compressed video quality enhancement.
arXiv Detail & Related papers (2022-05-20T14:14:48Z)
- FDAN: Flow-guided Deformable Alignment Network for Video Super-Resolution [12.844337773258678]
Flow-guided Deformable Module (FDM) is proposed to integrate optical flow into deformable convolution.
FDAN reaches the state-of-the-art performance on two benchmark datasets.
arXiv Detail & Related papers (2021-05-12T13:18:36Z)
- Anomaly Detection with Prototype-Guided Discriminative Latent Embeddings [29.93983580779689]
We present a novel approach for anomaly detection, which utilizes discriminative prototypes of normal data to reconstruct video frames.
In this way, the model will favor the reconstruction of normal events and distort the reconstruction of abnormal events.
We evaluate the effectiveness of our method on three benchmark datasets and experimental results demonstrate the proposed method outperforms the state-of-the-art.
arXiv Detail & Related papers (2021-04-30T12:16:52Z)
- PDWN: Pyramid Deformable Warping Network for Video Interpolation [11.62213584807003]
We propose a light but effective model, called Pyramid Deformable Warping Network (PDWN)
PDWN uses a pyramid structure to generate DConv offsets of the unknown middle frame with respect to the known frames through coarse-to-fine successive refinements.
Our method achieves better or on-par accuracy compared to state-of-the-art models on multiple datasets.
arXiv Detail & Related papers (2021-04-04T02:08:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.