A Lightweight Dual-Branch System for Weakly-Supervised Video Anomaly Detection on Consumer Edge Devices
- URL: http://arxiv.org/abs/2410.21991v7
- Date: Fri, 06 Jun 2025 17:04:26 GMT
- Title: A Lightweight Dual-Branch System for Weakly-Supervised Video Anomaly Detection on Consumer Edge Devices
- Authors: Wen-Dong Jiang, Chih-Yung Chang, Ssu-Chi Kuai, Diptendu Sinha Roy
- Abstract summary: Rule-based Video Anomaly Detection (RuleVAD) is a novel, lightweight system engineered for high-efficiency and low-complexity threat detection on consumer hardware. An implicit branch uses visual features for rapid, coarse-grained binary classification, efficiently filtering out normal activity to avoid unnecessary processing. For potentially anomalous or complex events, a multimodal explicit branch takes over, applying data mining to generate interpretable, text-based association rules from the scene.
- Score: 1.8274323268621635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing demand for intelligent security in consumer electronics, such as smart home cameras and personal monitoring systems, is often hindered by the high computational cost and large model sizes of advanced AI. These limitations prevent the effective deployment of real-time Video Anomaly Detection (VAD) on resource-constrained edge devices. To bridge this gap, this paper introduces Rule-based Video Anomaly Detection (RuleVAD), a novel, lightweight system engineered for high-efficiency and low-complexity threat detection directly on consumer hardware. RuleVAD features an innovative decoupled dual-branch architecture to minimize computational load. An implicit branch uses visual features for rapid, coarse-grained binary classification, efficiently filtering out normal activity to avoid unnecessary processing. For potentially anomalous or complex events, a multimodal explicit branch takes over. This branch leverages YOLO-World to detect objects and applies data mining to generate interpretable, text-based association rules from the scene. By aligning these rules with visual data, RuleVAD achieves a more nuanced, fine-grained classification, significantly reducing the false alarms common in vision-only systems. Extensive experiments on the XD-Violence and UCF-Crime benchmark datasets show that RuleVAD achieves superior performance, surpassing existing state-of-the-art methods in both accuracy and speed. Crucially, the entire system is optimized for low-power operation and is fully deployable on an NVIDIA Jetson Nano board, demonstrating its practical feasibility for bringing advanced, real-time security monitoring to everyday consumer electronic devices.
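For intuition, the decoupled dual-branch control flow described in the abstract can be sketched in a few lines of Python. This is only an illustrative sketch under assumed interfaces: the function names, the rule format, and the threshold `tau` are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of the dual-branch routing described in the abstract.
# All names, the rule format, and the threshold are assumptions, not RuleVAD's API.

def implicit_branch_score(clip_features):
    """Coarse binary anomaly score from visual features (placeholder: mean activation)."""
    return float(sum(clip_features) / len(clip_features))

def explicit_branch_score(detected_objects, rules):
    """Match object labels (e.g. from an open-vocabulary detector) against mined association rules."""
    hits = [r for r in rules if set(r["antecedent"]).issubset(detected_objects)]
    return max((r["confidence"] for r in hits), default=0.0)

def rulevad_pipeline(clip_features, detected_objects, rules, tau=0.5):
    # Stage 1: the cheap implicit branch filters out clearly normal clips.
    coarse = implicit_branch_score(clip_features)
    if coarse < tau:
        return {"anomaly": False, "score": coarse, "branch": "implicit"}
    # Stage 2: the multimodal explicit branch refines ambiguous clips with text rules.
    fine = explicit_branch_score(detected_objects, rules)
    return {"anomaly": fine >= tau, "score": fine, "branch": "explicit"}

# Toy usage: one mined rule, one suspicious clip.
rules = [{"antecedent": ["person", "knife"], "confidence": 0.9}]
print(rulevad_pipeline([0.7, 0.8], {"person", "knife"}, rules))
```

Routing clearly normal clips out at the first stage is what keeps the average per-clip cost low enough for edge hardware.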
Related papers
- DGE-YOLO: Dual-Branch Gathering and Attention for Accurate UAV Object Detection [0.46040036610482665]
We present DGE-YOLO, an enhanced YOLO-based detection framework designed to effectively fuse multi-modal information. Specifically, we introduce a dual-branch architecture for modality-specific feature extraction, enabling the model to process both infrared and visible images. To further enrich semantic representation, we propose an Efficient Multi-scale Attention (EMA) mechanism that enhances feature learning across spatial scales.
arXiv Detail & Related papers (2025-06-29T14:19:18Z) - SlowFastVAD: Video Anomaly Detection via Integrating Simple Detector and RAG-Enhanced Vision-Language Model [52.47816604709358]
Video anomaly detection (VAD) aims to identify unexpected events in videos and has wide applications in safety-critical domains. Vision-language models (VLMs) have demonstrated strong multimodal reasoning capabilities, offering new opportunities for anomaly detection. We propose SlowFastVAD, a hybrid framework that integrates a fast anomaly detector with a slower, RAG-enhanced VLM-based anomaly detector.
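A minimal sketch of the fast/slow routing idea, assuming a thresholded confidence band for escalation; the band limits and the dictionary-based fusion are illustrative choices, not SlowFastVAD's actual mechanism.

```python
# Sketch only: a cheap detector scores every snippet, and only ambiguous snippets
# are escalated to a slower (e.g. VLM-based) detector whose verdicts override them.

def ambiguous_indices(fast_scores, low=0.3, high=0.7):
    """Indices whose fast score falls inside the uncertain band."""
    return [i for i, s in enumerate(fast_scores) if low <= s <= high]

def fuse_scores(fast_scores, slow_scores):
    """Replace ambiguous fast scores with the slow detector's scores."""
    fused = list(fast_scores)
    for i, s in slow_scores.items():
        fused[i] = s
    return fused

fast = [0.1, 0.5, 0.9, 0.4]
escalate = ambiguous_indices(fast)            # e.g. [1, 3]
slow = {i: 0.8 for i in escalate}             # pretend the slow detector flags them
print(fuse_scores(fast, slow))                # [0.1, 0.8, 0.9, 0.8]
```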
arXiv Detail & Related papers (2025-04-14T15:30:03Z) - Weakly-Supervised Anomaly Detection in Surveillance Videos Based on Two-Stream I3D Convolution Network [2.209921757303168]
This paper presents a significant advancement in the field of anomaly detection through the application of Two-Stream Inflated 3D (I3D) Convolutional Networks.
Our research advances the field by implementing a weakly supervised learning framework based on Multiple Instance Learning (MIL).
This paper contributes significantly to the field of computer vision by delivering a more adaptable, efficient, and context-aware anomaly detection system.
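As background, the MIL objective commonly used in weakly supervised VAD is a hinge between the top-scoring segment of an anomalous video and the top-scoring segment of a normal video; a generic version is sketched below and is not necessarily this paper's exact loss.

```python
import numpy as np

def mil_ranking_loss(anomalous_segment_scores, normal_segment_scores, margin=1.0):
    """Generic MIL ranking hinge: push the best anomalous segment above the best normal one."""
    return max(0.0, margin - np.max(anomalous_segment_scores) + np.max(normal_segment_scores))

# Toy example: segment scores predicted for one anomalous and one normal video.
print(mil_ranking_loss(np.array([0.2, 0.9, 0.4]), np.array([0.1, 0.3, 0.2])))
```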
arXiv Detail & Related papers (2024-11-13T16:33:27Z) - ORCHID: Streaming Threat Detection over Versioned Provenance Graphs [11.783370157959968]
We present ORCHID, a novel Prov-IDS that performs fine-grained detection of process-level threats over a real-time event stream.
ORCHID takes advantage of the unique immutable properties of versioned provenance graphs to iteratively embed the entire graph in a sequential RNN model.
We evaluate ORCHID on four public datasets, including DARPA TC, to show that ORCHID can provide competitive classification performance.
arXiv Detail & Related papers (2024-08-23T19:44:40Z) - Weakly Supervised Video Anomaly Detection and Localization with Spatio-Temporal Prompts [57.01985221057047]
This paper introduces a novel method that learns spatio-temporal prompt embeddings for weakly supervised video anomaly detection and localization (WSVADL) based on pre-trained vision-language models (VLMs).
Our method achieves state-of-the-art performance on three public benchmarks for the WSVADL task.
arXiv Detail & Related papers (2024-08-12T03:31:29Z) - DVF: Advancing Robust and Accurate Fine-Grained Image Retrieval with Retrieval Guidelines [67.44394651662738]
Fine-grained image retrieval (FGIR) aims to learn visual representations that distinguish visually similar objects while maintaining generalization.
Existing methods propose to generate discriminative features, but rarely consider the particularity of the FGIR task itself.
This paper proposes practical guidelines to identify subcategory-specific discrepancies and generate discriminative features to design effective FGIR models.
arXiv Detail & Related papers (2024-04-24T09:45:12Z) - LEAF: Unveiling Two Sides of the Same Coin in Semi-supervised Facial Expression Recognition [56.22672276092373]
Semi-supervised learning has emerged as a promising approach to tackle the challenge of label scarcity in facial expression recognition.
We propose a unified framework termed hierarchicaL dEcoupling And Fusing (LEAF) to coordinate expression-relevant representations and pseudo-labels.
We show that LEAF outperforms state-of-the-art semi-supervised FER methods, effectively leveraging both labeled and unlabeled data.
arXiv Detail & Related papers (2024-04-23T13:43:33Z) - DMAD: Dual Memory Bank for Real-World Anomaly Detection [90.97573828481832]
We propose a new framework named Dual Memory bank enhanced representation learning for Anomaly Detection (DMAD).
DMAD employs a dual memory bank to calculate feature distance and feature attention between normal and abnormal patterns.
We evaluate DMAD on the MVTec-AD and VisA datasets.
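A rough sketch of the dual-memory-bank scoring idea, assuming simple nearest-prototype distances; DMAD's actual feature-distance and feature-attention computation is more involved than this.

```python
import numpy as np

def dual_bank_score(feature, normal_bank, abnormal_bank):
    """Score a feature by how much closer it is to abnormal prototypes than to normal ones."""
    d_normal = np.min(np.linalg.norm(normal_bank - feature, axis=1))
    d_abnormal = np.min(np.linalg.norm(abnormal_bank - feature, axis=1))
    return d_normal - d_abnormal  # higher => more anomalous

# Toy banks: normal prototypes around 0, abnormal prototypes around 3.
rng = np.random.default_rng(0)
normal_bank = rng.normal(0.0, 1.0, (32, 8))
abnormal_bank = rng.normal(3.0, 1.0, (8, 8))
print(dual_bank_score(rng.normal(3.0, 1.0, 8), normal_bank, abnormal_bank))
```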
arXiv Detail & Related papers (2024-03-19T02:16:32Z) - Exploring Pre-trained Text-to-Video Diffusion Models for Referring Video Object Segmentation [72.90144343056227]
We explore the visual representations produced from a pre-trained text-to-video (T2V) diffusion model for video understanding tasks.
We introduce a novel framework, termed "VD-IT", tailored with specially designed components built upon a fixed T2V model.
Our VD-IT achieves highly competitive results, surpassing many existing state-of-the-art methods.
arXiv Detail & Related papers (2024-03-18T17:59:58Z) - Random resistive memory-based deep extreme point learning machine for
unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z) - Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z) - A Coarse-to-Fine Pseudo-Labeling (C2FPL) Framework for Unsupervised
Video Anomaly Detection [4.494911384096143]
Detection of anomalous events in videos is an important problem in applications such as surveillance.
We propose a simple-but-effective two-stage pseudo-label generation framework that produces segment-level (normal/anomaly) pseudo-labels.
The proposed coarse-to-fine pseudo-label generator employs carefully-designed hierarchical divisive clustering and statistical hypothesis testing.
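A simplified sketch of the coarse-to-fine idea, assuming k-means in place of the paper's hierarchical divisive clustering and a plain z-score outlier test in place of its statistical hypothesis testing.

```python
import numpy as np
from sklearn.cluster import KMeans

def coarse_video_labels(video_stats):
    """Split videos into two clusters and treat the higher-statistic cluster as anomalous."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(video_stats)
    anomalous_cluster = int(np.argmax(km.cluster_centers_.mean(axis=1)))
    return km.labels_ == anomalous_cluster

def fine_segment_labels(segment_scores, z=2.0):
    """Flag segments whose score is a z-score outlier within the video."""
    mu, sigma = segment_scores.mean(), segment_scores.std() + 1e-8
    return (segment_scores - mu) / sigma > z

# Toy usage: four videos (coarse), then segments of one pseudo-anomalous video (fine).
print(coarse_video_labels(np.array([[0.1], [0.2], [0.9], [1.1]])))
print(fine_segment_labels(np.r_[np.full(19, 0.1), 5.0]))  # only the last segment is flagged
```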
arXiv Detail & Related papers (2023-10-26T17:59:19Z) - VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video
Anomaly Detection [58.47940430618352]
We propose VadCLIP, a new paradigm for weakly supervised video anomaly detection (WSVAD).
VadCLIP makes full use of fine-grained associations between vision and language on the strength of CLIP.
We conduct extensive experiments on two commonly-used benchmarks, demonstrating that VadCLIP achieves the best performance on both coarse-grained and fine-grained WSVAD.
arXiv Detail & Related papers (2023-08-22T14:58:36Z) - Towards Video Anomaly Retrieval from Video Anomaly Detection: New
Benchmarks and Model [70.97446870672069]
Video anomaly detection (VAD) has been paid increasing attention due to its potential applications.
Video Anomaly Retrieval (VAR) aims to pragmatically retrieve relevant anomalous videos by cross-modalities.
We present two benchmarks, UCFCrime-AR and XD-Violence, constructed on top of prevalent anomaly datasets.
arXiv Detail & Related papers (2023-07-24T06:22:37Z) - Unsupervised Video Anomaly Detection with Diffusion Models Conditioned
on Compact Motion Representations [17.816344808780965]
The unsupervised video anomaly detection (VAD) problem involves classifying each frame in a video as normal or abnormal, without any access to labels.
To accomplish this, the proposed method employs conditional diffusion models, where the input data consists of features extracted from a pre-trained network.
Our method utilizes a data-driven threshold and considers a high reconstruction error as an indicator of anomalous events.
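A minimal sketch of that scoring step, assuming the data-driven threshold is taken as a high percentile of reconstruction errors on normal training data (the percentile is an illustrative choice).

```python
import numpy as np

def data_driven_threshold(train_errors, q=99.0):
    """Pick the decision threshold from the training error distribution itself."""
    return np.percentile(train_errors, q)

def flag_anomalies(test_errors, threshold):
    """A frame is anomalous when its reconstruction error exceeds the threshold."""
    return test_errors > threshold

train_err = np.abs(np.random.default_rng(0).normal(0.0, 1.0, 1000))  # toy normal-data errors
thr = data_driven_threshold(train_err)
print(flag_anomalies(np.array([0.2, 5.0, 0.8]), thr))  # [False  True False]
```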
arXiv Detail & Related papers (2023-07-04T07:36:48Z) - Unsupervised Learning of Structured Representations via Closed-Loop
Transcription [21.78655495464155]
This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes.
We show that a unified representation can enjoy the mutual benefits of having both.
These structured representations enable classification close to state-of-the-art unsupervised discriminative representations.
arXiv Detail & Related papers (2022-10-30T09:09:05Z) - Modality-Aware Contrastive Instance Learning with Self-Distillation for
Weakly-Supervised Audio-Visual Violence Detection [14.779452690026144]
We propose a modality-aware contrastive instance learning with self-distillation (MACIL-SD) strategy for weakly-supervised audio-visual learning.
Our framework outperforms previous methods with lower complexity on the large-scale XD-Violence dataset.
arXiv Detail & Related papers (2022-07-12T12:42:21Z) - Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z) - Fine-grained Temporal Contrastive Learning for Weakly-supervised
Temporal Action Localization [87.47977407022492]
This paper argues that learning by contextually comparing sequence-to-sequence distinctions offers an essential inductive bias in weakly-supervised action localization.
Under a differentiable dynamic programming formulation, two complementary contrastive objectives are designed, including Fine-grained Sequence Distance (FSD) contrasting and Longest Common Subsequence (LCS) contrasting.
Our method achieves state-of-the-art performance on two popular benchmarks.
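The LCS contrasting objective builds on the classic longest-common-subsequence recurrence; a plain, non-differentiable version of that dynamic program is sketched below for intuition, whereas the paper uses a smoothed, differentiable relaxation over snippet sequences.

```python
def lcs_length(a, b):
    """Textbook LCS dynamic program: dp[i][j] = LCS length of a[:i] and b[:j]."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```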
arXiv Detail & Related papers (2022-03-31T05:13:50Z) - Activation to Saliency: Forming High-Quality Labels for Unsupervised
Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework achieves significant performance gains compared with existing USOD methods.
arXiv Detail & Related papers (2021-12-07T11:54:06Z) - Meta-UDA: Unsupervised Domain Adaptive Thermal Object Detection using
Meta-Learning [64.92447072894055]
Infrared (IR) cameras are robust under adverse illumination and lighting conditions.
We propose an algorithm-agnostic meta-learning framework to improve existing UDA methods.
We produce a state-of-the-art thermal detector for the KAIST and DSIAC datasets.
arXiv Detail & Related papers (2021-10-07T02:28:18Z) - Weakly-Supervised Spatio-Temporal Anomaly Detection in Surveillance
Video [128.41392860714635]
We introduce Weakly-Supervised Spatio-Temporal Anomaly Detection (WSSTAD) in surveillance video.
WSSTAD aims to localize a spatio-temporal tube (i.e., a sequence of bounding boxes at consecutive times) that encloses an abnormal event.
We propose a dual-branch network which takes as input proposals with multi-granularities in both spatial-temporal domains.
arXiv Detail & Related papers (2021-08-09T06:11:14Z) - An Efficient One-Class SVM for Anomaly Detection in the Internet of
Things [25.78558553080511]
Insecure Internet of Things (IoT) devices pose significant threats to critical infrastructure and the Internet at large.
Detecting anomalous behavior from these devices remains of critical importance.
One-Class Support Vector Machines (OCSVM) are one of the state-of-the-art approaches for novelty detection.
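For reference, a minimal, generic One-Class SVM novelty detector over per-device feature vectors looks as follows (toy data; this is scikit-learn's standard OCSVM, not the paper's efficiency-optimized variant).

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, (500, 4))           # benign device behaviour only
X_test = np.vstack([rng.normal(0.0, 1.0, (5, 4)),  # benign
                    rng.normal(6.0, 1.0, (5, 4))]) # anomalous

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
print(clf.predict(X_test))  # +1 = normal, -1 = anomaly
```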
arXiv Detail & Related papers (2021-04-22T15:59:56Z) - MIST: Multiple Instance Self-Training Framework for Video Anomaly
Detection [76.80153360498797]
We develop a multiple instance self-training framework (MIST) to efficiently refine task-specific discriminative representations.
MIST is composed of 1) a multiple instance pseudo label generator, which adapts a sparse continuous sampling strategy to produce more reliable clip-level pseudo labels, and 2) a self-guided attention boosted feature encoder.
Our method performs comparably to or even better than existing supervised and weakly supervised methods, specifically obtaining a frame-level AUC of 94.83% on ShanghaiTech.
arXiv Detail & Related papers (2021-04-04T15:47:14Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Unsupervised Video Anomaly Detection via Normalizing Flows with Implicit
Latent Features [8.407188666535506]
Most existing methods use an autoencoder to learn to reconstruct normal videos.
We propose an implicit two-path AE (ITAE), a structure in which two encoders implicitly model appearance and motion features.
To capture the complex distribution of normal scenes, we additionally estimate the density of ITAE features with normalizing flow (NF) models.
NF models improve ITAE performance by learning normality through implicitly learned features.
arXiv Detail & Related papers (2020-10-15T05:02:02Z) - A Self-Reasoning Framework for Anomaly Detection Using Video-Level
Labels [17.615297975503648]
Anomalous event detection in surveillance videos is a challenging and practical research problem in the image and video processing community.
We propose a weakly supervised anomaly detection framework based on deep neural networks which is trained in a self-reasoning fashion using only video-level labels.
The proposed framework has been evaluated on publicly available real-world anomaly detection datasets including UCF-Crime, ShanghaiTech and Ped2.
arXiv Detail & Related papers (2020-08-27T02:14:15Z) - Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z)