CHAD: Charlotte Anomaly Dataset
- URL: http://arxiv.org/abs/2212.09258v3
- Date: Thu, 1 Jun 2023 19:21:20 GMT
- Title: CHAD: Charlotte Anomaly Dataset
- Authors: Armin Danesh Pazho, Ghazal Alinezhad Noghre, Babak Rahimi Ardabili,
Christopher Neff, Hamed Tabkhi
- Abstract summary: We present the Charlotte Anomaly dataset (CHAD) for video anomaly detection.
CHAD is the first anomaly dataset to include bounding box, identity, and pose annotations for each actor.
With four camera views and over 1.15 million frames, CHAD is the largest fully annotated anomaly detection dataset.
- Score: 2.6774008509840996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, we have seen a significant interest in data-driven deep
learning approaches for video anomaly detection, where an algorithm must
determine if specific frames of a video contain abnormal behaviors. However,
video anomaly detection is particularly context-specific, and the availability
of representative datasets heavily limits real-world accuracy. Additionally,
the metrics currently reported by most state-of-the-art methods often do not
reflect how well the model will perform in real-world scenarios. In this
article, we present the Charlotte Anomaly Dataset (CHAD). CHAD is a
high-resolution, multi-camera anomaly dataset in a commercial parking lot
setting. In addition to frame-level anomaly labels, CHAD is the first anomaly
dataset to include bounding box, identity, and pose annotations for each actor.
This is especially beneficial for skeleton-based anomaly detection, which is
useful for its lower computational demand in real-world settings. CHAD is also
the first anomaly dataset to contain multiple views of the same scene. With
four camera views and over 1.15 million frames, CHAD is the largest fully
annotated anomaly detection dataset including person annotations, collected
from continuous video streams from stationary cameras for smart video
surveillance applications. To demonstrate the efficacy of CHAD for training and
evaluation, we benchmark two state-of-the-art skeleton-based anomaly detection
algorithms on CHAD and provide comprehensive analysis, including both
quantitative results and qualitative examination. The dataset is available at
https://github.com/TeCSAR-UNCC/CHAD.
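The abstract describes per-actor bounding box, identity, and pose annotations alongside frame-level anomaly labels. A minimal sketch of what such a record could look like in code is below; the field names and layout are illustrative assumptions, not the actual CHAD schema (see the repository linked above for the real format).

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical per-actor annotation record. Field names are assumptions
# for illustration only, not the actual CHAD annotation schema.
@dataclass
class ActorAnnotation:
    frame_id: int                                  # index of the video frame
    actor_id: int                                  # persistent identity across frames
    bbox: Tuple[float, float, float, float]        # (x, y, w, h) in pixels
    keypoints: List[Tuple[float, float, float]]    # (x, y, confidence) per joint

def anomalous_frames(frame_labels: List[int]) -> List[int]:
    """Return indices of frames carrying a frame-level anomaly label (1 = anomalous)."""
    return [i for i, y in enumerate(frame_labels) if y == 1]
```

A skeleton-based detector, as benchmarked in the paper, would consume only the `keypoints` sequences per `actor_id`, which is what keeps its computational demand low.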
Related papers
- VANE-Bench: Video Anomaly Evaluation Benchmark for Conversational LMMs [64.60035916955837]
VANE-Bench is a benchmark designed to assess the proficiency of Video-LMMs in detecting anomalies and inconsistencies in videos.
Our dataset comprises an array of videos synthetically generated using existing state-of-the-art text-to-video generation models.
We evaluate nine existing Video-LMMs, both open and closed sources, on this benchmarking task and find that most of the models encounter difficulties in effectively identifying the subtle anomalies.
arXiv Detail & Related papers (2024-06-14T17:59:01Z)
- ARC: A Generalist Graph Anomaly Detector with In-Context Learning [62.202323209244]
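Several of the papers in this list report frame-level AUROC (e.g. 88.3% on CUHK Avenue below). As a reminder of what that metric measures, here is a minimal pure-Python sketch: AUROC is the probability that a randomly chosen anomalous frame receives a higher anomaly score than a randomly chosen normal frame, with ties counted as half.

```python
def frame_level_auroc(scores, labels):
    """Frame-level AUROC: probability that a random anomalous frame
    scores higher than a random normal frame (ties count 0.5).
    scores: per-frame anomaly scores; labels: 1 = anomalous, 0 = normal."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both normal and anomalous frames")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

This O(n^2) form is for clarity only; in practice one would use a rank-based implementation such as `sklearn.metrics.roc_auc_score`.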
ARC is a generalist GAD approach that enables a "one-for-all" GAD model to detect anomalies across various graph datasets on the fly.
Equipped with in-context learning, ARC can directly extract dataset-specific patterns from the target dataset.
Extensive experiments on multiple benchmark datasets from various domains demonstrate the superior anomaly detection performance, efficiency, and generalizability of ARC.
arXiv Detail & Related papers (2024-05-27T02:42:33Z)
- Dynamic Erasing Network Based on Multi-Scale Temporal Features for Weakly Supervised Video Anomaly Detection [103.92970668001277]
We propose a Dynamic Erasing Network (DE-Net) for weakly supervised video anomaly detection.
We first propose a multi-scale temporal modeling module, capable of extracting features from segments of varying lengths.
Then, we design a dynamic erasing strategy, which dynamically assesses the completeness of the detected anomalies.
arXiv Detail & Related papers (2023-12-04T09:40:11Z)
- Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance by utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z)
- A New Comprehensive Benchmark for Semi-supervised Video Anomaly Detection and Anticipation [46.687762316415096]
We propose a new comprehensive dataset, NWPU Campus, containing 43 scenes, 28 classes of abnormal events, and 16 hours of videos.
It is the largest semi-supervised VAD dataset, with the largest number of scenes and anomaly classes, the longest duration, and the only one considering scene-dependent anomalies.
We propose a novel model capable of detecting and anticipating anomalous events simultaneously.
arXiv Detail & Related papers (2023-05-23T02:20:12Z)
- Understanding the Challenges and Opportunities of Pose-based Anomaly Detection [2.924868086534434]
Pose-based anomaly detection is a video-analysis technique for detecting anomalous events or behaviors by examining human pose extracted from the video frames.
In this work, we analyze and quantify the characteristics of two well-known video anomaly datasets to better understand the difficulties of pose-based anomaly detection.
We believe these experiments are beneficial for a better comprehension of pose-based anomaly detection and the datasets currently available.
arXiv Detail & Related papers (2023-03-09T18:09:45Z)
- Adaptive graph convolutional networks for weakly supervised anomaly detection in videos [42.3118758940767]
We propose a weakly supervised adaptive graph convolutional network (WAGCN) to model the contextual relationships among video segments.
We fully consider the influence of other video segments on the current segment when generating the anomaly probability score for each segment.
arXiv Detail & Related papers (2022-02-14T06:31:34Z)
- Anomaly Detection in Video Sequences: A Benchmark and Computational Model [25.25968958782081]
We contribute a new Large-scale Anomaly Detection (LAD) database as the benchmark for anomaly detection in video sequences.
It contains 2000 video sequences, including normal and abnormal video clips spanning 14 anomaly categories such as crash, fire, and violence.
It provides the annotation data, including video-level labels (abnormal/normal video, anomaly type) and frame-level labels (abnormal/normal video frame) to facilitate anomaly detection.
We propose a multi-task deep neural network to solve anomaly detection as a fully-supervised learning problem.
arXiv Detail & Related papers (2021-06-16T06:34:38Z)
- Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction [61.17654438176999]
We propose a novel and robust unsupervised video anomaly detection method by frame prediction with proper design.
Our proposed method obtains a frame-level AUROC score of 88.3% on the CUHK Avenue dataset.
arXiv Detail & Related papers (2020-11-05T11:34:12Z)
- Unsupervised Video Anomaly Detection via Normalizing Flows with Implicit Latent Features [8.407188666535506]
Most existing methods use an autoencoder to learn to reconstruct normal videos.
We propose an implicit two-path AE (ITAE), a structure in which two encoders implicitly model appearance and motion features.
To capture the complex distribution of normal scenes, we suggest estimating the density of ITAE features with normalizing flow (NF) models.
The NF models strengthen ITAE performance by learning normality through the implicitly learned features.
arXiv Detail & Related papers (2020-10-15T05:02:02Z)
- A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video [120.18562044084678]
Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years.
We propose a background-agnostic framework that learns from training videos containing only normal events.
arXiv Detail & Related papers (2020-08-27T18:39:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.