TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video
Surveillance
- URL: http://arxiv.org/abs/2209.12386v1
- Date: Mon, 26 Sep 2022 03:00:50 GMT
- Title: TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video
Surveillance
- Authors: Yajun Xu, Chuwen Huang, Yibing Nan, Shiguo Lian
- Abstract summary: Existing datasets of traffic accidents are either small-scale, not from surveillance cameras, not open-sourced, or not built for freeway scenes.
After integration and annotation along various dimensions, a large-scale traffic accident dataset named TAD is proposed in this work.
- Score: 2.1076255329439304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic traffic accident detection has appealed to the machine
vision community due to its implications for the development of autonomous
intelligent transportation systems (ITS) and its importance to traffic safety.
Most previous studies on efficient analysis and prediction of traffic
accidents, however, have used small-scale datasets with limited coverage, which
limits their effectiveness and applicability. Existing datasets of traffic
accidents are either small-scale, not from surveillance cameras, not
open-sourced, or not built for freeway scenes. Since accidents on freeways tend
to cause serious damage and happen too fast to be caught on the spot, an
open-sourced dataset of freeway traffic accidents collected from surveillance
cameras is in great need and of practical importance. To help the vision
community address these shortcomings, we endeavor to collect video data of real
traffic accidents covering a wide variety of scenes. After integration and
annotation along various dimensions, a large-scale traffic accident dataset
named TAD is proposed in this work. Various experiments on image
classification, object detection, and video classification tasks, using
mainstream public vision algorithms and frameworks, are conducted to
demonstrate the performance of different methods. The proposed dataset,
together with the experimental results, is presented as a new benchmark to
advance computer vision research, especially in ITS.
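As a rough illustration of the kind of video-classification baseline such a
benchmark can support (not the authors' experimental setup), the sketch below
scores a surveillance clip as accident or normal traffic with a generic 3D-CNN;
the backbone, clip shape, and label order are assumptions for illustration.

    # Minimal sketch, not the paper's setup: a generic 3D-CNN video classifier
    # scoring surveillance clips as accident vs. normal traffic.
    import torch
    import torch.nn as nn
    from torchvision.models.video import r3d_18

    model = r3d_18()                                 # randomly initialized backbone
    model.fc = nn.Linear(model.fc.in_features, 2)    # 2 classes: normal / accident
    model.eval()

    def classify_clip(clip: torch.Tensor) -> int:
        """clip: (3, T, H, W) float tensor of sampled, normalized frames."""
        with torch.no_grad():
            logits = model(clip.unsqueeze(0))        # add batch dimension
        return int(logits.argmax(dim=1))             # assumed label order: 1 = accident

    # Example: a random 16-frame clip at the backbone's native 112x112 resolution.
    dummy_clip = torch.randn(3, 16, 112, 112)
    print(classify_clip(dummy_clip))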
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z) - Application of 2D Homography for High Resolution Traffic Data Collection
using CCTV Cameras [9.946460710450319]
This study implements a three-stage video analytics framework for extracting high-resolution traffic data from CCTV cameras.
The key components of the framework include object recognition, perspective transformation, and vehicle trajectory reconstruction.
The results of the study showed an error rate of about +/- 4.5% for directional traffic counts and less than 10% MSE for the speed bias of the camera estimates.
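A minimal sketch of the perspective-transformation step is given below: a 2D
homography maps CCTV pixel coordinates onto road-plane coordinates. The point
correspondences are invented placeholders, not calibration data from the cited
study.

    # Sketch of the perspective-transformation step with a 2D homography.
    import numpy as np
    import cv2

    # Four image points (pixels) and their known road-plane positions (metres).
    image_pts = np.array([[420, 710], [880, 705], [760, 320], [530, 325]], dtype=np.float32)
    world_pts = np.array([[0, 0], [3.7, 0], [3.7, 60], [0, 60]], dtype=np.float32)

    H, _ = cv2.findHomography(image_pts, world_pts)

    def pixel_to_road(u, v):
        """Project one detected vehicle centre from pixels to road coordinates."""
        p = H @ np.array([u, v, 1.0])
        return float(p[0] / p[2]), float(p[1] / p[2])

    print(pixel_to_road(650, 500))  # e.g. position of a tracked vehicle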
arXiv Detail & Related papers (2024-01-14T07:33:14Z) - Smart City Transportation: Deep Learning Ensemble Approach for Traffic
Accident Detection [0.0]
We introduce the I3D-CONVLSTM2D model architecture, a lightweight solution tailored explicitly for accident detection in smart city traffic surveillance systems.
Empirical analysis from our experimental study underscores the approach's efficacy, with the I3D-CONVLSTM2D RGB + Optical-Flow (Trainable) model outperforming its counterparts and achieving 87% Mean Average Precision (mAP).
Our research illuminates the path towards a sophisticated vision-based accident detection system primed for real-time integration into edge IoT devices within smart urban infrastructures.
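A minimal sketch of preparing the optical-flow stream that such an RGB +
optical-flow two-stream classifier would consume is shown below; Farneback
dense flow is an assumption for illustration and may differ from the flow
estimator used in the cited work.

    # Sketch: build a stack of dense optical-flow fields for a short clip.
    import cv2
    import numpy as np

    def flow_stack(video_path, num_frames=16):
        """Return a (num_frames-1, H, W, 2) stack of dense flow fields."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            raise RuntimeError("could not read video: " + video_path)
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        flows = []
        for _ in range(num_frames - 1):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            flows.append(flow)
            prev_gray = gray
        cap.release()
        return np.stack(flows)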
arXiv Detail & Related papers (2023-10-16T03:47:08Z) - Implicit Occupancy Flow Fields for Perception and Prediction in
Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
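As a toy illustration of the implicit-field idea (not the cited architecture),
the sketch below maps spatio-temporal query points plus a scene feature to an
occupancy probability and a 2D flow vector with a single small network.

    # Toy sketch of an implicit occupancy-and-flow field; dimensions are illustrative.
    import torch
    import torch.nn as nn

    class OccFlowField(nn.Module):
        def __init__(self, feat_dim=64, hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),            # [occupancy logit, flow_x, flow_y]
            )

        def forward(self, query_xyt, scene_feat):
            out = self.mlp(torch.cat([query_xyt, scene_feat], dim=-1))
            return torch.sigmoid(out[..., :1]), out[..., 1:]

    # Query arbitrary (x, y, t) points instead of rasterizing a dense grid.
    field = OccFlowField()
    occupancy, flow = field(torch.rand(1000, 3), torch.rand(1000, 64))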
arXiv Detail & Related papers (2023-08-02T23:39:24Z) - A Memory-Augmented Multi-Task Collaborative Framework for Unsupervised
Traffic Accident Detection in Driving Videos [22.553356096143734]
We propose a novel memory-augmented multi-task collaborative framework (MAMTCF) for unsupervised traffic accident detection in driving videos.
Our method can more accurately detect both ego-involved and non-ego accidents by simultaneously modeling appearance changes and object motions in video frames.
arXiv Detail & Related papers (2023-07-27T01:45:13Z) - DeepAccident: A Motion and Accident Prediction Benchmark for V2X
Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Review on Action Recognition for Accident Detection in Smart City
Transportation Systems [0.0]
Monitoring traffic flows in a smart city using different surveillance cameras can play a significant role in recognizing accidents and alerting first responders.
The utilization of action recognition (AR) in computer vision tasks has contributed towards high-precision applications in video surveillance, medical imaging, and digital signal processing.
This paper provides potential research direction to develop and integrate accident detection systems for autonomous cars and public traffic safety systems.
arXiv Detail & Related papers (2022-08-20T03:21:44Z) - Real-Time Accident Detection in Traffic Surveillance Using Deep Learning [0.8808993671472349]
This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications.
The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method.
The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions.
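A minimal sketch of the detection stage alone, using YOLOv4 through OpenCV's
DNN module, is shown below; the configuration, weight, and video file names are
placeholders, and the later tracking and conflict-analysis steps of the
framework are not reproduced.

    # Sketch of the detection stage only: YOLOv4 inference via OpenCV DNN.
    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")  # placeholder files
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    cap = cv2.VideoCapture("intersection.mp4")       # placeholder surveillance video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
        for cid, score, box in zip(class_ids, scores, boxes):
            x, y, w, h = box                          # vehicle bounding box in pixels
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cap.release()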
arXiv Detail & Related papers (2022-08-12T19:07:20Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Training-free Monocular 3D Event Detection System for Traffic
Surveillance [93.65240041833319]
Existing event detection systems are mostly learning-based and have achieved convincing performance when a large amount of training data is available.
In real-world scenarios, collecting sufficient labeled training data is expensive and sometimes impossible.
We propose a training-free monocular 3D event detection system for traffic surveillance.
arXiv Detail & Related papers (2020-02-01T04:42:57Z)