Edge Computing for Real-Time Near-Crash Detection for Smart
Transportation Applications
- URL: http://arxiv.org/abs/2008.00549v3
- Date: Fri, 27 Aug 2021 05:48:36 GMT
- Title: Edge Computing for Real-Time Near-Crash Detection for Smart
Transportation Applications
- Authors: Ruimin Ke, Zhiyong Cui, Yanlong Chen, Meixin Zhu, Hao Yang, Yinhai
Wang
- Abstract summary: Traffic near-crash events serve as critical data sources for various smart transportation applications.
This paper leverages the power of edge computing to address these challenges by processing video streams from existing onboard dashcams in real time.
It is among the first efforts in applying edge computing for real-time traffic video analytics and is expected to benefit multiple sub-fields in smart transportation research and applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic near-crash events serve as critical data sources for various smart
transportation applications, such as being surrogate safety measures for
traffic safety research and corner case data for automated vehicle testing.
However, there are several key challenges for near-crash detection. First,
extracting near-crashes from original data sources requires significant
computing, communication, and storage resources. Also, existing methods lack
efficiency and transferability, which bottlenecks prospective large-scale
applications. To this end, this paper leverages the power of edge computing to
address these challenges by processing video streams from existing onboard
dashcams in real time. We design a multi-thread system architecture that
operates on edge devices and models the bounding boxes generated by object
detection and tracking with linear complexity. The method is insensitive to
camera parameters and backward compatible with different vehicles. The edge
computing system has been evaluated with recorded videos and real-world tests
on two cars and four buses for over ten thousand hours. It filters out
irrelevant videos in real time, thereby saving labor costs, processing time,
network bandwidth, and data storage. It collects not only event videos but also
other valuable data such as road user type, event location, time to collision,
vehicle trajectory, vehicle speed, brake switch, and throttle. The experiments
demonstrate the promising performance of the system regarding efficiency,
accuracy, reliability, and transferability. It is among the first efforts in
applying edge computing for real-time traffic video analytics and is expected
to benefit multiple sub-fields in smart transportation research and
applications.
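To make the bounding-box-based detection concrete, the sketch below shows one standard way a time-to-collision (TTC) signal can be derived from tracked boxes: for an approaching object, the image width of its box grows roughly as w(t) = f*W/Z(t), so TTC can be approximated as w / (dw/dt). The scale-expansion heuristic and the 2-second threshold are illustrative assumptions, not necessarily the authors' exact formulation.

```python
# Hypothetical sketch of monocular TTC estimation from tracked bounding boxes.
# The paper reports TTC among its outputs; this scale-expansion heuristic is a
# common approximation, not necessarily the paper's method.

def ttc_from_boxes(prev_width: float, curr_width: float, dt: float) -> float:
    """Approximate TTC (seconds) from the relative expansion of a tracked box.

    Since w(t) = f * W / Z(t) for focal length f, object width W, and
    distance Z, it follows that TTC = -Z / Z' = w / (dw/dt).
    """
    expansion_rate = (curr_width - prev_width) / dt
    if expansion_rate <= 0:          # box is shrinking: object not approaching
        return float("inf")
    return curr_width / expansion_rate

def is_near_crash(prev_width: float, curr_width: float, dt: float,
                  ttc_threshold: float = 2.0) -> bool:
    """Flag a near-crash when TTC drops below a threshold (assumed 2 s here)."""
    return ttc_from_boxes(prev_width, curr_width, dt) < ttc_threshold
```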
Related papers
- Real-Time Pedestrian Detection on IoT Edge Devices: A Lightweight Deep Learning Approach [1.4732811715354455]
This research explores implementing a lightweight deep learning model on Artificial Intelligence of Things (AIoT) edge devices.
An optimized You Only Look Once (YOLO) based DL model is deployed for real-time pedestrian detection.
The simulation results demonstrate that the optimized YOLO model achieves real-time pedestrian detection, with an inference latency of 147 milliseconds, a frame rate of 2.3 frames per second, and an accuracy of 78%.
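As a rough illustration of this kind of edge deployment (not the paper's exact setup: the yolov8n nano weights, camera index, and Ultralytics API below are assumptions):

```python
# Minimal sketch of real-time pedestrian detection with a lightweight YOLO
# model on an edge device. The paper does not specify its exact YOLO variant.
import time
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # nano model: small enough for edge devices
cap = cv2.VideoCapture(0)             # camera index 0 is assumed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    results = model(frame, classes=[0], verbose=False)  # class 0 = "person" in COCO
    latency_ms = (time.perf_counter() - t0) * 1000
    print(f"{len(results[0].boxes)} pedestrians, {latency_ms:.0f} ms/frame")
cap.release()
```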
arXiv Detail & Related papers (2024-09-24T04:48:41Z)
- Advance Real-time Detection of Traffic Incidents in Highways using Vehicle Trajectory Data [3.061662434597097]
This study uses vehicle trajectory data and traffic incident data on I-10, one of the most crash-prone highways in Louisiana.
Various machine learning algorithms are used to detect a trajectory that is likely to face an incident in the downstream road section.
Results suggest that the Random Forest model achieves the best performance for predicting an incident with reasonable recall value and discrimination capability.
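A minimal sketch of the kind of Random Forest classifier described, with placeholder features and data standing in for the study's trajectory measures:

```python
# Hedged sketch of a Random Forest incident classifier. Feature names and the
# random data are illustrative placeholders, not the study's actual inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, roc_auc_score

# X: per-trajectory features (e.g., mean speed, speed variance, deceleration
# events); y: 1 if the trajectory encountered a downstream incident.
X, y = np.random.rand(1000, 5), np.random.randint(0, 2, 1000)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("recall:", recall_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))  # discrimination
```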
arXiv Detail & Related papers (2024-08-15T00:51:48Z)
- Application of 2D Homography for High Resolution Traffic Data Collection using CCTV Cameras [9.946460710450319]
This study implements a three-stage video analytics framework for extracting high-resolution traffic data from CCTV cameras.
The key components of the framework include object recognition, perspective transformation, and vehicle trajectory reconstruction.
The study reported an error rate of about +/- 4.5% for directional traffic counts and less than 10% MSE for camera-based speed estimates.
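The perspective-transformation stage can be illustrated with a standard 2D homography; the point correspondences below are made-up calibration values, not the paper's:

```python
# Illustrative sketch of the perspective-transformation stage: a 2D homography
# maps pixel coordinates to road-plane coordinates so counts and speeds can be
# measured in real-world units.
import numpy as np
import cv2

pixel_pts = np.float32([[420, 310], [880, 305], [1180, 690], [140, 700]])
world_pts = np.float32([[0, 0], [7.2, 0], [7.2, 30.0], [0, 30.0]])  # meters

H = cv2.getPerspectiveTransform(pixel_pts, world_pts)

# Map a tracked vehicle centroid from image space to road coordinates.
centroid = np.float32([[[650, 500]]])                 # shape (1, 1, 2) for OpenCV
x_m, y_m = cv2.perspectiveTransform(centroid, H)[0, 0]
print(f"road-plane position: ({x_m:.1f} m, {y_m:.1f} m)")
```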
arXiv Detail & Related papers (2024-01-14T07:33:14Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction for the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
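A toy sketch of the implicit idea: a single network queried at continuous spatio-temporal points returns occupancy and flow. Layer sizes and feature layout are illustrative assumptions, not the paper's architecture:

```python
# Toy implicit occupancy-and-flow decoder: queried at continuous (x, y, t)
# points, it outputs an occupancy probability and a 2D flow vector.
import torch
import torch.nn as nn

class OccupancyFlowField(nn.Module):
    def __init__(self, scene_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + scene_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),        # 1 occupancy logit + 2 flow components
        )

    def forward(self, query_xyt, scene_feat):
        out = self.mlp(torch.cat([query_xyt, scene_feat], dim=-1))
        occupancy = torch.sigmoid(out[..., :1])    # probability in [0, 1]
        flow = out[..., 1:]                        # (dx, dy) motion
        return occupancy, flow

field = OccupancyFlowField()
queries = torch.rand(64, 3)                        # (x, y, t) query points
scene = torch.rand(64, 128)                        # per-query scene features
occ, flow = field(queries, scene)
```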
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video Surveillance [2.1076255329439304]
Existing datasets in traffic accidents are either small-scale, not from surveillance cameras, not open-sourced, or not built for freeway scenes.
After integration and annotation along various dimensions, a large-scale traffic accident dataset named TAD is proposed in this work.
arXiv Detail & Related papers (2022-09-26T03:00:50Z)
- Real-Time Accident Detection in Traffic Surveillance Using Deep Learning [0.8808993671472349]
This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications.
The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method.
The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions.
arXiv Detail & Related papers (2022-08-12T19:07:20Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
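For intuition, a generic FedAvg-style aggregation step (not the FLCAV paper's multi-stage procedure) looks roughly like this:

```python
# Minimal FedAvg-style sketch: each connected vehicle trains locally and only
# model weights are shared, which is how federated learning preserves privacy
# and reduces raw-data communication. Generic illustration only.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client models layer by layer, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three vehicles with two-layer models (placeholder weights).
clients = [[np.random.rand(4, 4), np.random.rand(4)] for _ in range(3)]
sizes = [1200, 800, 2000]             # local samples per vehicle
global_model = fed_avg(clients, sizes)
```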
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
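A hedged sketch of the second step, a small network regressing velocity from tracked-box motion; the feature layout over a five-frame window is an assumption:

```python
# Toy velocity regressor for the two-step idea: an off-the-shelf tracker
# supplies bounding boxes, and a small network maps their motion to speed.
import torch
import torch.nn as nn

class VelocityRegressor(nn.Module):
    """Maps a short history of box features to a scalar velocity (m/s)."""
    def __init__(self, window=5, feats_per_box=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window * feats_per_box, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, box_history):               # (batch, window * 4)
        return self.net(box_history)

model = VelocityRegressor()
boxes = torch.rand(8, 5 * 4)                      # (cx, cy, w, h) over 5 frames
velocities = model(boxes)                         # predicted m/s, untrained here
```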
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- An Experimental Urban Case Study with Various Data Sources and a Model for Traffic Estimation [65.28133251370055]
We organize an experimental campaign with video measurement in an area within the urban network of Zurich, Switzerland.
We focus on capturing the traffic state in terms of traffic flow and travel times by collecting measurements from established thermal cameras.
We propose a simple yet efficient Multiple Linear Regression (MLR) model to estimate travel times with fusion of various data sources.
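A minimal illustration of such an MLR fusion model, with illustrative stand-in predictors rather than the study's exact variables:

```python
# Simple sketch of a multiple linear regression fusing several data sources to
# estimate travel time. Predictors and values are made-up placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: camera flow (veh/h), occupancy (%), signal green ratio (placeholder).
X = np.array([[650, 12.0, 0.45],
              [820, 18.5, 0.40],
              [410,  7.5, 0.55],
              [900, 22.0, 0.35]])
y = np.array([95.0, 128.0, 74.0, 142.0])          # travel time in seconds

mlr = LinearRegression().fit(X, y)
print("coefficients:", mlr.coef_, "intercept:", mlr.intercept_)
print("predicted travel time:", mlr.predict([[700, 15.0, 0.42]])[0], "s")
```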
arXiv Detail & Related papers (2021-08-02T08:13:57Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)