Vehicle-Human Interactive Behaviors in Emergency: Data Extraction from
Traffic Accident Videos
- URL: http://arxiv.org/abs/2003.02059v2
- Date: Wed, 12 Aug 2020 04:10:05 GMT
- Title: Vehicle-Human Interactive Behaviors in Emergency: Data Extraction from
Traffic Accident Videos
- Authors: Wansong Liu, Danyang Luo, Changxu Wu, Minghui Zheng
- Abstract summary: Currently, studying vehicle-human interactive behavior in emergencies requires large datasets from actual emergency situations, which are almost unavailable.
This paper provides a new yet convenient way to extract interactive behavior data (i.e., the trajectories of vehicles and humans) from actual accident videos.
The main challenge of extracting data from real accident videos lies in the fact that the recording cameras are uncalibrated and the angles of surveillance are unknown.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, studying vehicle-human interactive behavior in
emergencies requires large datasets from actual emergency situations, which
are almost unavailable. Existing public data sources on autonomous vehicles (AVs)
mainly focus either on the normal driving scenarios or on emergency situations
without human involvement. To fill this gap and facilitate related research,
this paper provides a new yet convenient way to extract the interactive
behavior data (i.e., the trajectories of vehicles and humans) from actual
accident videos that were captured by both the surveillance cameras and driving
recorders. The main challenge of extracting data from real accident videos
lies in the fact that the recording cameras are uncalibrated and the angles of
surveillance are unknown. The approach proposed in this paper employs image
processing to obtain a new perspective which is different from the original
video's perspective. Meanwhile, we manually detect and mark object feature
points in each image frame. To acquire a gradient of reference ratios, a
geometric model is applied to analyze reference pixel values, and the feature
points are then scaled to the object trajectory based on the
gradient of ratios. The generated trajectories not only restore the object
movements completely but also reflect changes in vehicle velocity and rotation
based on the feature point distributions.
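The core extraction step described above (warping manually marked feature points from the raw camera view into a new perspective) can be sketched with a four-point direct linear transform. This is a minimal sketch, not the paper's implementation; the reference corners and feature coordinates below are hypothetical placeholders.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4 point pairs (DLT)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), likewise for v.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_points(H, pts):
    """Apply homography H to an (N, 2) array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the projective scale

# Example: hypothetical pixel corners of a road rectangle in the raw frame,
# and their target coordinates in a bird's-eye view (e.g., 100 px per metre).
src = np.array([[420., 310.], [880., 305.], [1010., 650.], [300., 660.]])
dst = np.array([[0., 0.], [350., 0.], [350., 600.], [0., 600.]])
H = fit_homography(src, dst)

# Manually marked feature points of a tracked object (one per frame),
# warped into the new perspective to form the trajectory.
feature_pts = np.array([[500., 400.], [520., 410.], [545., 425.]])
trajectory = warp_points(H, feature_pts)
```

Because the cameras are uncalibrated, the four reference correspondences stand in for intrinsic/extrinsic calibration; the trajectory's scale is only as accurate as the assumed real-world geometry of the reference points.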
Related papers
- Application of 2D Homography for High Resolution Traffic Data Collection
using CCTV Cameras [9.946460710450319]
This study implements a three-stage video analytics framework for extracting high-resolution traffic data from CCTV cameras.
The key components of the framework include object recognition, perspective transformation, and vehicle trajectory reconstruction.
The results of the study showed an error rate of about +/- 4.5% for directional traffic counts and less than 10% MSE for speed bias in the camera estimates.
arXiv Detail & Related papers (2024-01-14T07:33:14Z)
- A Memory-Augmented Multi-Task Collaborative Framework for Unsupervised
Traffic Accident Detection in Driving Videos [22.553356096143734]
We propose a novel memory-augmented multi-task collaborative framework (MAMTCF) for unsupervised traffic accident detection in driving videos.
Our method can more accurately detect both ego-involved and non-ego accidents by simultaneously modeling appearance changes and object motions in video frames.
arXiv Detail & Related papers (2023-07-27T01:45:13Z)
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X
Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- Real-Time Accident Detection in Traffic Surveillance Using Deep Learning [0.8808993671472349]
This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications.
The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method.
The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions.
arXiv Detail & Related papers (2022-08-12T19:07:20Z)
- Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z)
- Video Salient Object Detection via Contrastive Features and Attention
Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-03T17:40:32Z)
- Vehicle trajectory prediction in top-view image sequences based on deep
learning method [1.181206257787103]
Estimating and predicting the movement of surrounding vehicles is essential for automated vehicles and advanced safety systems.
A model with low computational complexity is proposed, which is trained on aerial images of the road.
The proposed model can predict a vehicle's future path on any freeway only by viewing images of the movement history of the target vehicle and its neighbors.
arXiv Detail & Related papers (2021-02-02T20:48:19Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- AutoTrajectory: Label-free Trajectory Extraction and Prediction from
Videos using Dynamic Points [92.91569287889203]
We present a novel, label-free algorithm, AutoTrajectory, for trajectory extraction and prediction.
To better capture the moving objects in videos, we introduce dynamic points.
We aggregate dynamic points to instance points, which stand for moving objects such as pedestrians in videos.
arXiv Detail & Related papers (2020-07-11T08:43:34Z)
- Towards Anomaly Detection in Dashcam Videos [9.558392439655012]
We propose to apply data-driven anomaly detection ideas from deep learning to dashcam videos.
We present a large and diverse dataset of truck dashcam videos, namely RetroTrucks.
We apply: (i) one-class classification loss and (ii) reconstruction-based loss, for anomaly detection on RetroTrucks.
arXiv Detail & Related papers (2020-04-11T00:10:40Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle
Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.