CitySim: A Drone-Based Vehicle Trajectory Dataset for Safety Oriented
Research and Digital Twins
- URL: http://arxiv.org/abs/2208.11036v2
- Date: Mon, 31 Jul 2023 05:04:11 GMT
- Title: CitySim: A Drone-Based Vehicle Trajectory Dataset for Safety Oriented
Research and Digital Twins
- Authors: Ou Zheng, Mohamed Abdel-Aty, Lishengsa Yue, Amr Abdelraouf, Zijin
Wang, Nada Mahmoud
- Abstract summary: CitySim has vehicle trajectories extracted from 1140 minutes of drone videos recorded at 12 locations.
CitySim was generated through a five-step procedure that ensured trajectory accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of safety-oriented research and applications requires
fine-grain vehicle trajectories that not only have high accuracy, but also
capture substantial safety-critical events. However, the available vehicle
trajectory datasets do not have the capacity to satisfy both of these
requirements. This paper introduces the CitySim
dataset that has the core objective of facilitating safety-oriented research
and applications. CitySim has vehicle trajectories extracted from 1140 minutes
of drone videos recorded at 12 locations. It covers a variety of road
geometries including freeway basic segments, signalized intersections,
stop-controlled intersections, and control-free intersections. CitySim was
generated through a five-step procedure that ensured trajectory accuracy. The
five-step procedure included video stabilization, object filtering, multi-video
stitching, object detection and tracking, and enhanced error filtering.
Furthermore, CitySim provides the rotated bounding box information of a
vehicle, which was demonstrated to improve safety evaluations. Compared with
other video-based trajectory datasets, CitySim captured substantially more
critical events, including cut-in, merge, and diverge events, which were
validated by distributions of both minimum time-to-collision and minimum
post-encroachment time. In addition, CitySim can
facilitate digital-twin-related research by providing relevant assets, such as
the recording locations' three-dimensional base maps and signal timings.
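The safety-surrogate measures used for validation above (minimum time-to-collision and minimum post-encroachment time) can be sketched directly from trajectory data. The following is a minimal illustration with made-up trajectory arrays, not the dataset's official tooling:

```python
import numpy as np

def min_ttc(lead_pos, follow_pos, lead_speed, follow_speed):
    """Minimum time-to-collision for a car-following pair.

    All inputs are per-frame 1-D arrays along the same lane axis;
    TTC is defined only while the follower closes in on the leader.
    """
    gap = lead_pos - follow_pos            # bumper-to-bumper gap (m)
    closing = follow_speed - lead_speed    # closing speed (m/s)
    valid = closing > 0                    # TTC undefined otherwise
    ttc = np.where(valid, gap / np.where(valid, closing, 1.0), np.inf)
    return ttc.min()

def pet(t_first_exit, t_second_entry):
    """Post-encroachment time: gap between the first road user leaving
    a conflict area and the second one entering it."""
    return t_second_entry - t_first_exit

# Toy example: follower at 20 m/s closing on a leader at 15 m/s, 30 m ahead.
lead_pos = np.array([30.0, 31.5, 33.0])
follow_pos = np.array([0.0, 2.0, 4.0])
lead_speed = np.full(3, 15.0)
follow_speed = np.full(3, 20.0)
print(min_ttc(lead_pos, follow_pos, lead_speed, follow_speed))  # → 5.8
print(pet(t_first_exit=12.0, t_second_entry=13.5))              # → 1.5 (s)
```

In practice these measures are computed over the rotated bounding boxes CitySim provides rather than point positions, which is what makes the rotated-box annotation useful for safety evaluation.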
Related papers
- Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery [4.387337528923525]
This paper presents a framework for extracting georeferenced vehicle trajectories from high-altitude drone footage.
We employ state-of-the-art computer vision and deep learning to create an end-to-end pipeline.
Results demonstrate the potential of integrating drone technology with advanced computer vision for precise, cost-effective urban traffic monitoring.
arXiv Detail & Related papers (2024-11-04T14:49:01Z) - RoboSense: Large-scale Dataset and Benchmark for Multi-sensor Low-speed Autonomous Driving [62.5830455357187]
In this paper, we construct a multimodal data collection platform based on 3 main types of sensors (Camera, LiDAR and Fisheye)
A large-scale multi-sensor dataset is built, named RoboSense, to facilitate near-field scene understanding.
RoboSense contains more than 133K synchronized data frames with 1.4M 3D bounding boxes and IDs across the full $360^\circ$ view, forming 216K trajectories across 7.6K temporal sequences.
arXiv Detail & Related papers (2024-08-28T03:17:40Z) - Vehicle Perception from Satellite [54.07157185000604]
The dataset is constructed based on 12 satellite videos and 14 synthetic videos recorded from GTA-V.
It supports several tasks, including tiny object detection, counting and density estimation.
A total of 128,801 vehicles are annotated, and the number of vehicles per image varies from 0 to 101.
arXiv Detail & Related papers (2024-02-01T15:59:16Z) - Application of 2D Homography for High Resolution Traffic Data Collection
using CCTV Cameras [9.946460710450319]
This study implements a three-stage video analytics framework for extracting high-resolution traffic data from CCTV cameras.
The key components of the framework include object recognition, perspective transformation, and vehicle trajectory reconstruction.
The results of the study showed an error rate of about ±4.5% for directional traffic counts and less than 10% MSE for camera-based speed estimates.
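The perspective-transformation step in such a framework can be sketched as a planar homography that maps image pixels to ground-plane coordinates. The matrix below is a hypothetical example for illustration (a real one would be estimated from at least four pixel/world point pairs, e.g. with `cv2.findHomography`); it is not taken from the paper:

```python
import numpy as np

def pixel_to_ground(H, px):
    """Map image pixel(s) to ground-plane coordinates via a 3x3 homography."""
    px = np.atleast_2d(px).astype(float)
    homog = np.hstack([px, np.ones((len(px), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]           # divide by scale w

# Hypothetical homography: a pure scaling of 0.05 m per pixel.
H = np.array([[0.05, 0.0, 0.0],
              [0.0, 0.05, 0.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_ground(H, [400, 200]))  # → [[20. 10.]]
```

Vehicle pixel tracks pushed through such a mapping become metric ground-plane trajectories, from which counts and speeds can be derived.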
arXiv Detail & Related papers (2024-01-14T07:33:14Z) - SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from
Traffic Monitoring Cameras [26.457695296042903]
We propose SKoPe3D, a unique synthetic vehicle keypoint dataset from a roadside perspective.
SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints.
Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data.
arXiv Detail & Related papers (2023-09-04T02:57:30Z) - OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping [84.65114565766596]
We present OpenLane-V2, the first dataset on topology reasoning for traffic scene structure.
OpenLane-V2 consists of 2,000 annotated road scenes that describe traffic elements and their correlation to the lanes.
We evaluate various state-of-the-art methods, and present their quantitative and qualitative results on OpenLane-V2 to indicate future avenues for investigating topology reasoning in traffic scenes.
arXiv Detail & Related papers (2023-04-20T16:31:22Z) - Real-Time Accident Detection in Traffic Surveillance Using Deep Learning [0.8808993671472349]
This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications.
The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method.
The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions.
arXiv Detail & Related papers (2022-08-12T19:07:20Z) - Scalable and Real-time Multi-Camera Vehicle Detection,
Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z) - An Experimental Urban Case Study with Various Data Sources and a Model
for Traffic Estimation [65.28133251370055]
We organize an experimental campaign with video measurement in an area within the urban network of Zurich, Switzerland.
We focus on capturing the traffic state in terms of traffic flow and travel times by ensuring measurements from established thermal cameras.
We propose a simple yet efficient Multiple Linear Regression (MLR) model to estimate travel times with fusion of various data sources.
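A multiple-linear-regression travel-time model of this kind can be sketched with ordinary least squares over fused features. The feature names, values, and coefficients below are entirely hypothetical, chosen only to show the shape of such a model, not the paper's data:

```python
import numpy as np

# Hypothetical fused features per time interval: camera flow (veh/h)
# and thermal-camera occupancy (%); target is measured travel time (s).
X = np.array([[400.0,  8.0],
              [650.0, 12.0],
              [900.0, 20.0],
              [1200.0, 31.0],
              [1500.0, 45.0]])
y = np.array([62.0, 70.0, 85.0, 110.0, 150.0])

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(flow, occupancy):
    """Estimate travel time (s) from the fitted MLR coefficients."""
    return coef[0] + coef[1] * flow + coef[2] * occupancy

print(predict(800.0, 18.0))  # fitted estimate for an unseen interval
```

The appeal of MLR here is interpretability and cheap refitting as new sensor feeds are added to the fusion.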
arXiv Detail & Related papers (2021-08-02T08:13:57Z) - Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z) - Edge Computing for Real-Time Near-Crash Detection for Smart
Transportation Applications [29.550609157368466]
Traffic near-crash events serve as critical data sources for various smart transportation applications.
This paper leverages the power of edge computing to address these challenges by processing the video streams from existing dashcams onboard in a real-time manner.
It is among the first efforts in applying edge computing for real-time traffic video analytics and is expected to benefit multiple sub-fields in smart transportation research and applications.
arXiv Detail & Related papers (2020-08-02T19:39:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.