Automatic vehicle trajectory data reconstruction at scale
- URL: http://arxiv.org/abs/2212.07907v2
- Date: Sun, 5 Nov 2023 16:40:04 GMT
- Title: Automatic vehicle trajectory data reconstruction at scale
- Authors: Yanbing Wang, Derek Gloudemans, Junyi Ji, Zi Nean Teoh, Lisa Liu,
Gergely Zachár, William Barbour, Daniel Work
- Abstract summary: We propose an automatic trajectory data reconciliation method to correct common errors in vision-based vehicle trajectory data.
We show that the reconciled trajectories improve the accuracy on all the tested input data for a wide range of measures.
- Score: 2.010294990327175
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper we propose an automatic trajectory data
reconciliation method to correct common errors in vision-based vehicle
trajectory data. Given "raw"
vehicle detection and tracking information from automatic video processing
algorithms, we propose a pipeline including (a) an online data association
algorithm to match fragments that describe the same object (vehicle), which is
formulated as a min-cost network circulation problem of a graph, and (b) a
one-step trajectory rectification procedure formulated as a quadratic program
to enhance raw detection data. The pipeline leverages vehicle dynamics and
physical constraints to associate tracked objects when they become fragmented,
remove measurement noise and outliers, and impute missing data caused by
fragmentation. We assess the capability of the proposed two-step pipeline to
reconstruct three benchmarking datasets: (1) a microsimulation dataset that is
artificially downgraded to replicate upstream errors, (2) a 15-min NGSIM
dataset that is manually perturbed, and (3) tracking data consisting of 3
scenes from
collections of video data recorded from 16-17 cameras on a section of the I-24
MOTION system, and compare with the corresponding manually-labeled ground truth
vehicle bounding boxes. All of the experiments show that the reconciled
trajectories improve the accuracy on all the tested input data for a wide range
of measures. Lastly, we show the design of a software architecture that is
currently deployed on the full-scale I-24 MOTION system, consisting of 276
cameras that cover 4.2 miles of I-24. We demonstrate the scalability of the
proposed reconciliation pipeline to process high-volume data on a daily basis.
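The association step above lends itself to a small illustration. The paper formulates fragment matching as a min-cost network circulation problem on a graph; the sketch below shows only the basic ingredient any such formulation needs, a physically motivated link cost between two fragments under a constant-velocity vehicle model. All names, data, and thresholds here are hypothetical, not the paper's implementation.

```python
# Illustrative sketch of the fragment-association idea: predict a fragment's
# end state forward under a constant-velocity model and measure how well
# another fragment's start matches. A min-cost solver would use such costs
# as edge weights; names and thresholds are hypothetical.

def predict(x, v, dt):
    """Constant-velocity prediction of position x after dt seconds."""
    return x + v * dt

def association_cost(frag_a, frag_b, max_gap=5.0):
    """Cost of linking frag_a (ends first) to frag_b (starts later).

    Each fragment is a dict with 't' (timestamps, s) and 'x' (positions, m).
    Returns None if the link is physically implausible.
    """
    dt = frag_b["t"][0] - frag_a["t"][-1]
    if dt <= 0 or dt > max_gap:      # must be a forward gap, not too long
        return None
    # estimate exit velocity from the last two samples of frag_a
    v = (frag_a["x"][-1] - frag_a["x"][-2]) / (frag_a["t"][-1] - frag_a["t"][-2])
    predicted = predict(frag_a["x"][-1], v, dt)
    return abs(predicted - frag_b["x"][0])   # prediction error as link cost

# two fragments of the same vehicle, split by a missed detection
a = {"t": [0.0, 1.0, 2.0], "x": [0.0, 10.0, 20.0]}   # ~10 m/s
b = {"t": [4.0, 5.0], "x": [40.5, 50.5]}
print(association_cost(a, b))   # small cost -> likely the same vehicle
```

A solver over all such pairwise costs, with capacities enforcing that each fragment joins at most one trajectory, recovers the min-cost-flow structure the paper describes.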
Related papers
- Application of 2D Homography for High Resolution Traffic Data Collection using CCTV Cameras [9.946460710450319]
This study implements a three-stage video analytics framework for extracting high-resolution traffic data from CCTV cameras.
The key components of the framework include object recognition, perspective transformation, and vehicle trajectory reconstruction.
The results of the study showed an error rate of about +/- 4.5% for directional traffic counts and less than 10% MSE for speed estimates from the cameras.
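The perspective-transformation stage in such a framework maps image pixels to road-plane coordinates with a 2D homography. A minimal sketch of the projective arithmetic follows; the 3x3 matrix below is a made-up pure-scale example, not an estimated camera calibration, which would normally come from at least four known point correspondences.

```python
# Minimal sketch of perspective transformation: a 3x3 homography H maps
# image pixels (u, v) to road-plane coordinates via homogeneous coordinates.
# H here is a hypothetical pure scale of 0.1 m/pixel, for illustration only.

def apply_homography(H, u, v):
    """Map pixel (u, v) to plane coordinates, dividing out projective scale."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

H = [[0.1, 0.0, 0.0],
     [0.0, 0.1, 0.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, 320, 240))   # pixel -> metres on the road plane
```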
arXiv Detail & Related papers (2024-01-14T07:33:14Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- SAPI: Surroundings-Aware Vehicle Trajectory Prediction at Intersections [4.982485708779067]
SAPI is a deep learning model to predict vehicle trajectories at intersections.
The proposed model consists of two encoders, based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and one decoder.
We evaluate SAPI on a proprietary dataset collected in real-world intersections through autonomous vehicles.
arXiv Detail & Related papers (2023-06-02T07:10:45Z)
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- Real-Time Accident Detection in Traffic Surveillance Using Deep Learning [0.8808993671472349]
This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications.
The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method.
The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions.
arXiv Detail & Related papers (2022-08-12T19:07:20Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Cyclist Trajectory Forecasts by Incorporation of Multi-View Video Information [2.984037222955095]
This article presents a novel approach to incorporate visual cues from video-data from a wide-angle stereo camera system mounted at an urban intersection into the forecast of cyclist trajectories.
We extract features from image and optical flow sequences using 3D convolutional neural networks (3D-ConvNet) and combine them with features extracted from the cyclist's past trajectory to forecast future cyclist positions.
arXiv Detail & Related papers (2021-06-30T11:34:43Z)
- Object Tracking by Detection with Visual and Motion Cues [1.7818230914983044]
Self-driving cars need to detect and track objects in camera images.
We present a simple online tracking algorithm that is based on a constant velocity motion model with a Kalman filter.
We evaluate our approach on the challenging BDD100K dataset.
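The constant-velocity Kalman filter this kind of tracker relies on can be sketched in a few lines. The 1-D example below tracks position and velocity from noisy position measurements; all noise parameters are hypothetical, chosen only for illustration.

```python
# Minimal 1-D constant-velocity Kalman filter of the kind used for
# tracking-by-detection. State is [position, velocity]; process noise q
# and measurement noise r are hypothetical.

def kf_step(x, v, P, z, dt=1.0, q=0.01, r=1.0):
    """One predict+update cycle. P is the 2x2 state covariance (list of lists)."""
    # predict: x' = x + v*dt under constant velocity
    x_pred = x + v * dt
    # propagate covariance P = F P F^T + Q with F = [[1, dt], [0, 1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # update with position measurement z (observation H = [1, 0])
    s = p00 + r                    # innovation covariance
    k0, k1 = p00 / s, p10 / s      # Kalman gain
    y = z - x_pred                 # innovation
    x_new = x_pred + k0 * y
    v_new = v + k1 * y
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, v_new, P_new

x, v, P = 0.0, 1.0, [[1.0, 0.0], [0.0, 1.0]]
for z in [1.1, 2.0, 3.2]:          # noisy position measurements
    x, v, P = kf_step(x, v, P, z)
print(round(x, 2), round(v, 2))    # filtered position and velocity
```

In a full 2-D tracker the same predict/update structure is applied per bounding box, with detections associated to tracks by their innovation distance.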
arXiv Detail & Related papers (2021-01-19T10:29:16Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - SoDA: Multi-Object Tracking with Soft Data Association [75.39833486073597]
Multi-object tracking (MOT) is a prerequisite for a safe deployment of self-driving cars.
We propose a novel approach to MOT that uses attention to compute track embeddings that encode dependencies between observed objects.
arXiv Detail & Related papers (2020-08-18T03:40:25Z) - Training-free Monocular 3D Event Detection System for Traffic
Surveillance [93.65240041833319]
Existing event detection systems are mostly learning-based and have achieved convincing performance when a large amount of training data is available.
In real-world scenarios, collecting sufficient labeled training data is expensive and sometimes impossible.
We propose a training-free monocular 3D event detection system for traffic surveillance.
arXiv Detail & Related papers (2020-02-01T04:42:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.