Extraction and Assessment of Naturalistic Human Driving Trajectories
from Infrastructure Camera and Radar Sensors
- URL: http://arxiv.org/abs/2004.01288v1
- Date: Thu, 2 Apr 2020 22:28:29 GMT
- Authors: Dominik Notz, Felix Becker, Thomas Kühbeck, Daniel Watzenig
- Abstract summary: We present a novel methodology to extract trajectories of traffic objects using infrastructure sensors.
Our vision pipeline accurately detects objects, fuses camera and radar detections and tracks them over time.
We show that our sensor fusion approach successfully combines the advantages of camera and radar detections and outperforms either sensor alone.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collecting realistic driving trajectories is crucial for training machine
learning models that imitate human driving behavior. Most of today's autonomous
driving datasets contain only a few trajectories per location and are recorded
with test vehicles that are cautiously driven by trained drivers. In particular
in interactive scenarios such as highway merges, the test driver's behavior
significantly influences other vehicles. This influence prevents recording the
whole traffic space of human driving behavior. In this work, we present a novel
methodology to extract trajectories of traffic objects using infrastructure
sensors. Infrastructure sensors allow us to record large amounts of data at a
single location and take the test drivers out of the loop. We develop both a hardware
setup consisting of a camera and a traffic surveillance radar and a trajectory
extraction algorithm. Our vision pipeline accurately detects objects, fuses
camera and radar detections and tracks them over time. We improve a
state-of-the-art object tracker by combining the tracking in image coordinates
with a Kalman filter in road coordinates. We show that our sensor fusion
approach successfully combines the advantages of camera and radar detections
and outperforms either sensor alone. Finally, we also evaluate the accuracy of
our trajectory extraction pipeline. For that, we equip our test vehicle with a
differential GPS sensor and use it to collect ground truth trajectories. With
this data we compute the measurement errors. We use the mean error to de-bias
the trajectories, and the remaining error standard deviation is of the same
magnitude as the inaccuracy of the ground truth data itself. Hence, the
extracted trajectories are not only naturalistic but also highly accurate,
demonstrating the potential of using infrastructure sensors to extract
real-world trajectories.
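To make the tracking step concrete: the abstract describes combining image-space tracking with a Kalman filter in road coordinates, fed by both camera and radar detections. Below is a minimal sketch of such a filter under a constant-velocity motion model. The class, the coordinate convention (s = longitudinal, d = lateral) and all noise values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Constant-velocity Kalman filter in road coordinates (s: longitudinal,
# d: lateral). State x = [s, d, v_s, v_d]. All names are illustrative.
class RoadKalmanFilter:
    def __init__(self, dt=0.05):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)                        # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * 0.1                  # process noise (tuning value)
        self.H = np.eye(2, 4)                     # we observe position only

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        # z: measured (s, d); R: 2x2 measurement covariance of the sensor.
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + R        # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Assumed per-sensor noise: cameras localize well laterally,
# radar localizes well longitudinally (range).
R_CAMERA = np.diag([2.0**2, 0.2**2])   # (s, d) std devs in metres, assumed
R_RADAR  = np.diag([0.3**2, 1.0**2])

kf = RoadKalmanFilter(dt=0.05)
kf.predict()
kf.update(np.array([41.8, 1.9]), R_RADAR)    # radar: accurate range
kf.update(np.array([42.5, 2.1]), R_CAMERA)   # camera: accurate lateral position
print(kf.x)  # fused estimate [s, d, v_s, v_d]
```

Feeding each detection into the same filter with a sensor-specific measurement covariance is what lets the update step fuse the two modalities: the radar constrains the longitudinal position while the camera constrains the lateral one, as reflected in the (assumed) covariances above.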
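The evaluation step, computing measurement errors against DGPS ground truth and using the mean error to de-bias the trajectories, reduces to a few lines. A minimal sketch, assuming the extracted and ground-truth positions have already been time-aligned and matched; the function name is hypothetical.

```python
import numpy as np

def debias_trajectory(extracted, ground_truth):
    """De-bias extracted positions using the mean measurement error.

    extracted, ground_truth: (N, 2) arrays of matched (s, d) positions
    in road coordinates. Returns the de-biased trajectory and the
    residual per-axis error standard deviation.
    """
    error = extracted - ground_truth
    bias = error.mean(axis=0)                     # systematic offset per axis
    debiased = extracted - bias                   # remove the bias
    residual_std = (debiased - ground_truth).std(axis=0)
    return debiased, residual_std
```

After this correction only zero-mean noise remains, whose standard deviation the paper finds to be of the same magnitude as the DGPS inaccuracy itself.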
Related papers
- Learning 3D Perception from Others' Predictions [64.09115694891679]
We investigate a new scenario to construct 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector.
For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area.
arXiv Detail & Related papers (2024-10-03T16:31:28Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Radar-based Automotive Localization using Landmarks in a Multimodal Sensor Graph-based Approach [0.0]
In this paper, we address the problem of localization with automotive-grade radars.
The system uses landmarks and odometry information as an abstraction layer.
A single, semantic landmark map is used and maintained for all sensors.
arXiv Detail & Related papers (2021-04-29T07:35:20Z)
- On the Role of Sensor Fusion for Object Detection in Future Vehicular Networks [25.838878314196375]
We evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate.
The final objective is to identify the optimal setup that would minimize the amount of data to be distributed over the channel.
arXiv Detail & Related papers (2021-04-23T18:58:37Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- CurbScan: Curb Detection and Tracking Using Multi-Sensor Fusion [0.8722958995761769]
Curb detection and tracking are useful in vehicle localization and path planning.
We propose an approach to detect and track curbs by fusing together data from multiple sensors.
Our algorithm maintains over 90% accuracy within 4.5-22 meters and 0-14 meters for the KITTI dataset and our dataset, respectively.
arXiv Detail & Related papers (2020-10-09T22:48:20Z)
- High-Precision Digital Traffic Recording with Multi-LiDAR Infrastructure Sensor Setups [0.0]
We investigate the impact of fused LiDAR point clouds compared to single LiDAR point clouds.
The evaluation of the extracted trajectories shows that a fused infrastructure approach significantly improves the tracking results and reaches accuracies within a few centimeters.
arXiv Detail & Related papers (2020-06-22T10:57:52Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
arXiv Detail & Related papers (2020-05-08T04:01:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.