Data-driven vehicle speed detection from synthetic driving simulator images
- URL: http://arxiv.org/abs/2104.09903v1
- Date: Tue, 20 Apr 2021 11:26:13 GMT
- Title: Data-driven vehicle speed detection from synthetic driving simulator images
- Authors: Antonio Hernández Martínez, Javier Lorenzo Díaz, Iván García Daza, David Fernández Llorca
- Abstract summary: We explore the use of synthetic images generated from a driving simulator to address vehicle speed detection.
We generate thousands of images with variability corresponding to multiple speeds, different vehicle types and colors, and lighting and weather conditions.
Two different approaches to map the sequence of images to an output speed (regression) are studied, including CNN-GRU and 3D-CNN.
- Score: 0.440401067183266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite all the challenges and limitations, vision-based vehicle speed
detection is gaining research interest due to its potential benefits, such as
cost reduction and enhanced functionality. As stated in a recent
survey [1], the use of learning-based approaches to address this problem is
still in its infancy. One of the main difficulties is the need for a large
amount of data, which must contain the input sequences and, more importantly,
the output values corresponding to the actual speed of the vehicles. Data
collection in this context requires a complex and costly setup to capture the
images from the camera synchronized with a high precision speed sensor to
generate the ground truth speed values. In this paper we explore, for the first
time, the use of synthetic images generated from a driving simulator (e.g.,
CARLA) to address vehicle speed detection using a learning-based approach. We
simulate a virtual camera placed over a stretch of road, and generate thousands
of images with variability corresponding to multiple speeds, different vehicle
types and colors, and lighting and weather conditions. Two different approaches
to map the sequence of images to an output speed (regression) are studied,
including CNN-GRU and 3D-CNN. We present preliminary results that support the
high potential of this approach to address vehicle speed detection.
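One of the two sequence-to-speed approaches named above, the CNN-GRU, can be illustrated with a minimal sketch: a small per-frame CNN extracts features, a GRU aggregates them over time, and a linear head regresses a single scalar speed. The layer sizes, input resolution, and class name below are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal CNN-GRU speed-regression sketch (PyTorch).
# Assumptions: grayscale clips of T frames; all hyperparameters are illustrative.
import torch
import torch.nn as nn

class CNNGRUSpeedRegressor(nn.Module):
    """Per-frame CNN features aggregated by a GRU, regressed to one speed value."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar speed (regression)

    def forward(self, clips):              # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # per-frame features
        _, h = self.gru(feats)             # final hidden state summarizes the clip
        return self.head(h[-1]).squeeze(-1)

model = CNNGRUSpeedRegressor()
pred = model(torch.randn(2, 8, 1, 64, 64))  # 2 clips of 8 frames -> 2 speeds
```

The 3D-CNN alternative would instead apply spatio-temporal convolutions directly to the stacked clip; the GRU variant shown here keeps the per-frame feature extractor and the temporal model cleanly separated.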
Related papers
- Digital twins to alleviate the need for real field data in vision-based vehicle speed detection systems [0.9899633398596672]
Accurate vision-based speed estimation is more cost-effective than traditional methods based on radar or LiDAR.
Deep learning approaches are very limited in this context due to the lack of available data.
In this work, we propose the use of digital twins built with the CARLA simulator to generate a large dataset representative of a specific real-world camera.
arXiv Detail & Related papers (2024-07-11T10:41:20Z)
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier [2.44755919161855]
This paper aims to enhance the ability to predict nighttime driving behavior by identifying taillights of both human-driven and autonomous vehicles.
The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road.
To address the limited nighttime data, a unique pixel-wise image processing technique is implemented to convert daytime images into realistic night images.
arXiv Detail & Related papers (2023-10-25T15:23:33Z)
- Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian enhanced low-rank tensor (LETC) framework featuring both lowrankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z)
- Towards view-invariant vehicle speed detection from driving simulator images [0.31498833540989407]
We address the question of whether complex 3D-CNN architectures are capable of implicitly learning view-invariant speeds using a single model.
The results are very promising as they show that a single model with data from multiple views reports even better accuracy than camera-specific models.
arXiv Detail & Related papers (2022-06-01T09:14:45Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Deep traffic light detection by overlaying synthetic context on arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state.
arXiv Detail & Related papers (2020-11-07T19:57:22Z)
- End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
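The last entry's core idea, regressing inter-vehicle distance and relative velocity from two time-consecutive monocular frames, can be sketched as follows. Stacking the frame pair channel-wise lets the network pick up motion cues implicitly; the network shape, input size, and names are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: joint distance / relative-velocity regression from a frame pair.
# Assumptions: RGB crops of the target vehicle; sizes are illustrative only.
import torch
import torch.nn as nn

class TwoFrameDistVelNet(nn.Module):
    """Regress [distance, relative velocity] from two consecutive frames."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),  # 2 RGB frames stacked
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # outputs: [distance, relative velocity]

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)  # motion is implicit in the pair
        return self.head(self.backbone(x))

net = TwoFrameDistVelNet()
out = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```

In the surveyed paper the inputs would come from tracked, vehicle-centric crops (their sampling mechanism for perspective distortion); here random tensors merely stand in for the two frames.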
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.