A dataset for audio-video based vehicle speed estimation
- URL: http://arxiv.org/abs/2212.01651v1
- Date: Sat, 3 Dec 2022 17:02:57 GMT
- Title: A dataset for audio-video based vehicle speed estimation
- Authors: Slobodan Djukanović, Nikola Bulatović, Ivana Čavor
- Abstract summary: We present a dataset of on-road audio-video recordings of single vehicles passing by a camera at known speeds.
The dataset is fully available and intended as a public benchmark to facilitate research in audio-video vehicle speed estimation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate speed estimation of road vehicles is important for several reasons.
One is speed limit enforcement, which represents a crucial tool in decreasing
traffic accidents and fatalities. Compared with other research areas and
domains, the number of available datasets for vehicle speed estimation is still
very limited. We present a dataset of on-road audio-video recordings of single
vehicles passing by a camera at known speeds, maintained stable by the on-board
cruise control. The dataset contains thirteen vehicles, selected to be as
diverse as possible in terms of manufacturer, production year, engine type,
power and transmission, resulting in a total of 400 annotated audio-video
recordings. The dataset is fully available and intended as a public benchmark
to facilitate research in audio-video vehicle speed estimation. In addition to
the dataset, we propose a cross-validation strategy that can be used when
training machine learning models for vehicle speed estimation. Two approaches
to the training-validation split of the dataset are proposed.
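One natural reading of a vehicle-level training-validation split is leave-one-vehicle-out cross-validation, where each fold holds out all recordings of a single vehicle so the model is always validated on an unseen vehicle. The sketch below is an illustration under that assumption, not the paper's actual protocol; the recording list and file names are hypothetical.

```python
from collections import defaultdict

def leave_one_vehicle_out(recordings):
    """Yield (held_out_vehicle, train, validation) splits in which every
    fold holds out all recordings of one vehicle.

    `recordings` is a list of (vehicle_id, recording_path) pairs; this
    schema is hypothetical, not the dataset's actual annotation format.
    """
    by_vehicle = defaultdict(list)
    for vehicle_id, path in recordings:
        by_vehicle[vehicle_id].append((vehicle_id, path))
    for held_out in sorted(by_vehicle):
        val = by_vehicle[held_out]
        train = [rec for vehicle, recs in by_vehicle.items()
                 if vehicle != held_out for rec in recs]
        yield held_out, train, val

# Toy example: three vehicles with two recordings each.
recs = [(v, f"rec_{v}_{i}.wav") for v in ("A", "B", "C") for i in (1, 2)]
for vehicle, train, val in leave_one_vehicle_out(recs):
    # Validation contains only the held-out vehicle; training never does.
    assert all(v == vehicle for v, _ in val)
    assert all(v != vehicle for v, _ in train)
```

With thirteen vehicles this yields thirteen folds; a recording-level random split would be the simpler alternative, at the cost of letting the same vehicle appear in both partitions.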
Related papers
- Multi-Source Urban Traffic Flow Forecasting with Drone and Loop Detector Data [61.9426776237409]
Drone-captured data can create an accurate multi-sensor mobility observatory for large-scale urban networks.
A simple yet effective graph-based model, HiMSNet, is proposed to integrate multiple data modalities and learn spatio-temporal correlations.
arXiv Detail & Related papers (2025-01-07T03:23:28Z)
- Deep Learning Enhanced Road Traffic Analysis: Scalable Vehicle Detection and Velocity Estimation Using PlanetScope Imagery [38.22365259129059]
This paper presents a method for detecting and estimating vehicle speeds using PlanetScope SuperDove satellite imagery.
We propose a Keypoint R-CNN model to track vehicle trajectories across RGB bands, leveraging band timing differences to estimate speed.
Results from drone comparison reveal underestimations, with average speeds of 112.85 km/h for satellite data versus 131.83 km/h from drone footage.
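The band-timing idea reduces to a displacement-over-delay calculation: the same vehicle is localized in two spectral bands captured a fraction of a second apart, and the ground displacement divided by the inter-band delay gives the speed. A minimal sketch, with all positions and delays illustrative rather than taken from the paper:

```python
def speed_from_band_offset(pos_band1_m, pos_band2_m, band_delay_s):
    """Estimate vehicle speed in km/h from its ground position (metres)
    in two spectral bands captured `band_delay_s` seconds apart.
    All inputs here are illustrative values, not PlanetScope specifics.
    """
    dx = pos_band2_m[0] - pos_band1_m[0]
    dy = pos_band2_m[1] - pos_band1_m[1]
    distance_m = (dx * dx + dy * dy) ** 0.5
    return distance_m / band_delay_s * 3.6  # convert m/s to km/h

# A vehicle displaced 8 m between bands 0.3 s apart -> 96 km/h.
print(round(speed_from_band_offset((0.0, 0.0), (8.0, 0.0), 0.3), 1))
```

Accuracy then hinges on the georeferencing of both band detections and on knowing the inter-band delay precisely, which is consistent with the underestimation the drone comparison reports.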
arXiv Detail & Related papers (2024-10-04T18:14:07Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- FARSEC: A Reproducible Framework for Automatic Real-Time Vehicle Speed Estimation Using Traffic Cameras [14.339217121537537]
Transportation-dependent systems, such as for navigation and logistics, have great potential to benefit from reliable speed estimation.
We provide a novel framework for automatic real-time vehicle speed calculation, which copes with more diverse data from publicly available traffic cameras.
Our framework is capable of handling realistic conditions such as camera movements and different video stream inputs automatically.
arXiv Detail & Related papers (2023-09-25T19:02:40Z)
- DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving [76.29141888408265]
We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
arXiv Detail & Related papers (2023-04-03T17:37:00Z)
- Unsupervised Driving Event Discovery Based on Vehicle CAN-data [62.997667081978825]
This work presents a simultaneous clustering and segmentation approach for vehicle CAN-data that identifies common driving events in an unsupervised manner.
We evaluate our approach with a dataset of real Tesla Model 3 vehicle CAN-data and a two-hour driving session that we annotated with different driving events.
arXiv Detail & Related papers (2023-01-12T13:10:47Z)
- Motion Planning and Control for Multi Vehicle Autonomous Racing at High Speeds [100.61456258283245]
This paper presents a multi-layer motion planning and control architecture for autonomous racing.
The proposed solution has been applied on a Dallara AV-21 racecar and tested at oval race tracks, achieving lateral accelerations up to 25 m/s².
arXiv Detail & Related papers (2022-07-22T15:16:54Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs [72.67604044776662]
We tackle the problem of spatio-temporal tagging of self-driving scenes from raw sensor data.
Our approach learns a universal embedding for all tags, enabling efficient tagging of many attributes and faster learning of new attributes with limited data.
arXiv Detail & Related papers (2020-11-12T02:18:16Z)
- Edge Computing for Real-Time Near-Crash Detection for Smart Transportation Applications [29.550609157368466]
Traffic near-crash events serve as critical data sources for various smart transportation applications.
This paper leverages the power of edge computing to address these challenges by processing the video streams from existing dashcams onboard in a real-time manner.
It is among the first efforts in applying edge computing for real-time traffic video analytics and is expected to benefit multiple sub-fields in smart transportation research and applications.
arXiv Detail & Related papers (2020-04-17T12:29:40Z)
- Vehicle Position Estimation with Aerial Imagery from Unmanned Aerial Vehicles [4.555256739812733]
This work describes a process to estimate a precise vehicle position from aerial imagery.
The state-of-the-art deep neural network Mask-RCNN is applied for that purpose.
A mean accuracy of 20 cm can be achieved with flight altitudes up to 100 m, Full-HD resolution, and frame-by-frame detection.
arXiv Detail & Related papers (2020-04-17T12:29:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.