Real Time Monocular Vehicle Velocity Estimation using Synthetic Data
- URL: http://arxiv.org/abs/2109.07957v1
- Date: Thu, 16 Sep 2021 13:10:27 GMT
- Title: Real Time Monocular Vehicle Velocity Estimation using Synthetic Data
- Authors: Robert McCraith, Lukas Neumann, Andrea Vedaldi
- Abstract summary: We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
- Score: 78.85123603488664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision is one of the primary sensing modalities in autonomous driving. In
this paper we look at the problem of estimating the velocity of road vehicles
from a camera mounted on a moving car. Contrary to prior methods that train
end-to-end deep networks that estimate the vehicles' velocity from the video
pixels, we propose a two-step approach where first an off-the-shelf tracker is
used to extract vehicle bounding boxes and then a small neural network is used
to regress the vehicle velocity from the tracked bounding boxes. Surprisingly,
we find that this still achieves state-of-the-art estimation performance with
the significant benefit of separating perception from dynamics estimation via a
clean, interpretable and verifiable interface which allows us to distill the
statistics which are crucial for velocity estimation. We show that the latter
can be used to easily generate synthetic training data in the space of bounding
boxes and use this to improve the performance of our method further.
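The abstract does not spell out the regressor's architecture, so below is a minimal sketch of the two-step idea, assuming PyTorch: step 1 (not shown) is any off-the-shelf tracker producing a short per-vehicle history of bounding boxes, and step 2 is a small network that regresses velocity from that box history. The box parameterization (cx, cy, w, h per frame), the 8-frame window, the 3-dimensional velocity output, and all layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: a small network regressing vehicle velocity from tracked boxes.
import torch
import torch.nn as nn

class BoxVelocityRegressor(nn.Module):
    def __init__(self, num_frames: int = 8, box_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_frames * box_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # assumed 3-D relative velocity output
        )

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (B, num_frames, box_dim), e.g. (cx, cy, w, h) from the tracker
        return self.mlp(boxes.flatten(start_dim=1))

# Because the interface between the two steps is just bounding boxes, training
# data can be generated synthetically in box space, as the abstract suggests:
model = BoxVelocityRegressor()
simulated_box_tracks = torch.rand(16, 8, 4)   # stand-in for synthetic box data
pred_velocity = model(simulated_box_tracks)   # (16, 3)
```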
Related papers
- FARSEC: A Reproducible Framework for Automatic Real-Time Vehicle Speed Estimation Using Traffic Cameras [14.339217121537537]
Transportation-dependent systems, such as for navigation and logistics, have great potential to benefit from reliable speed estimation.
We provide a novel framework for automatic real-time vehicle speed calculation, which copes with more diverse data from publicly available traffic cameras.
Our framework is capable of handling realistic conditions such as camera movements and different video stream inputs automatically.
arXiv Detail & Related papers (2023-09-25T19:02:40Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [57.19891435386843]
We present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view.
Our model runs at 25 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
arXiv Detail & Related papers (2022-11-15T13:52:41Z)
- Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian enhanced low-rank tensor (LETC) framework featuring both low-rankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose Velocity-aware streaming AP (VsAP) to jointly evaluate the accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Multi-Stream Attention Learning for Monocular Vehicle Velocity and Inter-Vehicle Distance Estimation [25.103483428654375]
Vehicle velocity and inter-vehicle distance estimation are essential for ADAS (Advanced driver-assistance systems) and autonomous vehicles.
Recent studies focus on using a low-cost monocular camera to perceive the environment around the vehicle in a data-driven fashion.
MSANet is proposed to extract different aspects of features, e.g., spatial and contextual features, for joint vehicle velocity and inter-vehicle distance estimation.
arXiv Detail & Related papers (2021-10-22T06:14:12Z)
- Data-driven vehicle speed detection from synthetic driving simulator images [0.440401067183266]
We explore the use of synthetic images generated from a driving simulator to address vehicle speed detection.
We generate thousands of images with variability corresponding to multiple speeds, different vehicle types and colors, and lighting and weather conditions.
Two different approaches to map the sequence of images to an output speed (regression) are studied, including CNN-GRU and 3D-CNN (a minimal sketch of such a sequence-to-speed regressor is given after this list).
arXiv Detail & Related papers (2021-04-20T11:26:13Z)
- End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
- Traffic Data Imputation using Deep Convolutional Neural Networks [2.7647400328727256]
We show that a well trained neural network can learn traffic speed dynamics from time-space diagrams.
Our results show that with vehicle penetration probe levels as low as 5%, the proposed estimation method can provide a sound reconstruction of macroscopic traffic speeds.
arXiv Detail & Related papers (2020-01-21T12:52:58Z)
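Referring back to the "Data-driven vehicle speed detection from synthetic driving simulator images" entry above, the sketch below shows what one of the two regression approaches it mentions (CNN-GRU) could look like: a small per-frame CNN encoder, a GRU over the frame sequence, and a linear head producing a scalar speed. The layer sizes, the 8-frame clips, and the 64x64 resolution are assumptions for illustration, not the configuration used in that paper.

```python
# Sketch only: CNN-GRU mapping a sequence of frames to a single speed value.
import torch
import torch.nn as nn

class CnnGruSpeedRegressor(nn.Module):
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        # Per-frame encoder: a few strided conv blocks + global average pooling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B*T, 128)
        )
        # Temporal aggregation over the frame sequence.
        self.gru = nn.GRU(input_size=128, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)              # scalar speed

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.gru(feats)                  # (1, B, hidden_dim)
        return self.head(last_hidden[-1]).squeeze(-1)     # (B,)

# Example: two simulated 8-frame clips of 64x64 images.
speeds = CnnGruSpeedRegressor()(torch.randn(2, 8, 3, 64, 64))
```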
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.