Detection of 3D Bounding Boxes of Vehicles Using Perspective
Transformation for Accurate Speed Measurement
- URL: http://arxiv.org/abs/2003.13137v2
- Date: Tue, 4 Aug 2020 23:19:58 GMT
- Title: Detection of 3D Bounding Boxes of Vehicles Using Perspective
Transformation for Accurate Speed Measurement
- Authors: Viktor Kocur and Milan Ftáčnik
- Abstract summary: We present an improved version of our algorithm for detection of 3D bounding boxes of vehicles captured by traffic surveillance cameras.
Our algorithm utilizes the known geometry of vanishing points in the surveilled scene to construct a perspective transformation.
Compared to other published state-of-the-art fully automatic results, our algorithm reduces the mean absolute speed measurement error by 32% (1.10 km/h to 0.75 km/h) and the absolute median error by 40% (0.97 km/h to 0.58 km/h).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detection and tracking of vehicles captured by traffic surveillance cameras
is a key component of intelligent transportation systems. We present an
improved version of our algorithm for detection of 3D bounding boxes of
vehicles, their tracking and subsequent speed estimation. Our algorithm
utilizes the known geometry of vanishing points in the surveilled scene to
construct a perspective transformation. The transformation enables an intuitive
simplification of the problem of detecting 3D bounding boxes to detection of 2D
bounding boxes with one additional parameter using a standard 2D object
detector. The main contributions of this paper are an improved, more robust and
fully automatic construction of the perspective transformation and an extended
experimental evaluation of speed estimation. We test our algorithm on
the speed estimation task of the BrnoCompSpeed dataset. We evaluate our
approach with different configurations to gauge the relationship between
accuracy and computational costs and benefits of 3D bounding box detection over
2D detection. All of the tested configurations run in real-time and are fully
automatic. Compared to other published state-of-the-art fully automatic
results, our algorithm reduces the mean absolute speed measurement error by 32%
(from 1.10 km/h to 0.75 km/h) and the absolute median error by 40% (from 0.97
km/h to 0.58 km/h).
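As a rough illustration of the pipeline the abstract describes (rectify the road plane with a perspective transformation, track vehicles, convert displacement to speed), the sketch below assumes the homography `H` and the metric scale factor are already known from the vanishing-point geometry; all function names and parameters are hypothetical and do not reproduce the authors' code:

```python
import numpy as np

def warp_point(H, p):
    """Map an image point into the rectified road plane via homography H."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]  # homogeneous normalization

def estimate_speed_kmh(H, track, fps, scale_m_per_unit):
    """Estimate vehicle speed from per-frame image points of its ground contact.

    track: (x, y) image coordinates of a reference point on the vehicle,
    one per frame. H rectifies the road plane; scale_m_per_unit converts
    rectified units to metres (both assumed calibrated beforehand).
    """
    pts = np.array([warp_point(H, p) for p in track])
    # Sum frame-to-frame displacements in the rectified (metric) plane.
    dists = np.linalg.norm(np.diff(pts, axis=0), axis=1) * scale_m_per_unit
    elapsed_s = (len(track) - 1) / fps
    return dists.sum() / elapsed_s * 3.6  # m/s -> km/h
```

With an identity homography and a track advancing one metre per frame at 25 fps, this yields 90 km/h; in practice H would be constructed from the detected vanishing points and the scale recovered from known scene dimensions.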
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- KAN-RCBEVDepth: A multi-modal fusion algorithm in object detection for autonomous driving [2.382388777981433]
This paper introduces the KAN-RCBEVDepth method to enhance 3D object detection in autonomous driving.
Our unique Bird's Eye View-based approach significantly improves detection accuracy and efficiency.
The code will be released at https://www.laitiamo.com/laitiamo/RCBEVDepth-KAN.
arXiv Detail & Related papers (2024-08-04T16:54:49Z)
- oTTC: Object Time-to-Contact for Motion Estimation in Autonomous Driving [4.707950656037167]
Autonomous driving systems rely heavily on object detection to avoid collisions and drive safely.
Monocular 3D object detectors try to solve this problem by directly predicting 3D bounding boxes and object velocities given a camera image.
Recent research estimates time-to-contact in a per-pixel manner and suggests that it is a more effective measure than velocity and depth combined.
We propose per-object time-to-contact estimation by extending object detection models to additionally predict the time-to-contact attribute for each object.
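The scale-change idea behind time-to-contact can be made concrete with a generic sketch (this is the textbook TTC-from-image-growth relation, not the oTTC paper's model): for an object approaching at constant velocity, its image width w is inversely proportional to depth Z, so TTC = Z / (-dZ/dt) = w / (dw/dt).

```python
def time_to_contact(w_prev, w_curr, dt):
    """Estimate time-to-contact from the growth of an object's image width.

    Image width w scales as 1/Z for depth Z, hence Z / (-dZ/dt) = w / (dw/dt).
    Returns infinity when the box is not growing (object not approaching).
    """
    dw_dt = (w_curr - w_prev) / dt
    if dw_dt <= 0:
        return float("inf")
    return w_curr / dw_dt
```

For example, a box growing from 100 px to 110 px over 0.1 s gives TTC = 110 / 1000 = 1.1 s.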
arXiv Detail & Related papers (2024-05-13T12:34:18Z)
- Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian-enhanced low-rank tensor (LETC) framework featuring both low-rankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z)
- Homography Loss for Monocular 3D Object Detection [54.04870007473932]
A differentiable loss function, termed Homography Loss, is proposed to achieve this goal by exploiting both 2D and 3D information.
Our method outperforms other state-of-the-art methods by a large margin on the KITTI 3D dataset.
arXiv Detail & Related papers (2022-04-02T03:48:03Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
arXiv Detail & Related papers (2021-07-11T23:56:53Z)
- Achieving Real-Time Object Detection on Mobile Devices with Neural Pruning Search [45.20331644857981]
We propose a compiler-aware neural pruning search framework to achieve high-speed inference on autonomous vehicles for 2D and 3D object detection.
For the first time, the proposed method achieves (close-to) real-time computation, with 55 ms and 99 ms inference times for YOLOv4-based 2D object detection and PointPillars-based 3D detection.
arXiv Detail & Related papers (2021-06-28T18:59:20Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed and a pseudo-LiDAR point cloud representation is computed from the depth estimates; then object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- Single-Shot 3D Detection of Vehicles from Monocular RGB Images via Geometry Constrained Keypoints in Real-Time [6.82446891805815]
We propose a novel 3D single-shot object detection method for detecting vehicles in monocular RGB images.
Our approach lifts 2D detections to 3D space by predicting additional regression and classification parameters.
We test our approach on different datasets for autonomous driving and evaluate it using the challenging KITTI 3D Object Detection and the novel nuScenes Object Detection benchmarks.
arXiv Detail & Related papers (2020-06-23T15:10:19Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.