Real-Time And Robust 3D Object Detection with Roadside LiDARs
- URL: http://arxiv.org/abs/2207.05200v1
- Date: Mon, 11 Jul 2022 21:33:42 GMT
- Title: Real-Time And Robust 3D Object Detection with Roadside LiDARs
- Authors: Walter Zimmer, Jialong Wu, Xingcheng Zhou, Alois C. Knoll
- Abstract summary: We design a 3D object detection model that can detect traffic participants in roadside LiDARs in real-time.
Our model uses an existing 3D detector as a baseline and improves its accuracy.
Our LiDAR-based 3D detector can be used for smart city applications to provide connected vehicles with a far-reaching view.
- Score: 20.10416681832639
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work aims to address the challenges in autonomous driving by focusing on
the 3D perception of the environment using roadside LiDARs. We design a 3D
object detection model that can detect traffic participants in roadside LiDARs
in real-time. Our model uses an existing 3D detector as a baseline and improves
its accuracy. To prove the effectiveness of our proposed modules, we train and
evaluate the model on three different vehicle and infrastructure datasets. To
show the domain adaptation ability of our detector, we train it on an
infrastructure dataset from China and perform transfer learning on a different
dataset recorded in Germany. We run several sets of experiments and ablation
studies for each module in the detector, which show that our model outperforms
the baseline by a significant margin while maintaining an inference speed of
45 Hz (22 ms). A key contribution is our LiDAR-based 3D detector, which
can be used for smart city applications to provide connected and automated
vehicles with a far-reaching view. Vehicles that are connected to the roadside
sensors can get information about other vehicles around the corner to improve
their path and maneuver planning and to increase road traffic safety.
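The paper does not include code here, but the transfer-learning step described above (pretraining on a Chinese infrastructure dataset, then adapting to a German one) follows a standard fine-tuning recipe. The sketch below is a minimal, hypothetical illustration of that recipe in PyTorch: the ToyDetector, the TargetDomainPointClouds dataset, the placeholder weights file, and the choice to freeze the backbone are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fine-tuning a detector pretrained on a
# source domain (e.g. Chinese infrastructure data) on a new roadside-LiDAR domain.
# The detector, dataset, and loss are stand-in placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class TargetDomainPointClouds(Dataset):
    """Placeholder for the target-domain (e.g. German infrastructure) dataset."""
    def __init__(self, num_frames=100, points_per_frame=2048):
        self.frames = [torch.randn(points_per_frame, 4) for _ in range(num_frames)]

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        # A real dataset would return 3D box labels; here we regress a dummy target.
        return self.frames[idx], torch.zeros(7)


class ToyDetector(nn.Module):
    """Stand-in for a LiDAR 3D detector (backbone + detection head)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Linear(64, 7)  # e.g. (x, y, z, l, w, h, yaw)

    def forward(self, points):
        features = self.backbone(points).mean(dim=1)  # crude global pooling
        return self.head(features)


def finetune(model, loader, epochs=1, lr=1e-4):
    # Freeze the backbone pretrained on the source domain; adapt only the head.
    for p in model.backbone.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(model.head.parameters(), lr=lr)
    criterion = nn.SmoothL1Loss()
    for _ in range(epochs):
        for points, target in loader:
            loss = criterion(model(points), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model


if __name__ == "__main__":
    detector = ToyDetector()
    # detector.load_state_dict(torch.load("source_domain_weights.pt"))  # placeholder path
    loader = DataLoader(TargetDomainPointClouds(), batch_size=8)
    finetune(detector, loader)
```

In practice the frozen/trainable split, learning rate, and number of epochs would be tuned per target dataset; the point of the sketch is only that the pretrained source-domain weights are loaded first and the model is then adapted on the new domain.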
Related papers
- Empowering Urban Traffic Management: Elevated 3D LiDAR for Data Collection and Advanced Object Detection Analysis [4.831084635928491]
This paper presents a novel framework that transforms the detection and analysis of 3D objects in traffic scenarios by leveraging elevated LiDAR sensors.
Due to the limited availability of real-world traffic datasets, we use a simulator to generate 3D point clouds for specific scenarios.
arXiv Detail & Related papers (2024-05-21T21:12:09Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from
Traffic Monitoring Cameras [26.457695296042903]
We propose SKoPe3D, a unique synthetic vehicle keypoint dataset from a roadside perspective.
SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints.
Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data.
arXiv Detail & Related papers (2023-09-04T02:57:30Z) - HUM3DIL: Semi-supervised Multi-modal 3D Human Pose Estimation for
Autonomous Driving [95.42203932627102]
3D human pose estimation is an emerging technology that can enable autonomous vehicles to perceive and understand the subtle and complex behaviors of pedestrians.
Our method efficiently makes use of these complementary signals in a semi-supervised fashion and outperforms existing methods by a large margin.
Specifically, we embed LiDAR points into pixel-aligned multi-modal features, which we pass through a sequence of Transformer refinement stages (a generic sketch of this pixel-alignment step appears after this list).
arXiv Detail & Related papers (2022-12-15T11:15:14Z) - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object
Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the robustness of state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z) - Cyber Mobility Mirror: Deep Learning-based Real-time 3D Object
Perception and Reconstruction Using Roadside LiDAR [14.566471856473813]
Cyber Mobility Mirror is a next-generation real-time traffic surveillance system for 3D object detection, classification, tracking, and reconstruction.
Results from field tests demonstrate that our prototype system can provide satisfactory perception performance with 96.99% precision and 83.62% recall.
High-fidelity real-time traffic conditions can be displayed on the GUI of the equipped vehicle with a frequency of 3-4 Hz.
arXiv Detail & Related papers (2022-02-28T01:58:24Z) - Weakly Supervised Training of Monocular 3D Object Detectors Using Wide
Baseline Multi-view Traffic Camera Data [19.63193201107591]
7DoF prediction of vehicles at an intersection is an important task for assessing potential conflicts between road users.
We develop a weakly supervised method for fine-tuning 3D object detectors for traffic observation cameras.
Our method achieves vehicle 7DoF pose prediction accuracy on our dataset comparable to the top performing monocular 3D object detectors on autonomous vehicle datasets.
arXiv Detail & Related papers (2021-10-21T08:26:48Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified and learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE provides the research community with a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
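As a side note to the HUM3DIL entry above, "pixel-aligned multi-modal features" generally amounts to projecting each LiDAR point into the camera image and sampling image features at the projected pixel. The following is a generic, hypothetical sketch of that projection-and-sampling step, not the paper's implementation; the camera intrinsics, feature map, and point cloud are synthetic placeholders.

```python
# Minimal sketch (assumption, not the HUM3DIL code): project LiDAR points into the
# image plane and sample image features at the projected pixels, producing
# "pixel-aligned" per-point features.
import torch
import torch.nn.functional as F


def pixel_aligned_features(points_cam, image_features, intrinsics):
    """Sample per-point image features for LiDAR points in camera coordinates.

    points_cam:     (N, 3) points in the camera frame, z > 0 in front of the camera
    image_features: (C, H, W) feature map from an image backbone
    intrinsics:     (3, 3) pinhole camera matrix
    returns:        (N, C) image feature for each point
    """
    C, H, W = image_features.shape
    # Perspective projection: u = fx * x / z + cx, v = fy * y / z + cy
    uv = (intrinsics @ points_cam.T).T            # (N, 3)
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)   # (N, 2) pixel coordinates

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.empty_like(uv)
    grid[:, 0] = uv[:, 0] / (W - 1) * 2 - 1
    grid[:, 1] = uv[:, 1] / (H - 1) * 2 - 1
    grid = grid.view(1, 1, -1, 2)                 # (1, 1, N, 2)

    sampled = F.grid_sample(image_features.unsqueeze(0), grid, align_corners=True)
    return sampled.squeeze(0).squeeze(1).T        # (N, C)


if __name__ == "__main__":
    points = torch.rand(1024, 3) * torch.tensor([4.0, 2.0, 30.0]) + torch.tensor([-2.0, -1.0, 1.0])
    feats = torch.randn(64, 96, 312)              # e.g. a downsampled image feature map
    K = torch.tensor([[500.0, 0.0, 156.0], [0.0, 500.0, 48.0], [0.0, 0.0, 1.0]])
    per_point = pixel_aligned_features(points, feats, K)
    print(per_point.shape)                        # torch.Size([1024, 64])
```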
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.