An Efficient Approach to Generate Safe Drivable Space by LiDAR-Camera-HDmap Fusion
- URL: http://arxiv.org/abs/2410.22314v1
- Date: Tue, 29 Oct 2024 17:54:02 GMT
- Title: An Efficient Approach to Generate Safe Drivable Space by LiDAR-Camera-HDmap Fusion
- Authors: Minghao Ning, Ahmad Reza Alghooneh, Chen Sun, Ruihe Zhang, Pouya Panahandeh, Steven Tuer, Ehsan Hashemi, Amir Khajepour
- Abstract summary: We propose an accurate and robust perception module for Autonomous Vehicles (AVs) to extract drivable space.
Our work introduces a robust, easy-to-generalize perception module that leverages LiDAR, camera, and HD map data fusion.
Our approach is tested on a real dataset, and its reliability is verified during daily operation (including in harsh snowy weather) of our autonomous shuttle, WATonoBus.
- Abstract: In this paper, we propose an accurate and robust perception module for Autonomous Vehicles (AVs) for drivable space extraction. Perception is crucial in autonomous driving, where many deep learning-based methods, while accurate on benchmark datasets, fail to generalize effectively, especially in diverse and unpredictable environments. Our work introduces a robust, easy-to-generalize perception module that leverages LiDAR, camera, and HD map data fusion to deliver a safe and reliable drivable space in all weather conditions. We present an adaptive ground removal and curb detection method integrated with HD map data for enhanced obstacle detection reliability. Additionally, we propose an adaptive DBSCAN clustering algorithm optimized for precipitation noise, and a cost-effective LiDAR-camera frustum association that is resilient to calibration discrepancies. Our comprehensive drivable space representation incorporates all perception data, ensuring compatibility with vehicle dimensions and road regulations. This approach not only improves generalization and efficiency, but also significantly enhances safety in autonomous vehicle operations. Our approach is tested on a real dataset, and its reliability is verified during daily operation (including in harsh snowy weather) of our autonomous shuttle, WATonoBus.
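The adaptive DBSCAN idea mentioned in the abstract can be pictured with a small sketch. The abstract does not give the paper's exact adaptation rule, so the range-dependent neighborhood radius below (`adaptive_eps`, `base_eps`, and `scale` are illustrative assumptions, not the authors' method) only conveys the general intuition: LiDAR returns get sparser with range, so a fixed eps either over-merges near points or fragments far ones, while isolated precipitation returns fail the density test and fall out as noise.

```python
import math

def adaptive_eps(point, base_eps=0.4, scale=0.01):
    # Illustrative heuristic (not from the paper): LiDAR point density
    # falls with range, so grow the neighborhood radius with distance
    # from the sensor at the origin.
    return base_eps + scale * math.hypot(point[0], point[1])

def dbscan_adaptive(points, min_pts=3):
    """DBSCAN with a per-point eps; isolated returns (e.g. falling
    snow) fail the min_pts test and are labeled -1 (noise)."""
    labels = [None] * len(points)   # None = unvisited, -1 = noise
    next_cluster = 0

    def neighbors(i):
        eps = adaptive_eps(points[i])
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise
            continue
        labels[i] = next_cluster
        seeds = list(nbrs)
        while seeds:                # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = next_cluster   # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = next_cluster
            if len(neighbors(j)) >= min_pts:   # j is a core point
                seeds.extend(neighbors(j))
        next_cluster += 1
    return labels

# Two compact obstacles and one stray "snowflake" return:
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
       (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1),
       (20, 20)]
labels = dbscan_adaptive(pts)
# → [0, 0, 0, 0, 1, 1, 1, 1, -1]
```

Note that scikit-learn's `DBSCAN` only accepts a single global `eps`, which is why a range-adaptive variant like the one the paper proposes requires a custom clustering pass.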
Related papers
- UFO: Uncertainty-aware LiDAR-image Fusion for Off-road Semantic Terrain Map Estimation [2.048226951354646]
This paper presents a learning-based fusion method for generating dense terrain classification maps in BEV.
Our approach enhances the accuracy of semantic maps generated from an RGB image and a single-sweep LiDAR scan.
arXiv Detail & Related papers (2024-03-05T04:20:03Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Channel Boosting Feature Ensemble for Radar-based Object Detection [6.810856082577402]
Radar-based object detection is explored as a counterpart sensor modality that can be deployed and used in adverse weather conditions.
The proposed method's efficacy is extensively evaluated using the COCO evaluation metric.
arXiv Detail & Related papers (2021-01-10T12:20:58Z)
- Learning to Localize Using a LiDAR Intensity Map [87.04427452634445]
We propose a real-time, calibration-agnostic and effective localization system for self-driving cars.
Our method learns to embed the online LiDAR sweeps and intensity map into a joint deep embedding space.
Our full system can operate in real-time at 15Hz while achieving centimeter level accuracy across different LiDAR sensors and environments.
arXiv Detail & Related papers (2020-12-20T11:56:23Z)
- Robust Autonomous Landing of UAV in Non-Cooperative Environments based on Dynamic Time Camera-LiDAR Fusion [11.407952542799526]
We construct a UAV system equipped with low-cost LiDAR and binocular cameras to realize autonomous landing in non-cooperative environments.
Taking advantage of the non-repetitive scanning and high FOV coverage characteristics of LiDAR, we come up with a dynamic time depth completion algorithm.
Based on the depth map, high-level terrain information such as slope, roughness, and the size of the safe area is derived.
arXiv Detail & Related papers (2020-11-27T14:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.