LEAD: LiDAR Extender for Autonomous Driving
- URL: http://arxiv.org/abs/2102.07989v1
- Date: Tue, 16 Feb 2021 07:35:34 GMT
- Title: LEAD: LiDAR Extender for Autonomous Driving
- Authors: Jianing Zhang, Wei Li, Honggang Gou, Lu Fang, Ruigang Yang
- Abstract summary: MEMS LiDAR is emerging as an irresistible trend thanks to its lower cost, greater robustness, and compliance with mass-production standards.
However, it suffers from a small field of view (FoV), which slows its adoption.
We propose LEAD, i.e., LiDAR Extender for Autonomous Driving, which extends MEMS LiDAR with a coupled image in terms of both FoV and range.
- Score: 48.233424487002445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D perception using sensors that meet vehicle industrial standards is a rigid
demand in autonomous driving. MEMS LiDAR is emerging as an irresistible trend thanks to
its lower cost, greater robustness, and compliance with mass-production standards.
However, it suffers from a small field of view (FoV), which slows its adoption. In this
paper, we propose LEAD, i.e., LiDAR Extender for Autonomous Driving, which extends
MEMS LiDAR with a coupled image in terms of both FoV and range. We propose a
multi-stage propagation strategy based on depth distributions and an uncertainty map,
which shows effective propagation ability. Moreover, our depth outpainting/propagation
network follows a teacher-student training scheme that transfers depth estimation
ability to the depth completion network without passing on any scale error. To validate
the quality of the LiDAR extension, we use a high-precision laser scanner to generate a
ground-truth dataset. Quantitative and qualitative evaluations show that our scheme
outperforms state-of-the-art methods by a large margin. We believe the proposed LEAD,
along with the dataset, will benefit the community's depth research.
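The abstract gives only a high-level description of the teacher-student transfer and the uncertainty-guided propagation; the sketch below is a hypothetical reading of that idea, not the authors' code. It assumes a frozen monocular depth `teacher`, a completion/outpainting `student` that also predicts a log-variance uncertainty map, and median alignment to the sparse LiDAR so that no scale error from the teacher is propagated.

```python
# Hypothetical sketch of a teacher-student depth outpainting step (not the authors' code).
# Assumptions: `teacher` is a frozen monocular depth estimator, `student` is a depth
# completion/outpainting network that also predicts a per-pixel log-variance map.
import torch
import torch.nn.functional as F

def median_align(teacher_depth, sparse_lidar, valid):
    """Rescale the teacher's relative depth to the LiDAR's metric scale so that
    no absolute-scale error from the teacher is passed to the student."""
    scale = sparse_lidar[valid].median() / teacher_depth[valid].median().clamp(min=1e-6)
    return teacher_depth * scale

def distillation_step(student, teacher, image, sparse_lidar):
    valid = sparse_lidar > 0                      # pixels inside the MEMS LiDAR FoV
    with torch.no_grad():
        pseudo = median_align(teacher(image), sparse_lidar, valid)

    pred, log_var = student(image, sparse_lidar)  # depth + predicted uncertainty
    # Uncertainty-weighted L1: confident pixels are penalised more, uncertain ones less.
    resid = (pred - pseudo).abs()
    loss_outpaint = (torch.exp(-log_var) * resid + log_var).mean()
    # Ground the prediction on the real (in-FoV) LiDAR returns.
    loss_lidar = F.l1_loss(pred[valid], sparse_lidar[valid])
    return loss_outpaint + loss_lidar
```

Presumably, the multi-stage propagation applies such supervision repeatedly as depth is pushed outward from the MEMS FoV, with the uncertainty map gating each stage; the abstract does not spell this out.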
Related papers
- DurLAR: A High-fidelity 128-channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-modal Autonomous Driving Applications [21.066770408683265]
DurLAR is a high-fidelity 128-channel 3D LiDAR dataset with panoramic ambient (near infrared) and reflectivity imagery.
Our evaluation shows the benefit of jointly using supervised and self-supervised loss terms, enabled by the superior ground-truth resolution.
arXiv Detail & Related papers (2024-06-14T14:24:05Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
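As a rough illustration of the laser-beam manipulation idea, the snippet below interleaves inclination-angle bins from two scans to form a mixed point cloud. The bin count, angle range, and mixing rule are arbitrary assumptions for illustration, not the LaserMix++ implementation.

```python
# Illustrative beam-level mixing of two LiDAR scans (simplified; not LaserMix++ code).
import numpy as np

def mix_scans(points_a, points_b, num_bins=32):
    """Split each (N, 3) xyz point cloud into inclination-angle bins and take
    alternating bins from scan A and scan B to form a mixed scan."""
    def bin_index(points, lo=np.deg2rad(-25.0), hi=np.deg2rad(3.0)):
        # Assumed vertical FoV of roughly [-25, 3] degrees, typical for automotive LiDAR.
        inclination = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
        return np.clip(((inclination - lo) / (hi - lo) * num_bins).astype(int), 0, num_bins - 1)

    bins_a, bins_b = bin_index(points_a), bin_index(points_b)
    keep_a = points_a[bins_a % 2 == 0]   # even bins from scan A
    keep_b = points_b[bins_b % 2 == 1]   # odd bins from scan B
    return np.concatenate([keep_a, keep_b], axis=0)

# Example: mixed = mix_scans(np.random.rand(1000, 3), np.random.rand(1000, 3))
```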
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- UltraLiDAR: Learning Compact Representations for LiDAR Completion and Generation [51.443788294845845]
We present UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR generation, and LiDAR manipulation.
We show that by aligning the representation of a sparse point cloud to that of a dense point cloud, we can densify the sparse point clouds.
By learning a prior over the discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving.
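The codebook alignment described above is in the spirit of vector quantization; the toy snippet below shows a generic nearest-codeword lookup (sizes and dimensions invented), not the UltraLiDAR model.

```python
# Generic nearest-codeword quantization, as a stand-in for the discrete codebook
# idea described above (dimensions are arbitrary; this is not the UltraLiDAR code).
import torch

def quantize(features, codebook):
    """features: (N, D) continuous encodings; codebook: (K, D) learned codewords.
    Returns the index of the nearest codeword and the quantized features."""
    dists = torch.cdist(features, codebook)   # (N, K) pairwise distances
    idx = dists.argmin(dim=1)                 # nearest codeword per feature
    return idx, codebook[idx]

# Densification in this framing: encode a sparse scan, snap its features onto the
# codebook learned from dense scans, then decode the resulting indices.
idx, quantized = quantize(torch.randn(16, 64), torch.randn(512, 64))
```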
arXiv Detail & Related papers (2023-11-02T17:57:03Z)
- LiDAR View Synthesis for Robust Vehicle Navigation Without Expert Labels [50.40632021583213]
We propose synthesizing additional LiDAR point clouds from novel viewpoints without physically driving in dangerous positions.
We train a deep learning model, which takes a LiDAR scan as input and predicts the future trajectory as output.
A waypoint controller is then applied to this predicted trajectory to determine the throttle and steering labels of the ego-vehicle.
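The summary leaves the waypoint controller unspecified; a minimal pure-pursuit-style stand-in is sketched below, with the wheelbase, lookahead distance, and gains chosen arbitrarily rather than taken from the paper.

```python
# Minimal waypoint-following controller (assumed pure-pursuit-style stand-in,
# not the controller used in the paper). The trajectory is in the ego frame:
# x forward, y left, so the ego-vehicle sits at the origin heading along +x.
import numpy as np

def waypoint_controller(trajectory, wheelbase=2.5, lookahead=5.0, target_speed=5.0, speed=0.0):
    """trajectory: (N, 2) predicted future waypoints in the ego frame.
    Returns (throttle, steering) labels."""
    dists = np.linalg.norm(trajectory, axis=1)
    target = trajectory[np.argmin(np.abs(dists - lookahead))]  # waypoint nearest the lookahead distance
    # Pure pursuit: curvature of the arc passing through the target waypoint.
    curvature = 2.0 * target[1] / max(np.dot(target, target), 1e-6)
    steering = float(np.arctan(wheelbase * curvature))         # bicycle-model steering angle
    # Simple proportional throttle toward a fixed target speed.
    throttle = float(np.clip(0.5 * (target_speed - speed), 0.0, 1.0))
    return throttle, steering

# Example: waypoint_controller(np.array([[1.0, 0.0], [3.0, 0.2], [6.0, 0.8]]))
```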
arXiv Detail & Related papers (2023-08-02T20:46:43Z)
- LiDAR Meta Depth Completion [47.99004789132264]
We propose a meta depth completion network that uses data patterns to learn a task network to solve a given depth completion task effectively.
While using a single model, our method yields significantly better results than a non-adaptive baseline trained on different LiDAR patterns.
These advantages allow flexible deployment of a single depth completion model on different sensors.
arXiv Detail & Related papers (2023-07-24T13:05:36Z)
- Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR [22.202192422883122]
We propose a novel two-stage network to advance self-supervised monocular dense depth learning.
Our model fuses monocular image features and sparse LiDAR features to predict initial depth maps.
Our model outperforms the state-of-the-art sparse-LiDAR-based method (Pseudo-LiDAR++) by more than 68% on the downstream task of monocular 3D object detection.
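As a toy illustration of fusing image features with sparse LiDAR, the module below concatenates RGB, a sparse depth channel, and a validity mask and regresses dense depth; the architecture is invented for illustration and is not the paper's two-stage network.

```python
# Toy early-fusion depth head: concatenate RGB with a sparse LiDAR depth channel
# and a validity mask, then regress dense depth. Purely illustrative.
import torch
import torch.nn as nn

class FusionDepthHead(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, channels, 3, padding=1), nn.ReLU(inplace=True),   # 3 RGB + depth + mask
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Softplus(),           # positive depth
        )

    def forward(self, image, sparse_depth):
        mask = (sparse_depth > 0).float()
        return self.net(torch.cat([image, sparse_depth, mask], dim=1))

# Example: FusionDepthHead()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```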
arXiv Detail & Related papers (2021-09-20T15:28:36Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
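Estimating uncertainty from a single forward pass is commonly done with deep evidential regression, where the network head outputs Normal-Inverse-Gamma parameters and the aleatoric and epistemic variances follow in closed form. The snippet below shows only that generic readout; it is not the paper's Hybrid Evidential Fusion module.

```python
# Generic deep evidential regression readout: aleatoric and epistemic uncertainty
# from a single forward pass, given Normal-Inverse-Gamma parameters predicted by
# the network head. A sketch of the general technique, not the paper's module.
import torch
import torch.nn.functional as F

def evidential_uncertainty(raw):
    """raw: (N, 4) unconstrained head outputs -> (mean, aleatoric, epistemic)."""
    gamma, log_v, log_alpha, log_beta = raw.unbind(dim=-1)
    v = F.softplus(log_v)                    # v > 0
    alpha = F.softplus(log_alpha) + 1.0      # alpha > 1 so the variances are finite
    beta = F.softplus(log_beta)              # beta > 0
    aleatoric = beta / (alpha - 1.0)         # expected data noise
    epistemic = beta / (v * (alpha - 1.0))   # model (evidential) uncertainty
    return gamma, aleatoric, epistemic

# Predictions with high epistemic uncertainty can be down-weighted when fusing
# the learned controller's output with a fallback policy.
mean, aleatoric, epistemic = evidential_uncertainty(torch.randn(8, 4))
```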
arXiv Detail & Related papers (2021-05-20T17:52:37Z)