LiDAR View Synthesis for Robust Vehicle Navigation Without Expert Labels
- URL: http://arxiv.org/abs/2308.01424v2
- Date: Sat, 5 Aug 2023 19:25:14 GMT
- Title: LiDAR View Synthesis for Robust Vehicle Navigation Without Expert Labels
- Authors: Jonathan Schmidt, Qadeer Khan, Daniel Cremers
- Abstract summary: We propose synthesizing additional LiDAR point clouds from novel viewpoints without physically driving at dangerous positions.
We train a deep learning model, which takes a LiDAR scan as input and predicts the future trajectory as output.
A waypoint controller is then applied to this predicted trajectory to determine the throttle and steering labels of the ego-vehicle.
- Score: 50.40632021583213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models for self-driving cars require a diverse training dataset
to manage critical driving scenarios on public roads safely. This includes
having data from divergent trajectories, such as the oncoming traffic lane or
sidewalks. Such data would be too dangerous to collect in the real world. Data
augmentation approaches have been proposed to tackle this issue using RGB
images. However, solutions based on LiDAR sensors are scarce. Therefore, we
propose synthesizing additional LiDAR point clouds from novel viewpoints
without physically driving at dangerous positions. The LiDAR view synthesis is
done using mesh reconstruction and ray casting. We train a deep learning model,
which takes a LiDAR scan as input and predicts the future trajectory as output.
A waypoint controller is then applied to this predicted trajectory to determine
the throttle and steering labels of the ego-vehicle. Our method requires
expert driving labels neither for the original nor for the synthesized LiDAR
sequences. Instead, we infer labels from LiDAR odometry. We demonstrate the
effectiveness of our approach in a comprehensive online evaluation and with a
comparison to concurrent work. Our results show the importance of synthesizing
additional LiDAR point clouds, particularly in terms of model robustness.
Project page: https://jonathsch.github.io/lidar-synthesis/
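The abstract describes a waypoint controller that converts the predicted trajectory into throttle and steering labels. The paper's actual controller is not specified here; a minimal pure-pursuit-style sketch, with the lookahead distance, gains, sign convention (positive steering = left), and function name all chosen for illustration, could look like:

```python
import math

def waypoint_controller(waypoints, speed, lookahead=4.0, target_speed=5.0, kp=0.5):
    """Map a predicted trajectory to (steering, throttle) labels.

    waypoints: list of (x, y) points in the ego frame (x forward, y left), metres.
    speed: current ego speed in m/s.
    Returns steering in [-1, 1] and throttle in [0, 1].
    """
    # Pick the first waypoint at least `lookahead` metres ahead;
    # fall back to the last waypoint if none is far enough.
    target = waypoints[-1]
    for wx, wy in waypoints:
        if math.hypot(wx, wy) >= lookahead:
            target = (wx, wy)
            break

    # Pure-pursuit-style steering: proportional to the heading error
    # towards the target waypoint, normalized and clamped to [-1, 1].
    heading_error = math.atan2(target[1], target[0])
    steering = max(-1.0, min(1.0, 2.0 * heading_error / math.pi))

    # Simple proportional speed control, clamped to [0, 1].
    throttle = max(0.0, min(1.0, kp * (target_speed - speed)))
    return steering, throttle
```

For a straight-ahead trajectory the controller returns zero steering, and throttle drops to zero once the ego-vehicle reaches the target speed; in the paper's setting these outputs would serve as the inferred labels rather than commands from a human expert.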
Related papers
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- TeFF: Tracking-enhanced Forgetting-free Few-shot 3D LiDAR Semantic Segmentation [10.628870775939161]
This paper addresses the limitations of current few-shot semantic segmentation by exploiting the temporal continuity of LiDAR data.
We employ a tracking model to generate pseudo-ground-truths from a sequence of LiDAR frames, enhancing the dataset's ability to learn on novel classes.
We incorporate LoRA, a technique that reduces the number of trainable parameters, thereby preserving the model's performance on base classes while improving its adaptability to novel classes.
arXiv Detail & Related papers (2024-08-28T09:18:36Z)
- UltraLiDAR: Learning Compact Representations for LiDAR Completion and Generation [51.443788294845845]
We present UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR generation, and LiDAR manipulation.
We show that by aligning the representation of a sparse point cloud to that of a dense point cloud, we can densify the sparse point clouds.
By learning a prior over the discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving.
arXiv Detail & Related papers (2023-11-02T17:57:03Z)
- Advancements in 3D Lane Detection Using LiDAR Point Clouds: From Data Collection to Model Development [10.78971892551972]
LiSV-3DLane is a large-scale 3D lane dataset that comprises 20k frames of surround-view LiDAR point clouds with enriched semantic annotation.
We propose a novel LiDAR-based 3D lane detection model, LiLaDet, incorporating the spatial geometry learning of the LiDAR point cloud into Bird's Eye View (BEV) based lane identification.
arXiv Detail & Related papers (2023-09-24T09:58:49Z)
- NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields [20.887421720818892]
We present NeRF-LiDAR, a novel LiDAR simulation method that leverages real-world information to generate realistic LiDAR point clouds.
We verify the effectiveness of our NeRF-LiDAR by training different 3D segmentation models on the generated LiDAR point clouds.
arXiv Detail & Related papers (2023-04-28T12:41:28Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework [20.842800465250775]
Current methods rely on point clouds from the LiDAR sensor as queries to leverage the feature from the image space.
We propose a surprisingly simple yet novel fusion framework, dubbed BEVFusion, whose camera stream does not depend on the input of LiDAR data.
We empirically show that our framework surpasses the state-of-the-art methods under the normal training settings.
arXiv Detail & Related papers (2022-05-27T06:58:30Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.