UltraLiDAR: Learning Compact Representations for LiDAR Completion and
Generation
- URL: http://arxiv.org/abs/2311.01448v1
- Date: Thu, 2 Nov 2023 17:57:03 GMT
- Title: UltraLiDAR: Learning Compact Representations for LiDAR Completion and
Generation
- Authors: Yuwen Xiong, Wei-Chiu Ma, Jingkang Wang, Raquel Urtasun
- Abstract summary: We present UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR generation, and LiDAR manipulation.
We show that by aligning the representation of a sparse point cloud to that of a dense point cloud, we can densify the sparse point clouds.
By learning a prior over the discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving.
- Score: 51.443788294845845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR provides accurate geometric measurements of the 3D world.
Unfortunately, dense LiDARs are very expensive and the point clouds captured by
low-beam LiDAR are often sparse. To address these issues, we present
UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR
generation, and LiDAR manipulation. The crux of UltraLiDAR is a compact,
discrete representation that encodes the point cloud's geometric structure, is
robust to noise, and is easy to manipulate. We show that by aligning the
representation of a sparse point cloud to that of a dense point cloud, we can
densify the sparse point clouds as if they were captured by a real high-density
LiDAR, drastically reducing the cost. Furthermore, by learning a prior over the
discrete codebook, we can generate diverse, realistic LiDAR point clouds for
self-driving. We evaluate the effectiveness of UltraLiDAR on sparse-to-dense
LiDAR completion and LiDAR generation. Experiments show that densifying
real-world point clouds with our approach can significantly improve the
performance of downstream perception systems. Compared to prior art on LiDAR
generation, our approach generates much more realistic point clouds. In A/B
tests, human participants preferred our results over those of previous methods
more than 98.5% of the time.
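The compact discrete representation at the heart of the abstract rests on a learned codebook: continuous features are snapped to their nearest code vector and handled as discrete indices thereafter. As a rough illustration only (not the paper's actual architecture), the quantization and lookup steps can be sketched as follows; the shapes and helper names are assumptions for the example:

```python
import numpy as np

def quantize(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Return, for each feature row (N, D), the index of the nearest
    code vector in the codebook (K, D)."""
    # Pairwise squared distances between every feature and every code.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map discrete indices back to their code vectors."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 illustrative codes of dimension 4
# Features near codes 2, 5, and 2, slightly perturbed.
features = codebook[[2, 5, 2]] + 0.01 * rng.normal(size=(3, 4))
idx = quantize(features, codebook)   # nearest-code indices
recon = decode(idx, codebook)        # discrete-to-continuous lookup
```

Because the representation is just a grid of indices, it is easy to manipulate (e.g. swap codes) and a prior over index sequences can be learned for generation, which is the mechanism the abstract alludes to.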
Related papers
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- Towards Realistic Scene Generation with LiDAR Diffusion Models [15.487070964070165]
Diffusion models (DMs) excel in photo-realistic image synthesis, but their adaptation to LiDAR scene generation poses a substantial hurdle.
We propose LiDAR Diffusion Models (LiDMs) to generate LiDAR-realistic scenes from a latent space tailored to capture the realism of LiDAR scenes.
Specifically, we introduce curve-wise compression to simulate real-world LiDAR patterns, point-wise coordinate supervision to learn scene geometry, and patch-wise encoding for a full 3D object context.
arXiv Detail & Related papers (2024-03-31T22:18:56Z)
- RangeLDM: Fast Realistic LiDAR Point Cloud Generation [12.868053836790194]
We introduce RangeLDM, a novel approach for rapidly generating high-quality range-view LiDAR point clouds.
We achieve this by correcting range-view data distribution for accurate projection from point clouds to range images via Hough voting.
We instruct the model to preserve 3D structural fidelity by devising a range-guided discriminator.
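The range-view representation that RangeLDM generates in is a standard spherical projection of the point cloud onto an image grid. As a generic sketch of that projection (with assumed resolution and field-of-view parameters, not the paper's Hough-voting distribution correction):

```python
import numpy as np

def to_range_image(points, h=32, w=1024, fov_up=10.0, fov_down=-30.0):
    """Project an (N, 3) point cloud into an (h, w) range image.

    Rows index laser inclination, columns index azimuth; each cell stores
    the range of the point that falls into it (0 where no point lands).
    FOV bounds are illustrative 32-beam values in degrees.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    yaw = np.arctan2(y, x)                                   # azimuth, [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    col = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    row = ((fu - pitch) / (fu - fd) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[row, col] = r
    return img

scan = np.array([[1.0, 0.0, 0.0], [0.0, 5.0, 0.5]])
img = to_range_image(scan)
```

The inverse mapping (range image back to points) follows by re-deriving yaw and pitch from the pixel coordinates, which is what makes range-view generation convertible back to a point cloud.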
arXiv Detail & Related papers (2024-03-15T08:19:57Z)
- LiDAR View Synthesis for Robust Vehicle Navigation Without Expert Labels [50.40632021583213]
We propose synthesizing additional LiDAR point clouds from novel viewpoints without physically driving at dangerous positions.
We train a deep learning model, which takes a LiDAR scan as input and predicts the future trajectory as output.
A waypoint controller is then applied to this predicted trajectory to determine the throttle and steering labels of the ego-vehicle.
arXiv Detail & Related papers (2023-08-02T20:46:43Z)
- Detecting the Anomalies in LiDAR Pointcloud [8.827947115933942]
Adverse weather conditions may cause a LiDAR to produce point clouds with abnormal patterns such as scattered noise points and uncommon intensity values.
We propose a novel approach to detect whether a LiDAR is generating anomalous point clouds by analyzing point cloud characteristics.
arXiv Detail & Related papers (2023-07-31T22:53:42Z)
- NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields [20.887421720818892]
We present NeRF-LiDAR, a novel LiDAR simulation method that leverages real-world information to generate realistic LiDAR point clouds.
We verify the effectiveness of our NeRF-LiDAR by training different 3D segmentation models on the generated LiDAR point clouds.
arXiv Detail & Related papers (2023-04-28T12:41:28Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Learning to Generate Realistic LiDAR Point Clouds [15.976199637414886]
LiDARGen is a novel, effective, and controllable generative model that produces realistic LiDAR point cloud sensory readings.
We validate our method on the challenging KITTI-360 and NuScenes datasets.
arXiv Detail & Related papers (2022-09-08T17:58:04Z)
- LaserMix for Semi-Supervised LiDAR Semantic Segmentation [56.73779694312137]
We study the underexplored semi-supervised learning (SSL) in LiDAR segmentation.
Our core idea is to leverage the strong spatial cues of LiDAR point clouds to better exploit unlabeled data.
We propose LaserMix to mix laser beams from different LiDAR scans, and then encourage the model to make consistent and confident predictions.
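LaserMix's core idea of mixing laser beams from different scans can be sketched as partitioning each scan's points by inclination angle and interleaving alternate bands from two scans. This is an illustrative simplification with assumed band count and field-of-view values, not the paper's full training pipeline:

```python
import numpy as np

def laser_mix(scan_a, scan_b, n_bands=4, fov_up=10.0, fov_down=-30.0):
    """Interleave inclination bands of two (N, 3) scans: even-indexed
    bands come from scan_a, odd-indexed bands from scan_b."""
    def pitch(p):
        r = np.linalg.norm(p, axis=1)
        return np.arcsin(np.clip(p[:, 2] / np.maximum(r, 1e-8), -1.0, 1.0))

    # Evenly spaced inclination band edges over the assumed vertical FOV.
    edges = np.linspace(np.radians(fov_down), np.radians(fov_up), n_bands + 1)
    band_a = np.digitize(pitch(scan_a), edges) - 1
    band_b = np.digitize(pitch(scan_b), edges) - 1
    # Keep scan_a points in even bands and scan_b points in odd bands.
    return np.concatenate([scan_a[band_a % 2 == 0], scan_b[band_b % 2 == 1]])

# Two single-point scans at pitches of -25 deg (band 0) and -15 deg (band 1).
a = np.array([[np.cos(np.radians(-25)), 0.0, np.sin(np.radians(-25))]])
b = np.array([[np.cos(np.radians(-15)), 0.0, np.sin(np.radians(-15))]])
mixed = laser_mix(a, b)
```

A semi-supervised model is then encouraged to predict consistently on the mixed scan and on its two sources, which is the consistency objective the summary describes.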
arXiv Detail & Related papers (2022-06-30T18:00:04Z)
- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection [96.63947479020631]
In many real-world applications, the LiDAR point clouds captured by mass-produced robots and vehicles have fewer beams than those in large-scale public datasets.
We propose LiDAR Distillation to bridge the domain gap induced by different LiDAR beam counts for 3D object detection.
arXiv Detail & Related papers (2022-03-28T17:59:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.