LiDARDraft: Generating LiDAR Point Cloud from Versatile Inputs
- URL: http://arxiv.org/abs/2512.20105v1
- Date: Tue, 23 Dec 2025 07:03:31 GMT
- Title: LiDARDraft: Generating LiDAR Point Cloud from Versatile Inputs
- Authors: Haiyun Wei, Fan Lu, Yunwei Zhu, Zehan Zheng, Weiyi Xue, Lin Shao, Xudong Zhang, Ya Wu, Rong Fu, Guang Chen,
- Abstract summary: We propose LiDARDraft to generate realistic and diverse LiDAR point clouds. The 3D layout can be trivially generated from various user inputs. We employ a rangemap-based ControlNet to guide LiDAR point cloud generation.
- Score: 16.062937048950946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating realistic and diverse LiDAR point clouds is crucial for autonomous driving simulation. Although previous methods achieve LiDAR point cloud generation from user inputs, they struggle to attain high-quality results while enabling versatile controllability, due to the imbalance between the complex distribution of LiDAR point clouds and the simple control signals. To address this limitation, we propose LiDARDraft, which utilizes a 3D layout to build a bridge between versatile conditional signals and LiDAR point clouds. The 3D layout can be trivially generated from various user inputs such as textual descriptions and images. Specifically, we represent text, images, and point clouds as unified 3D layouts, which are further transformed into semantic and depth control signals. Then, we employ a rangemap-based ControlNet to guide LiDAR point cloud generation. This pixel-level alignment approach demonstrates excellent performance in controllable LiDAR point cloud generation, enabling "simulation from scratch": self-driving environments can be created from arbitrary textual descriptions, images, and sketches.
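The "rangemap-based" control described above rests on the standard spherical projection that turns a LiDAR point cloud into a 2D range image, so that image-space models such as ControlNet can operate on it pixel-by-pixel. Below is a minimal sketch of that projection; the function name and the 64-beam sensor parameters (`fov_up_deg`, `fov_down_deg`, image size) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pointcloud_to_range_image(points, h=64, w=1024,
                              fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) range image
    via spherical projection. Sensor parameters mimic a typical
    64-beam spinning LiDAR and are assumptions for illustration."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)           # range per point
    yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))   # elevation angle

    fov_up = np.deg2rad(fov_up_deg)
    fov = fov_up - np.deg2rad(fov_down_deg)

    # Map angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w            # column from azimuth
    v = (fov_up - pitch) / fov * h               # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    # Keep the nearest return per pixel: write far points first so
    # closer points overwrite them.
    order = np.argsort(-r)
    img = np.zeros((h, w), dtype=np.float32)     # 0 = no return
    img[v[order], u[order]] = r[order]
    return img
```

A depth control signal in the sense of the abstract would be exactly such an image rendered from the 3D layout; a semantic channel can be rasterized the same way by writing class labels instead of ranges.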
Related papers
- PVNet: Point-Voxel Interaction LiDAR Scene Upsampling Via Diffusion Models [57.02789948234898]
We propose PVNet, a diffusion-model-based point-voxel interaction framework that performs LiDAR point cloud upsampling without dense supervision. Specifically, we employ a sparse point cloud as the guiding condition and the synthesized point clouds derived from its nearby frames as the input. In addition, we propose a point-voxel interaction module to integrate features from both points and voxels, which efficiently improves the environmental perception capability of each upsampled point.
arXiv Detail & Related papers (2025-08-23T14:55:03Z) - TexLiDAR: Automated Text Understanding for Panoramic LiDAR Data [0.6144680854063939]
Efforts to connect LiDAR data with text, such as LidarCLIP, have primarily focused on embedding 3D point clouds into CLIP text-image space. We propose an alternative approach to connect LiDAR data with text by leveraging 2D imagery generated by the OS1 sensor instead of 3D point clouds.
arXiv Detail & Related papers (2025-02-05T19:41:06Z) - LiDAR-GS:Real-time LiDAR Re-Simulation using Gaussian Splatting [53.58528891081709]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z) - Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes. We propose the learnable transformation alignment to bridge the domain gap between image and point cloud data. We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z) - TULIP: Transformer for Upsampling of LiDAR Point Clouds [32.77657816997911]
LiDAR upsampling is a challenging task for the perception systems of robots and autonomous vehicles.
Recent works propose to solve this problem by converting LiDAR data from 3D Euclidean space into an image super-resolution problem in 2D image space.
We propose TULIP, a new method to reconstruct high-resolution LiDAR point clouds from low-resolution LiDAR input.
arXiv Detail & Related papers (2023-12-11T10:43:28Z) - UltraLiDAR: Learning Compact Representations for LiDAR Completion and Generation [51.443788294845845]
We present UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR generation, and LiDAR manipulation.
We show that by aligning the representation of a sparse point cloud to that of a dense point cloud, we can densify the sparse point clouds.
By learning a prior over the discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving.
arXiv Detail & Related papers (2023-11-02T17:57:03Z) - Advancements in 3D Lane Detection Using LiDAR Point Clouds: From Data Collection to Model Development [10.78971892551972]
LiSV-3DLane is a large-scale 3D lane dataset that comprises 20k frames of surround-view LiDAR point clouds with enriched semantic annotation.
We propose a novel LiDAR-based 3D lane detection model, LiLaDet, incorporating the spatial geometry learning of the LiDAR point cloud into Bird's Eye View (BEV) based lane identification.
arXiv Detail & Related papers (2023-09-24T09:58:49Z) - NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields [20.887421720818892]
We present NeRF-LiDAR, a novel LiDAR simulation method that leverages real-world information to generate realistic LiDAR point clouds.
We verify the effectiveness of our NeRF-LiDAR by training different 3D segmentation models on the generated LiDAR point clouds.
arXiv Detail & Related papers (2023-04-28T12:41:28Z) - PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds [100.03877236181546]
PolarMix is a point cloud augmentation technique that is simple and generic.
It can work as plug-and-play for various 3D deep architectures and also performs well for unsupervised domain adaptation.
arXiv Detail & Related papers (2022-07-30T13:52:19Z) - Lateral Ego-Vehicle Control without Supervision using Point Clouds [50.40632021583213]
Existing vision-based supervised approaches to lateral vehicle control are capable of directly mapping RGB images to the appropriate steering commands.
This paper proposes a framework for training a more robust and scalable model for lateral vehicle control.
Online experiments show that the performance of our method is superior to that of the supervised model.
arXiv Detail & Related papers (2022-03-20T21:57:32Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.