La La LiDAR: Large-Scale Layout Generation from LiDAR Data
- URL: http://arxiv.org/abs/2508.03691v1
- Date: Tue, 05 Aug 2025 17:59:55 GMT
- Title: La La LiDAR: Large-Scale Layout Generation from LiDAR Data
- Authors: Youquan Liu, Lingdong Kong, Weidong Yang, Xin Li, Ao Liang, Runnan Chen, Ben Fei, Tongliang Liu
- Abstract summary: Controllable generation of realistic LiDAR scenes is crucial for applications such as autonomous driving and robotics. We propose the Large-scale Layout-guided LiDAR generation model ("La La LiDAR"), a novel layout-guided generative framework. La La LiDAR achieves state-of-the-art performance in both LiDAR generation and downstream perception tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controllable generation of realistic LiDAR scenes is crucial for applications such as autonomous driving and robotics. While recent diffusion-based models achieve high-fidelity LiDAR generation, they lack explicit control over foreground objects and spatial relationships, limiting their usefulness for scenario simulation and safety validation. To address these limitations, we propose the Large-scale Layout-guided LiDAR generation model ("La La LiDAR"), a novel layout-guided generative framework that introduces semantic-enhanced scene graph diffusion with relation-aware contextual conditioning for structured LiDAR layout generation, followed by foreground-aware control injection for complete scene generation. This enables customizable control over object placement while ensuring spatial and semantic consistency. To support our structured LiDAR generation, we introduce Waymo-SG and nuScenes-SG, two large-scale LiDAR scene graph datasets, along with new evaluation metrics for layout synthesis. Extensive experiments demonstrate that La La LiDAR achieves state-of-the-art performance in both LiDAR generation and downstream perception tasks, establishing a new benchmark for controllable 3D scene generation.
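To make the two-stage design described in the abstract concrete, below is a minimal Python sketch of the data flow: a semantic scene graph (objects plus relation triples) is diffused into a structured layout of object boxes, and the layout then conditions full scene generation. This is an illustrative sketch under stated assumptions, not the paper's implementation: the class names, the 7-parameter box encoding, and the toy denoising loop are hypothetical stand-ins for the relation-aware scene graph diffusion and foreground-aware control injection the paper proposes.

```python
# Hypothetical sketch of a layout-guided LiDAR generation pipeline:
# (1) diffuse a structured layout from a semantic scene graph, then
# (2) generate the full scene conditioned on that layout.
# All names and parameterizations here are illustrative, not from the paper.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SceneGraphNode:
    category: str    # e.g. "car", "pedestrian"
    box: np.ndarray  # assumed (x, y, z, l, w, h, yaw) box parameters

@dataclass
class SceneGraph:
    nodes: list[SceneGraphNode] = field(default_factory=list)
    # Relation triples (subject_idx, predicate, object_idx), e.g. (0, "behind", 1)
    edges: list[tuple[int, str, int]] = field(default_factory=list)

def diffuse_layout(graph: SceneGraph, steps: int = 50) -> np.ndarray:
    """Stage 1 (stand-in): denoise object box parameters conditioned on the
    scene graph. A real model would use relation-aware conditioning; this toy
    loop just refines noise toward the graph's box annotations."""
    boxes = np.random.randn(len(graph.nodes), 7)   # start from Gaussian noise
    target = np.stack([n.box for n in graph.nodes])
    for t in range(steps):                         # toy iterative refinement
        boxes = boxes + (target - boxes) / (steps - t)
    return boxes

def generate_scene(layout: np.ndarray, n_points: int = 1000) -> np.ndarray:
    """Stage 2 (stand-in): foreground-aware generation. A real model would run
    a LiDAR diffusion backbone with control injected from the layout; here we
    scatter points around each box center only to show the interface."""
    centers = layout[:, :3]
    pts = [c + 0.5 * np.random.randn(n_points // len(centers), 3) for c in centers]
    return np.concatenate(pts, axis=0)

# Usage: two cars with a spatial relation -> layout -> point cloud.
g = SceneGraph(
    nodes=[SceneGraphNode("car", np.array([5.0, 0, 0, 4.5, 1.8, 1.5, 0.0])),
           SceneGraphNode("car", np.array([15.0, 3, 0, 4.5, 1.8, 1.5, 0.1]))],
    edges=[(0, "in_front_of", 1)],
)
layout = diffuse_layout(g)
cloud = generate_scene(layout)
print(layout.shape, cloud.shape)  # (2, 7) (1000, 3)
```

The point of the sketch is the interface rather than the models: a scene graph goes in, a layout of object boxes comes out, and the layout alone conditions scene synthesis, which is what gives the user customizable control over object placement.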
Related papers
- LiDARCrafter: Dynamic 4D World Modeling from LiDAR Sequences
LiDARCrafter is a unified framework for 4D LiDAR generation and editing. It achieves state-of-the-art performance in fidelity, controllability, and temporal consistency across all levels. The code and benchmark are released to the community.
arXiv Detail & Related papers (2025-08-05T17:59:56Z)
- SPIRAL: Semantic-Aware Progressive LiDAR Scene Generation
Spiral is a novel range-view LiDAR diffusion model that simultaneously generates depth, reflectance images, and semantic maps. Experiments on the SemanticKITTI and nuScenes datasets demonstrate that Spiral achieves state-of-the-art performance with the smallest parameter size.
arXiv Detail & Related papers (2025-05-28T17:55:35Z)
- OLiDM: Object-aware LiDAR Diffusion Models for Autonomous Driving
We introduce OLiDM, a novel framework capable of generating high-fidelity LiDAR data at both the object and the scene levels. OLiDM consists of two pivotal components: the Object-Scene Progressive Generation (OPG) module and the Object Semantic Alignment (OSA) module. OPG adapts to user-specific prompts to generate desired foreground objects, which are subsequently employed as conditions in scene generation. OSA aims to rectify the misalignment between foreground objects and background scenes, enhancing the overall quality of the generated objects.
arXiv Detail & Related papers (2024-12-23T02:43:29Z)
- LiDAR-EDIT: LiDAR Data Generation by Editing the Object Layouts in Real-World Scenes
We present LiDAR-EDIT, a novel paradigm for generating synthetic LiDAR data for autonomous driving. Our framework edits real-world LiDAR scans by introducing new object layouts while preserving the realism of the background environment. Compared to end-to-end frameworks that generate LiDAR point clouds from scratch, LiDAR-EDIT offers users full control over the object layout.
arXiv Detail & Related papers (2024-11-30T21:39:51Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning. Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations. This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-form Layout-to-Image Generation
We propose a novel Spatial-Semantic Map Guided (SSMG) diffusion model that adopts the feature map, derived from the layout, as guidance.
SSMG achieves superior generation quality with sufficient spatial and semantic controllability compared to previous works.
We also propose the Relation-Sensitive Attention (RSA) and Location-Sensitive Attention (LSA) mechanisms.
arXiv Detail & Related papers (2023-08-20T04:09:12Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)