SimULi: Real-Time LiDAR and Camera Simulation with Unscented Transforms
- URL: http://arxiv.org/abs/2510.12901v2
- Date: Thu, 16 Oct 2025 04:37:37 GMT
- Title: SimULi: Real-Time LiDAR and Camera Simulation with Unscented Transforms
- Authors: Haithem Turki, Qi Wu, Xin Kang, Janick Martinez Esturo, Shengyu Huang, Ruilong Li, Zan Gojcic, Riccardo de Lutio,
- Abstract summary: SimULi is the first method capable of rendering arbitrary camera models and LiDAR data in real-time. Our method extends 3DGUT, which natively supports complex camera models, with LiDAR support via an automated tiling strategy. SimULi renders 10-20x faster than ray tracing approaches and 1.5-10x faster than prior rasterization-based work.
- Score: 31.853574316696037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rigorous testing of autonomous robots, such as self-driving vehicles, is essential to ensure their safety in real-world deployments. This requires building high-fidelity simulators to test scenarios beyond those that can be safely or exhaustively collected in the real-world. Existing neural rendering methods based on NeRF and 3DGS hold promise but suffer from low rendering speeds or can only render pinhole camera models, hindering their suitability to applications that commonly require high-distortion lenses and LiDAR data. Multi-sensor simulation poses additional challenges as existing methods handle cross-sensor inconsistencies by favoring the quality of one modality at the expense of others. To overcome these limitations, we propose SimULi, the first method capable of rendering arbitrary camera models and LiDAR data in real-time. Our method extends 3DGUT, which natively supports complex camera models, with LiDAR support, via an automated tiling strategy for arbitrary spinning LiDAR models and ray-based culling. To address cross-sensor inconsistencies, we design a factorized 3D Gaussian representation and anchoring strategy that reduces mean camera and depth error by up to 40% compared to existing methods. SimULi renders 10-20x faster than ray tracing approaches and 1.5-10x faster than prior rasterization-based work (and handles a wider range of camera models). When evaluated on two widely benchmarked autonomous driving datasets, SimULi matches or exceeds the fidelity of existing state-of-the-art methods across numerous camera and LiDAR metrics.
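The core tool in the paper's title, the unscented transform, propagates a Gaussian through a nonlinear function (here, a distorted camera projection) by pushing a small set of deterministically chosen sigma points through it. The sketch below is an illustrative, generic implementation of the standard unscented transform, not the paper's method; the `project` function is a hypothetical fisheye-style model chosen only to show the idea of a non-pinhole sensor.

```python
import numpy as np

def unscented_transform(mu, Sigma, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mu, Sigma) through a nonlinear map f
    using the standard unscented transform."""
    n = mu.shape[0]
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean, plus/minus the columns of a scaled
    # square root of the covariance.
    L = np.linalg.cholesky((n + lam) * Sigma)
    pts = np.vstack([mu, mu + L.T, mu - L.T])          # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # Push each sigma point through the nonlinear sensor model,
    # then re-estimate the mean and covariance in image space.
    ys = np.array([f(p) for p in pts])
    mu_y = wm @ ys
    d = ys - mu_y
    Sigma_y = (wc[:, None] * d).T @ d
    return mu_y, Sigma_y

# Hypothetical fisheye-style projection: nonlinear in the 3D point,
# so a linearization-free method like the unscented transform applies.
def project(p):
    x, y, z = p
    r = np.hypot(x, y)
    theta = np.arctan2(r, z)                 # angle from the optical axis
    scale = theta / r if r > 1e-9 else 1.0 / z
    return np.array([x * scale, y * scale])

# A 3D Gaussian (e.g. one splat primitive) projected into image space.
mu = np.array([0.5, -0.2, 4.0])
Sigma = np.diag([0.01, 0.01, 0.04])
mu_2d, Sigma_2d = unscented_transform(mu, Sigma, project)
```

For a small covariance, the transformed mean `mu_2d` stays close to `project(mu)`, while `Sigma_2d` captures how the distortion stretches the uncertainty, which is what lets a splatting pipeline rasterize Gaussians under non-pinhole models without per-pixel ray tracing.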
Related papers
- DiffusionHarmonizer: Bridging Neural Reconstruction and Photorealistic Simulation with Online Diffusion Enhancer [62.18680935878919]
We introduce DiffusionHarmonizer, an online generative enhancement framework that transforms renderings into temporally consistent outputs. At its core is a single-step, temporally-conditioned enhancer capable of running in online simulators on a single GPU.
arXiv Detail & Related papers (2026-02-27T15:35:30Z) - SaLF: Sparse Local Fields for Multi-Sensor Rendering in Real-Time [35.22650337397612]
We present Sparse Local Fields (SaLF), a novel representation that supports both volumetric rasterization and ray tracing. SaLF has fast training (30 min) and rendering (50+ FPS for camera and 600+ FPS for LiDAR), uses adaptive pruning and densification to easily handle large scenes, and supports non-pinhole cameras and spinning LiDARs.
arXiv Detail & Related papers (2025-07-24T18:01:22Z) - Lightweight LiDAR-Camera 3D Dynamic Object Detection and Multi-Class Trajectory Prediction [7.415417400188903]
Service mobile robots are often required to avoid dynamic objects while performing their tasks. We present a lightweight multi-modal framework for 3D object detection and trajectory prediction. Our system integrates LiDAR and camera inputs to achieve real-time perception of pedestrians, vehicles, and riders in 3D space.
arXiv Detail & Related papers (2025-04-18T11:59:34Z) - JiSAM: Alleviate Labeling Burden and Corner Case Problems in Autonomous Driving via Minimal Real-World Data [49.2298619289506]
We propose a plug-and-play method called JiSAM, shorthand for Jittering augmentation, domain-aware backbone, and memory-based Sectorized AlignMent. In extensive experiments on the well-known AD dataset NuScenes, we demonstrate that, with a SOTA 3D object detector, JiSAM can exploit simulation data plus labels for only 2.5% of the available real data to achieve performance comparable to models trained on all real data.
arXiv Detail & Related papers (2025-03-11T13:35:39Z) - SplatAD: Real-Time Lidar and Camera Rendering with 3D Gaussian Splatting for Autonomous Driving [6.221538885604869]
Existing neural radiance field (NeRF) methods for sensor-realistic rendering of camera and lidar data suffer from low rendering speeds. We propose SplatAD, the first 3DGS-based method for realistic, real-time rendering of dynamic scenes for both camera and lidar data.
arXiv Detail & Related papers (2024-11-25T16:18:22Z) - Digital twins to alleviate the need for real field data in vision-based vehicle speed detection systems [0.9899633398596672]
Accurate vision-based speed estimation is more cost-effective than traditional methods based on radar or LiDAR.
Deep learning approaches are very limited in this context due to the lack of available data.
In this work, we propose the use of digital-twins using CARLA simulator to generate a large dataset representative of a specific real-world camera.
arXiv Detail & Related papers (2024-07-11T10:41:20Z) - Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z) - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z) - Recovering and Simulating Pedestrians in the Wild [81.38135735146015]
We propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car driving around.
We incorporate the reconstructed pedestrian assets bank in a realistic 3D simulation system.
We show that the simulated LiDAR data can be used to significantly reduce the amount of real-world data required for visual perception tasks.
arXiv Detail & Related papers (2020-11-16T17:16:32Z) - LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will contribute to the research community to provide a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.