Nighttime Autonomous Driving Scene Reconstruction with Physically-Based Gaussian Splatting
- URL: http://arxiv.org/abs/2602.13549v1
- Date: Sat, 14 Feb 2026 01:49:23 GMT
- Title: Nighttime Autonomous Driving Scene Reconstruction with Physically-Based Gaussian Splatting
- Authors: Tae-Kyeong Kim, Xingxin Chen, Guile Wu, Chengjie Huang, Dongfeng Bai, Bingbing Liu,
- Abstract summary: We present a novel approach that integrates physically based rendering into 3DGS to enhance nighttime scene reconstruction for autonomous driving. Our approach improves reconstruction quality for outdoor nighttime driving scenes while maintaining real-time rendering.
- Score: 19.61590675458685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper focuses on scene reconstruction under nighttime conditions in autonomous driving simulation. Recent methods based on Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS) have achieved photorealistic modeling in autonomous driving scene reconstruction, but they primarily target normal-light conditions. Low-light driving scenes are more challenging to model due to their complex lighting and appearance conditions, which often cause performance degradation in existing methods. To address this problem, this work presents a novel approach that integrates physically based rendering into 3DGS to enhance nighttime scene reconstruction for autonomous driving. Specifically, our approach integrates physically based rendering into composite scene Gaussian representations and jointly optimizes Bidirectional Reflectance Distribution Function (BRDF) based material properties. We explicitly model diffuse components through a global illumination module and specular components with anisotropic spherical Gaussians. As a result, our approach improves reconstruction quality for outdoor nighttime driving scenes while maintaining real-time rendering. Extensive experiments across diverse nighttime scenarios on two real-world autonomous driving datasets, nuScenes and Waymo, demonstrate that our approach outperforms state-of-the-art methods both quantitatively and qualitatively.
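The abstract's diffuse/specular decomposition can be illustrated with a small sketch. This is not the paper's implementation: it assumes the standard anisotropic spherical Gaussian (ASG) lobe of Xu et al. (2013), and the function names, the irradiance stand-in for the global illumination module, and the shading composition are all illustrative assumptions.

```python
import numpy as np

def asg(v, z, x, y, lam, mu, c):
    """Anisotropic spherical Gaussian (ASG) lobe, after Xu et al. 2013.

    v: unit query direction; (z, x, y): orthonormal lobe frame with z the
    lobe axis; lam, mu: bandwidths along x and y; c: lobe amplitude.
    """
    smooth = np.maximum(v @ z, 0.0)  # clamp to the lobe's hemisphere
    return c * smooth * np.exp(-lam * (v @ x) ** 2 - mu * (v @ y) ** 2)

def shade(albedo, irradiance, view_dir, asg_lobes):
    """Toy per-Gaussian shading: Lambertian diffuse plus a sum of ASG speculars."""
    # In the paper's setup the irradiance would come from the global
    # illumination module; here it is just a scalar placeholder.
    diffuse = albedo * irradiance
    specular = sum(asg(view_dir, *lobe) for lobe in asg_lobes)
    return diffuse + specular
```

The anisotropy of the lobes (separate bandwidths along the two tangent axes) is what presumably lets them fit the elongated highlights typical of nighttime scenes, such as reflections of streetlights on wet asphalt.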
Related papers
- DrivingGaussian++: Towards Realistic Reconstruction and Editable Simulation for Surrounding Dynamic Driving Scenes [49.23098808629567]
DrivingGaussian++ is an efficient framework for realistic reconstruction and controllable editing of autonomous driving scenes. It supports training-free controllable editing for dynamic driving scenes, including texture modification, weather simulation, and object manipulation. Our method can automatically generate dynamic object motion trajectories and enhance their realism during the optimization process.
arXiv Detail & Related papers (2025-08-28T16:22:54Z)
- ArmGS: Composite Gaussian Appearance Refinement for Modeling Dynamic Urban Environments [22.371417505012566]
This work focuses on modeling dynamic urban environments for autonomous driving simulation. We propose a new approach named ArmGS that exploits composite driving Gaussian splatting with multi-granularity appearance refinement. This not only models global scene appearance variations between frames and camera viewpoints, but also models local fine-grained photorealistic changes of background and objects.
arXiv Detail & Related papers (2025-07-05T03:54:40Z)
- Hybrid Rendering for Multimodal Autonomous Driving: Merging Neural and Physics-Based Simulation [1.0027737736304287]
We introduce a hybrid approach that combines the strengths of neural reconstruction with physics-based rendering. Our approach significantly enhances novel view synthesis quality, especially for road surfaces and lane markings. We achieve this by training a customized NeRF model on the original images with depth regularization derived from a noisy LiDAR point cloud.
arXiv Detail & Related papers (2025-03-12T15:18:50Z)
- NPSim: Nighttime Photorealistic Simulation From Daytime Images With Monocular Inverse Rendering and Ray Tracing [0.5439020425819]
A powerful autonomous driving system should be capable of handling images under all conditions, including nighttime. We introduce a novel approach named NPSim, which enables the simulation of realistic nighttime images. NPSim comprises two key components: mesh reconstruction and relighting.
arXiv Detail & Related papers (2025-02-15T08:24:19Z)
- BEAM: Bridging Physically-based Rendering and Gaussian Modeling for Relightable Volumetric Video [58.97416204208624]
We present BEAM, a novel pipeline that bridges 4D Gaussian representations with physically-based rendering (PBR) to produce high-quality, relightable videos. By offering realistic, lifelike visualizations under diverse lighting conditions, BEAM opens new possibilities for interactive entertainment, storytelling, and creative visualization.
arXiv Detail & Related papers (2025-02-12T10:58:09Z)
- EnvGS: Modeling View-Dependent Appearance with Environment Gaussian [78.74634059559891]
EnvGS is a novel approach that employs a set of Gaussian primitives as an explicit 3D representation for capturing reflections of environments. To efficiently render these environment Gaussian primitives, we developed a ray-tracing-based renderer that leverages the GPU's RT core for fast rendering. Results from multiple real-world and synthetic datasets demonstrate that our method produces significantly more detailed reflections.
arXiv Detail & Related papers (2024-12-19T18:59:57Z)
- AutoSplat: Constrained Gaussian Splatting for Autonomous Driving Scene Reconstruction [17.600027937450342]
AutoSplat is a framework employing Gaussian splatting to achieve highly realistic reconstructions of autonomous driving scenes.
Our method enables multi-view consistent simulation of challenging scenarios including lane changes.
arXiv Detail & Related papers (2024-07-02T18:36:50Z)
- LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes [73.65115834242866]
Photorealistic simulation plays a crucial role in applications such as autonomous driving.
However, reconstruction quality suffers on street scenes due to collinear camera motions and sparser samplings at higher speeds.
We propose several insights that allow a better utilization of Lidar data to improve NeRF quality on street scenes.
arXiv Detail & Related papers (2024-05-01T23:07:12Z)
- Street Gaussians: Modeling Dynamic Urban Scenes with Gaussian Splatting [32.59889755381453]
Recent methods extend NeRF by incorporating tracked vehicle poses to animate vehicles, enabling photo-realistic views of dynamic urban street scenes.
We introduce Street Gaussians, a new explicit scene representation that tackles these limitations.
The proposed method consistently outperforms state-of-the-art methods across all datasets.
arXiv Detail & Related papers (2024-01-02T18:59:55Z)
- DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes [57.12439406121721]
We present DrivingGaussian, an efficient and effective framework for surrounding dynamic autonomous driving scenes.
For complex scenes with moving objects, we first sequentially and progressively model the static background of the entire scene.
We then leverage a composite dynamic Gaussian graph to handle multiple moving objects.
We further use a LiDAR prior for Gaussian Splatting to reconstruct scenes with greater details and maintain panoramic consistency.
arXiv Detail & Related papers (2023-12-13T06:30:51Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
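Several entries above (e.g. the hybrid neural/physics-based rendering work) regularize reconstruction with depth derived from a noisy LiDAR point cloud. As a minimal illustration of what such a term can look like, not taken from any of the listed papers, here is a robust depth-regularization loss; the function name, the Huber penalty, and the default threshold are all assumptions.

```python
import numpy as np

def depth_regularization_loss(rendered_depth, lidar_depth, valid_mask, huber_delta=0.5):
    """Penalize rendered depths that disagree with (noisy) LiDAR depths.

    A Huber penalty (quadratic near zero, linear in the tails) keeps LiDAR
    noise and outliers from dominating the loss; only pixels with a LiDAR
    return (valid_mask) contribute.
    """
    err = np.abs(rendered_depth - lidar_depth)[valid_mask]
    quad = np.minimum(err, huber_delta)
    loss = 0.5 * quad**2 + huber_delta * (err - quad)
    return loss.mean() if loss.size else 0.0
```

The robust penalty matters precisely because the supervising point cloud is noisy: an L2 term would pull the reconstruction toward LiDAR outliers instead of merely anchoring its scale.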
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.