R3GW: Relightable 3D Gaussians for Outdoor Scenes in the Wild
- URL: http://arxiv.org/abs/2603.02801v1
- Date: Tue, 03 Mar 2026 09:40:16 GMT
- Title: R3GW: Relightable 3D Gaussians for Outdoor Scenes in the Wild
- Authors: Margherita Lea Corona, Wieland Morgenstern, Peter Eisert, Anna Hilsmann
- Abstract summary: 3D Gaussian Splatting (3DGS) has established itself as a leading technique for 3D reconstruction and novel view synthesis of static scenes. We present R3GW, a novel method that learns a relightable 3DGS representation of an outdoor scene captured in the wild.
- Score: 23.68389428693905
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D Gaussian Splatting (3DGS) has established itself as a leading technique for 3D reconstruction and novel view synthesis of static scenes, achieving outstanding rendering quality and fast training. However, the method does not explicitly model the scene illumination, making it unsuitable for relighting tasks. Furthermore, 3DGS struggles to reconstruct scenes captured in the wild by unconstrained photo collections featuring changing lighting conditions. In this paper, we present R3GW, a novel method that learns a relightable 3DGS representation of an outdoor scene captured in the wild. Our approach separates the scene into a relightable foreground and a non-reflective background (the sky), using two distinct sets of Gaussians. R3GW models view-dependent lighting effects in the foreground reflections by combining Physically Based Rendering with the 3DGS scene representation in a varying illumination setting. We evaluate our method quantitatively and qualitatively on the NeRF-OSR dataset, achieving state-of-the-art performance and enhanced support for physically-based relighting of unconstrained scenes. Our method synthesizes photorealistic novel views under arbitrary illumination conditions. Additionally, our representation of the sky mitigates depth reconstruction artifacts, improving rendering quality at the sky-foreground boundary.
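The abstract describes a foreground set of relightable Gaussians shaded with a physically based model and composited over a separate, non-reflective sky. The sketch below is only an illustration of that idea under strong simplifying assumptions: the shading is reduced to a Lambertian term under one directional sun plus a constant ambient sky term (not the paper's full PBR model), and all names and attribute layouts (per-Gaussian albedo and normals, a single sky colour) are hypothetical, not the authors' implementation.

```python
import numpy as np

def shade_foreground(albedo, normals, sun_dir, sun_rgb, ambient_rgb):
    """Toy per-Gaussian shading: Lambertian sun term plus constant ambient.
    This stands in for the paper's full PBR shading under varying illumination."""
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)[:, None]  # (N, 1) cosine term
    return albedo * (ambient_rgb + n_dot_l * sun_rgb)          # (N, 3) relit colours

def composite(fg_rgb, fg_alpha, sky_rgb):
    """Alpha-composite a relit foreground sample over the non-reflective sky colour."""
    return fg_alpha * fg_rgb + (1.0 - fg_alpha) * sky_rgb

# Toy usage: four foreground Gaussians relit under an assumed sun direction.
rng = np.random.default_rng(0)
albedo  = rng.uniform(0.2, 0.8, size=(4, 3))           # hypothetical base colours
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))   # hypothetical surface normals
sun_dir = np.array([0.0, 0.5, 0.866])                   # assumed illumination for this capture
fg_rgb  = shade_foreground(albedo, normals, sun_dir,
                           sun_rgb=np.array([1.0, 0.95, 0.85]),
                           ambient_rgb=np.array([0.2, 0.25, 0.35]))
pixel   = composite(fg_rgb[0], fg_alpha=0.9, sky_rgb=np.array([0.4, 0.6, 0.9]))
print(pixel)
```

Swapping the sun direction or sky colour in this sketch changes the rendered colours without touching geometry, which is the behaviour the separation into lighting and scene representation is meant to enable.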
Related papers
- Lumos3D: A Single-Forward Framework for Low-Light 3D Scene Restoration [10.184395697154448]
We introduce Lumos3D, a pose-free framework for 3D low-light scene restoration. Built upon a geometry-grounded backbone, Lumos3D reconstructs a normal-light 3D Gaussian representation. Experiments on real-world datasets demonstrate that Lumos3D achieves high-fidelity low-light 3D scene restoration.
arXiv Detail & Related papers (2025-11-12T23:42:03Z)
- ComGS: Efficient 3D Object-Scene Composition via Surface Octahedral Probes [46.83857963152283]
Gaussian Splatting (GS) enables immersive rendering, but realistic 3D object-scene composition remains challenging. We propose ComGS, a novel 3D object-scene composition framework. Our method achieves high-quality, real-time rendering at around 28 FPS, produces visually harmonious results with vivid shadows, and requires only 36 seconds for editing.
arXiv Detail & Related papers (2025-10-09T03:10:41Z)
- Visibility-Uncertainty-guided 3D Gaussian Inpainting via Scene Conceptional Learning [63.94919846010485]
3D Gaussian inpainting (3DGI) is challenging because it must effectively leverage complementary visual and semantic cues from multiple input views. We propose a method that measures the visibility uncertainties of 3D points across different input views and uses them to guide 3DGI. We build a novel 3DGI framework, VISTA, by integrating VISibility-uncerTainty-guided 3DGI with scene conceptuAl learning.
arXiv Detail & Related papers (2025-04-23T06:21:11Z)
- D3DR: Lighting-Aware Object Insertion in Gaussian Splatting [48.80431740983095]
We propose a method, dubbed D3DR, for inserting a 3DGS-parametrized object into 3DGS scenes. We leverage advances in diffusion models, which, trained on real-world data, implicitly understand correct scene lighting. We demonstrate the method's effectiveness by comparing it to existing approaches.
arXiv Detail & Related papers (2025-03-09T19:48:00Z)
- LumiGauss: Relightable Gaussian Splatting in the Wild [15.11759492990967]
We introduce LumiGauss, a technique that tackles 3D reconstruction of scenes and environmental lighting through 2D Gaussian Splatting. Our approach yields high-quality scene reconstructions and enables realistic lighting synthesis under novel environment maps. We validate our method on the NeRF-OSR dataset, demonstrating superior performance over baseline methods.
arXiv Detail & Related papers (2024-08-06T23:41:57Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes [84.66946637534089]
PhotoScene is a framework that takes input image(s) of a scene and builds a photorealistic digital twin with high-quality materials and similar lighting.
We model scene materials using procedural material graphs; such graphs represent photorealistic and resolution-independent materials.
We evaluate our technique on objects and layout reconstructions from ScanNet, SUN RGB-D and stock photographs, and demonstrate that our method reconstructs high-quality, fully relightable 3D scenes.
arXiv Detail & Related papers (2022-07-02T06:52:44Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that explicitly formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)