ComGS: Efficient 3D Object-Scene Composition via Surface Octahedral Probes
- URL: http://arxiv.org/abs/2510.07729v1
- Date: Thu, 09 Oct 2025 03:10:41 GMT
- Title: ComGS: Efficient 3D Object-Scene Composition via Surface Octahedral Probes
- Authors: Jian Gao, Mengqi Yuan, Yifei Zeng, Chang Zeng, Zhihao Li, Zhenyu Chen, Weichao Qiu, Xiao-Xiao Long, Hao Zhu, Xun Cao, Yao Yao
- Abstract summary: Gaussian Splatting (GS) enables immersive rendering, but realistic 3D object-scene composition remains challenging. We propose ComGS, a novel 3D object-scene composition framework. Our method achieves high-quality, real-time rendering at around 28 FPS, produces visually harmonious results with vivid shadows, and requires only 36 seconds for editing.
- Score: 46.83857963152283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian Splatting (GS) enables immersive rendering, but realistic 3D object-scene composition remains challenging. Baked appearance and shadow information in GS radiance fields cause inconsistencies when combining objects and scenes. Addressing this requires relightable object reconstruction and scene lighting estimation. For relightable object reconstruction, existing Gaussian-based inverse rendering methods often rely on ray tracing, leading to low efficiency. We introduce Surface Octahedral Probes (SOPs), which store lighting and occlusion information and allow efficient 3D querying via interpolation, avoiding expensive ray tracing. SOPs provide at least a 2x speedup in reconstruction and enable real-time shadow computation in Gaussian scenes. For lighting estimation, existing Gaussian-based inverse rendering methods struggle to model intricate light transport and often fail in complex scenes, while learning-based methods predict lighting from a single image and are viewpoint-sensitive. We observe that 3D object-scene composition primarily concerns the object's appearance and nearby shadows. Thus, we simplify the challenging task of full scene lighting estimation by focusing on the environment lighting at the object's placement. Specifically, we capture a 360 degrees reconstructed radiance field of the scene at the location and fine-tune a diffusion model to complete the lighting. Building on these advances, we propose ComGS, a novel 3D object-scene composition framework. Our method achieves high-quality, real-time rendering at around 28 FPS, produces visually harmonious results with vivid shadows, and requires only 36 seconds for editing. Code and dataset are available at https://nju-3dv.github.io/projects/ComGS/.
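The efficiency claim rests on the idea that a Surface Octahedral Probe turns a per-ray lighting/occlusion query into an interpolated texture lookup. The sketch below illustrates the general idea with the standard octahedral direction mapping and a bilinear lookup into a single probe; the probe contents, resolution, surface placement, and the exact interpolation used by ComGS are assumptions here, not the released implementation.

```python
# Minimal sketch (NumPy) of an octahedral probe lookup: lighting/occlusion is
# stored on a small 2D grid per probe and queried by direction via interpolation
# instead of ray tracing. Layout and resolution are illustrative assumptions.
import numpy as np

def sign_not_zero(v):
    """Return +1 or -1 per component, never 0 (needed by the octahedral fold)."""
    return np.where(v >= 0.0, 1.0, -1.0)

def dir_to_oct_uv(d):
    """Map a unit direction to [0, 1]^2 octahedral coordinates."""
    d = d / np.linalg.norm(d)
    p = d[:2] / (abs(d[0]) + abs(d[1]) + abs(d[2]))
    if d[2] < 0.0:  # fold the lower hemisphere onto the outer triangles
        p = (1.0 - np.abs(p[::-1])) * sign_not_zero(p)
    return 0.5 * p + 0.5

def sample_probe(probe, d):
    """Bilinearly sample an (H, W, C) octahedral probe texture in direction d."""
    h, w, _ = probe.shape
    u, v = dir_to_oct_uv(d)
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * probe[y0, x0] + fx * probe[y0, x1]
    bot = (1 - fx) * probe[y1, x0] + fx * probe[y1, x1]
    return (1 - fy) * top + fy * bot

probe = np.random.rand(8, 8, 3).astype(np.float32)    # toy 8x8 RGB probe
print(sample_probe(probe, np.array([0.0, 0.0, 1.0])))  # query along +z
```

In the full framework, a query at a 3D shading point would additionally blend the textures of the nearest surface probes, so both position and direction are resolved by interpolation rather than by tracing rays per shading point.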
Related papers
- R3GW: Relightable 3D Gaussians for Outdoor Scenes in the Wild [23.68389428693905]
3D Gaussian Splatting (3DGS) has established itself as a leading technique for 3D reconstruction and novel view synthesis of static scenes. We present R3GW, a novel method that learns a relightable 3DGS representation of an outdoor scene captured in the wild.
arXiv Detail & Related papers (2026-03-03T09:40:16Z)
- EAG-PT: Emission-Aware Gaussians and Path Tracing for Indoor Scene Reconstruction and Editing [63.64931392651423]
We propose Emission-Aware Gaussians and Path Tracing (EAG-PT), aiming for physically based light transport with a unified 2D Gaussian representation. Our design rests on three core components: (1) using 2D Gaussians as a unified scene representation and transport-friendly geometry proxy that avoids a reconstructed mesh, (2) explicitly separating emissive and non-emissive components during reconstruction to support later scene editing, and (3) decoupling reconstruction from final rendering by using efficient single-bounce optimization and high-quality multi-bounce path tracing after scene editing.
arXiv Detail & Related papers (2026-01-30T15:16:37Z)
- 3D Gaussian Inverse Rendering with Approximated Global Illumination [15.899514468603627]
We present a novel approach that enables efficient global illumination for 3D Gaussian Splatting through screen-space ray tracing. Our key insight is that a substantial amount of indirect light can be traced back to surfaces visible within the current view frustum. In experiments, we show that the screen-space approximation we utilize allows for indirect illumination and supports real-time rendering and editing.
arXiv Detail & Related papers (2025-04-02T05:02:25Z)
- D3DR: Lighting-Aware Object Insertion in Gaussian Splatting [48.80431740983095]
We propose a method, dubbed D3DR, for inserting a 3DGS-parametrized object into 3DGS scenes. We leverage advances in diffusion models, which, trained on real-world data, implicitly understand correct scene lighting. We demonstrate the method's effectiveness by comparing it to existing approaches.
arXiv Detail & Related papers (2025-03-09T19:48:00Z)
- GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering [6.820642721852439]
We present GI-GS, a novel inverse rendering framework that leverages 3D Gaussian Splatting (3DGS) and deferred shading. In our framework, we first render a G-buffer to capture the detailed geometry and material properties of the scene. With the G-buffer and previous rendering results, indirect lighting can be calculated through lightweight path tracing.
arXiv Detail & Related papers (2024-10-03T15:58:18Z)
- LumiGauss: Relightable Gaussian Splatting in the Wild [15.11759492990967]
We introduce LumiGauss, a technique that tackles 3D reconstruction of scenes and environmental lighting through 2D Gaussian Splatting. Our approach yields high-quality scene reconstructions and enables realistic lighting synthesis under novel environment maps. We validate our method on the NeRF-OSR dataset, demonstrating superior performance over baseline methods.
arXiv Detail & Related papers (2024-08-06T23:41:57Z)
- DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading [50.331929164207324]
We introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view and editing tasks.
arXiv Detail & Related papers (2024-04-15T01:58:54Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)