Relightable 3D Gaussian: Real-time Point Cloud Relighting with BRDF
Decomposition and Ray Tracing
- URL: http://arxiv.org/abs/2311.16043v1
- Date: Mon, 27 Nov 2023 18:07:58 GMT
- Title: Relightable 3D Gaussian: Real-time Point Cloud Relighting with BRDF
Decomposition and Ray Tracing
- Authors: Jian Gao, Chun Gu, Youtian Lin, Hao Zhu, Xun Cao, Li Zhang, Yao Yao
- Abstract summary: We present a differentiable point-based rendering framework for material and lighting decomposition from multi-view images.
This framework enables editing, ray-tracing, and real-time relighting of the 3D point cloud.
Our framework showcases the potential to revolutionize the mesh-based graphics pipeline.
- Score: 18.132915517047632
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel differentiable point-based rendering framework for
material and lighting decomposition from multi-view images, enabling editing,
ray-tracing, and real-time relighting of the 3D point cloud. Specifically, a 3D
scene is represented as a set of relightable 3D Gaussian points, where each
point is additionally associated with a normal direction, BRDF parameters, and
incident lights from different directions. To achieve robust lighting
estimation, we further divide incident lights of each point into global and
local components, as well as view-dependent visibilities. The 3D scene is
optimized through the 3D Gaussian Splatting technique while BRDF and lighting
are decomposed by physically-based differentiable rendering. Moreover, we
introduce an innovative point-based ray-tracing approach based on the bounding
volume hierarchy for efficient visibility baking, enabling real-time rendering
and relighting of 3D Gaussian points with accurate shadow effects. Extensive
experiments demonstrate improved BRDF estimation and novel view rendering
results compared to state-of-the-art material estimation approaches. Our
framework showcases the potential to revolutionize the mesh-based graphics
pipeline with a relightable, traceable, and editable rendering pipeline solely
based on point cloud. Project
page: https://nju-3dv.github.io/projects/Relightable3DGaussian/.
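The abstract describes each relightable Gaussian point as carrying a normal, BRDF parameters, and incident light split into a visibility-gated global (environment) term plus a local term. The Python sketch below is a minimal, illustrative rendition of that shading decomposition under strong simplifying assumptions: a Lambertian-only BRDF, uniformly sampled hemisphere directions, and toy helpers (`sample_hemisphere`, `shade_point`) that are not part of the paper's code. The actual method optimizes the points with the 3D Gaussian Splatting pipeline, uses a physically-based BRDF, and obtains visibility from BVH-based point ray tracing.

```python
# Minimal illustrative sketch (NumPy) of shading one "relightable 3D Gaussian" point.
# Assumptions for illustration only: Lambertian diffuse BRDF, Monte-Carlo hemisphere
# sampling, and a crude half-space visibility test standing in for BVH visibility baking.
import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere(normal, n_samples=64):
    """Uniformly sample incident directions on the hemisphere around `normal`."""
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    below = v @ normal < 0.0          # flip samples that fall below the surface
    v[below] *= -1.0
    return v

def shade_point(albedo, normal, global_light, local_light, visibility, dirs):
    """
    Monte-Carlo estimate of outgoing radiance at one point:
      L_o ≈ (2*pi / N) * sum_i  f_r * L_i(w_i) * max(n . w_i, 0)
    where the incident light L_i is a visibility-gated global (environment) term
    plus an unoccluded local term, as in the decomposition described above.
    """
    cos_term = np.clip(dirs @ normal, 0.0, None)             # (N,)
    L_i = visibility[:, None] * global_light + local_light   # (N, 3)
    f_r = albedo / np.pi                                      # Lambertian BRDF
    return (2.0 * np.pi / len(dirs)) * np.sum(f_r * L_i * cos_term[:, None], axis=0)

# Toy usage: one point under a grey environment, half of the hemisphere occluded.
normal = np.array([0.0, 0.0, 1.0])
dirs = sample_hemisphere(normal)
albedo = np.array([0.8, 0.5, 0.3])
global_light = np.full((len(dirs), 3), 0.5)        # environment radiance per direction
local_light = np.zeros((len(dirs), 3))             # near-field/local term (zero here)
visibility = (dirs[:, 0] > 0.0).astype(float)      # placeholder for baked visibility
print(shade_point(albedo, normal, global_light, local_light, visibility, dirs))
```

In the full method, the per-direction visibility would come from the point-based BVH ray tracing described in the abstract rather than the toy half-space test used here.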
Related papers
- RTR-GS: 3D Gaussian Splatting for Inverse Rendering with Radiance Transfer and Reflection [16.81533668816093]
RTR-GS is a novel inverse rendering framework capable of robustly rendering objects with arbitrary reflectance properties, decomposing BRDF and lighting, and delivering credible relighting results.
We show that our method enhances novel view synthesis, normal estimation, decomposition, and relighting while maintaining an efficient training and inference process.
arXiv Detail & Related papers (2025-07-10T13:13:08Z)
- SVG-IR: Spatially-Varying Gaussian Splatting for Inverse Rendering [23.104424058660744]
3D Gaussian Splatting (3DGS) has demonstrated impressive capabilities for novel view synthesis (NVS) tasks.
We present a novel framework called Spatially-Varying Gaussian Inverse Rendering (SVG-IR) aimed at enhancing both NVS and relighting quality.
The proposed SVG-IR framework significantly improves rendering quality, outperforming state-of-the-art NeRF-based methods by 2.5 dB in peak signal-to-noise ratio (PSNR) and surpassing existing Gaussian-based techniques by 3.5 dB in relighting tasks.
arXiv Detail & Related papers (2025-04-09T12:11:58Z)
- BEAM: Bridging Physically-based Rendering and Gaussian Modeling for Relightable Volumetric Video [44.50599475213118]
Volumetric video enables immersive experiences by capturing dynamic 3D scenes, with diverse applications in virtual reality, education, and telepresence.
We present BEAM, a novel pipeline that bridges 4D Gaussian representations with physically-based rendering (PBR) to produce high-quality, relightable videos.
arXiv Detail & Related papers (2025-02-12T10:58:09Z)
- GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering [83.69136534797686]
We present GUS-IR, a novel framework designed to address the inverse rendering problem for complicated scenes featuring rough and glossy surfaces.
This paper starts by analyzing and comparing two prominent shading techniques commonly used for inverse rendering: forward shading and deferred shading.
We propose a unified shading solution that combines the advantages of both techniques for better decomposition.
arXiv Detail & Related papers (2024-11-12T01:51:05Z)
- GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering [6.820642721852439]
We present GI-GS, a novel inverse rendering framework that leverages 3D Gaussian Splatting (3DGS) and deferred shading.
In our framework, we first render a G-buffer to capture the detailed geometry and material properties of the scene.
With the G-buffer and previous rendering results, the indirect lighting can be calculated through a lightweight path tracing.
arXiv Detail & Related papers (2024-10-03T15:58:18Z)
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
- DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading [50.331929164207324]
We introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading.
Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view and editing tasks.
arXiv Detail & Related papers (2024-04-15T01:58:54Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian from its shortest axis (the eigenvector associated with the smallest eigenvalue), with a directional masking scheme forcing accurate normal estimation without external supervision.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.