TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians
- URL: http://arxiv.org/abs/2504.18768v2
- Date: Thu, 01 May 2025 07:57:07 GMT
- Title: TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians
- Authors: Letian Huang, Dongwei Ye, Jialin Dan, Chengzhi Tao, Huiwen Liu, Kun Zhou, Bo Ren, Yuanqi Li, Yanwen Guo, Jie Guo
- Abstract summary: We propose TransparentGS, a fast inverse rendering pipeline for transparent objects based on 3D-GS. We leverage Gaussian light field probes (GaussProbe) to encode both ambient light and nearby contents in a unified framework. Experiments demonstrate the speed and accuracy of our approach in recovering transparent objects from complex environments.
- Score: 35.444290579981455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of neural and Gaussian-based radiance field methods has led to considerable advancements in novel view synthesis and 3D object reconstruction. Nonetheless, specular reflection and refraction continue to pose significant challenges due to the instability and incorrect overfitting of radiance fields to high-frequency light variations. Currently, even 3D Gaussian Splatting (3D-GS), as a powerful and efficient tool, falls short in recovering transparent objects with nearby contents due to the existence of apparent secondary ray effects. To address this issue, we propose TransparentGS, a fast inverse rendering pipeline for transparent objects based on 3D-GS. The main contributions are three-fold. Firstly, an efficient representation of transparent objects, transparent Gaussian primitives, is designed to enable specular refraction through a deferred refraction strategy. Secondly, we leverage Gaussian light field probes (GaussProbe) to encode both ambient light and nearby contents in a unified framework. Thirdly, a depth-based iterative probes query (IterQuery) algorithm is proposed to reduce the parallax errors in our probe-based framework. Experiments demonstrate the speed and accuracy of our approach in recovering transparent objects from complex environments, as well as several applications in computer graphics and vision.
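Of the three contributions, the depth-based iterative probes query (IterQuery) is the most amenable to a sketch: it corrects the parallax between a probe's center and the actual shading point by iterating against the probe's stored depth. Below is a minimal, hypothetical NumPy rendition of such a query; `probe_depth` stands in for a probe texture fetch, and the loop structure is an assumption, not the paper's implementation.

```python
import numpy as np

def iter_query(probe_center, probe_depth, point, direction, n_iters=3):
    """Hypothetical depth-based iterative probe query: refine the lookup
    direction into a light-field probe so the fetched radiance matches
    the surface the ray from `point` along `direction` actually hits,
    reducing probe-to-point parallax.

    probe_depth(dir) -> scene depth stored by the probe along the unit
    direction `dir` (a callable standing in for a texture fetch).
    """
    d = direction / np.linalg.norm(direction)
    hit = point + d                        # provisional hit, refined below
    for _ in range(n_iters):
        lookup = hit - probe_center
        lookup /= np.linalg.norm(lookup)
        hit = probe_center + probe_depth(lookup) * lookup  # onto stored surface
        s = max(np.dot(hit - point, d), 1e-6)
        hit = point + s * d                # re-project onto the query ray
    lookup = hit - probe_center
    return lookup / np.linalg.norm(lookup)

# Toy probe: constant depth 5 (a spherical environment around the probe).
corrected = iter_query(np.zeros(3), lambda _: 5.0,
                       np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```

With this toy probe the loop converges to the true ray-sphere intersection within a few iterations, which is the parallax correction the abstract alludes to.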
Related papers
- TSGS: Improving Gaussian Splatting for Transparent Surface Reconstruction via Normal and De-lighting Priors [39.60777069381983]
We introduce Transparent Surface Gaussian Splatting (TSGS), a new framework that separates geometry learning from appearance refinement.
In the geometry learning stage, TSGS focuses on geometry by using specular-suppressed inputs to accurately represent surfaces.
To enhance depth inference, TSGS employs a first-surface depth extraction method.
arXiv Detail & Related papers (2025-04-17T10:00:09Z)
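The abstract gives no details of the first-surface depth extraction, but the name suggests the standard trick of snapping depth to the first opacity crossing rather than alpha-blending all Gaussians along the ray. A minimal sketch under that assumption (the threshold and interface below are illustrative, not TSGS's code):

```python
import numpy as np

def first_surface_depth(depths, alphas, tau=0.5):
    """Return the depth at which accumulated opacity along a ray first
    crosses `tau`, instead of the usual alpha-weighted mean depth, so
    the estimate locks onto the front face of a transparent shell
    rather than blending front and back surfaces.

    depths, alphas: per-ray Gaussian samples sorted front-to-back.
    """
    transmittance, accumulated = 1.0, 0.0
    for z, a in zip(depths, alphas):
        accumulated += transmittance * a   # standard compositing weight
        transmittance *= 1.0 - a
        if accumulated >= tau:
            return z                       # first surface crossing
    return depths[-1] if len(depths) else np.inf
```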
- GlossGau: Efficient Inverse Rendering for Glossy Surface with Anisotropic Spherical Gaussian [4.5442067197725]
GlossGau is an efficient inverse rendering framework that reconstructs scenes with glossy surfaces while maintaining training and rendering speeds comparable to vanilla 3D-GS.
Experiments demonstrate that GlossGau achieves competitive or superior reconstruction on datasets with glossy surfaces.
arXiv Detail & Related papers (2025-02-19T22:20:57Z)
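The anisotropic spherical Gaussian (ASG) itself is a standard primitive (Xu et al. 2013), so its evaluation can be shown generically; nothing below is GlossGau-specific code:

```python
import numpy as np

def asg(v, x, y, z, lam, mu, c):
    """Anisotropic spherical Gaussian lobe (Xu et al. 2013):
    ASG(v) = c * max(v.z, 0) * exp(-lam*(v.x)^2 - mu*(v.y)^2),
    where (x, y, z) is an orthonormal frame with z the lobe axis;
    distinct bandwidths lam, mu along the tangent and bitangent are
    what make the lobe anisotropic.
    """
    smooth = max(np.dot(v, z), 0.0)                # hemispherical falloff
    return c * smooth * np.exp(-lam * np.dot(v, x) ** 2
                               - mu * np.dot(v, y) ** 2)

# A lobe pointing up, much tighter along x than along y.
v = np.array([0.1, 0.0, 1.0]); v /= np.linalg.norm(v)
val = asg(v, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
          np.array([0.0, 0.0, 1.0]), lam=50.0, mu=5.0, c=1.0)
```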
- GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering [83.69136534797686]
We present GUS-IR, a novel framework designed to address the inverse rendering problem for complicated scenes featuring rough and glossy surfaces.
This paper starts by analyzing and comparing two prominent shading techniques widely used for inverse rendering: forward shading and deferred shading.
We propose a unified shading solution that combines the advantages of both techniques for better decomposition.
arXiv Detail & Related papers (2024-11-12T01:51:05Z)
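The forward/deferred distinction is easy to state in code: forward shading shades each Gaussian and composites the shaded colors, while deferred shading composites attributes into a per-pixel G-buffer and shades once. A schematic sketch (the `shade` callable and attribute set are placeholders, not GUS-IR's interface):

```python
import numpy as np

def composite(attrs, alphas):
    """Front-to-back alpha compositing of per-Gaussian attributes."""
    out, T = np.zeros_like(attrs[0]), 1.0
    for attr, alpha in zip(attrs, alphas):
        out += T * alpha * attr
        T *= 1.0 - alpha
    return out

def forward_shading(normals, albedos, alphas, shade):
    # Shade each Gaussian with its own attributes, then blend colors.
    colors = [shade(n, a) for n, a in zip(normals, albedos)]
    return composite(colors, alphas)

def deferred_shading(normals, albedos, alphas, shade):
    # Blend attributes into a per-pixel G-buffer, then shade once.
    n = composite(normals, alphas)
    n /= np.linalg.norm(n) + 1e-8         # renormalize the blended normal
    a = composite(albedos, alphas)
    return shade(n, a)
```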
- RNG: Relightable Neural Gaussians [19.197099019727826]
We propose a novel 3DGS-based framework that enables the relighting of objects with both hard surfaces and soft boundaries. We also introduce a shadow cue, as well as a depth refinement network to improve shadow accuracy. Our method achieves significantly faster training (1.3 hours) and rendering (60 frames per second) compared to a prior method.
arXiv Detail & Related papers (2024-09-29T13:32:24Z)
- Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering [62.92985004295714]
We present a method that avoids approximations that introduce bias into the renderings and, more importantly, the gradients used for optimization.
We show that by removing these biases our approach improves the generality of radiance cache based inverse rendering, as well as increasing quality in the presence of challenging light transport effects such as specular reflections.
arXiv Detail & Related papers (2024-09-09T17:59:57Z)
- Subsurface Scattering for 3D Gaussian Splatting [10.990813043493642]
3D reconstruction and relighting of objects made from scattering materials present a significant challenge due to the complex light transport beneath the surface.
We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data.
Our approach enables material editing, relighting and novel view synthesis at interactive rates.
arXiv Detail & Related papers (2024-08-22T10:34:01Z)
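OLAT (one light at a time) supervision is generic enough to illustrate: each training photo is captured with a single light switched on, and the optimization renders under exactly that light. A schematic objective under that setup (the callables are placeholders, not the paper's API):

```python
import numpy as np

def olat_loss(render, params, photos, lights):
    """Schematic OLAT objective: render the current shape/transfer
    estimate under each single light and penalize the difference to
    the photo captured with only that light on.
    """
    total = 0.0
    for photo, light in zip(photos, lights):
        total += np.mean((render(params, light) - photo) ** 2)
    return total / len(photos)
```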
- 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision.
arXiv Detail & Related papers (2024-07-09T17:59:30Z)
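The key ingredient here is a bounding proxy per particle that a BVH builder can ingest. The paper uses bounding meshes; as a simplified stand-in, the axis-aligned box of a Gaussian's k-sigma ellipsoid has the closed form sketched below (half-extent k * sqrt(cov[i, i]) along each axis):

```python
import numpy as np

def gaussian_aabb(mean, cov, k=3.0):
    """Axis-aligned box enclosing the k-sigma ellipsoid of a 3D
    Gaussian: the extent of the ellipsoid along axis e is
    k * sqrt(e^T cov e), which for coordinate axes reduces to the
    diagonal of the covariance. A BVH builder can consume the
    (lo, hi) corners directly.
    """
    half = k * np.sqrt(np.diag(cov))
    return mean - half, mean + half

# An anisotropic Gaussian stretched along x gets a matching box:
lo, hi = gaussian_aabb(np.zeros(3), np.diag([0.04, 0.01, 0.01]))
```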
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
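The voxel-based ray sampling step can be sketched with the classic slab test: restrict samples to the parametric interval where the ray provably crosses a given voxel, so every sampled point has 3D neighbors to reason about. Names and the sampling policy below are illustrative, not CVT-xRF's API:

```python
import numpy as np

def ray_voxel_interval(origin, direction, vmin, vmax):
    """Slab test: return [t_near, t_far] where the ray crosses the
    axis-aligned voxel [vmin, vmax], or None if it misses."""
    inv = 1.0 / np.where(direction == 0, 1e-12, direction)
    t0 = (vmin - origin) * inv
    t1 = (vmax - origin) * inv
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    if t_near > t_far or t_far < 0:
        return None
    return max(t_near, 0.0), t_far

def sample_in_voxel(origin, direction, vmin, vmax, n=8, rng=None):
    """Draw n ray samples guaranteed to lie inside the voxel."""
    rng = rng or np.random.default_rng()
    hit = ray_voxel_interval(origin, direction, vmin, vmax)
    if hit is None:
        return None
    t = rng.uniform(hit[0], hit[1], size=n)
    return origin[None] + t[:, None] * direction[None]
```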
- Neural Radiance Fields for Transparent Object Using Visual Hull [0.8158530638728501]
The recently introduced Neural Radiance Fields (NeRF) is a view synthesis method.
We propose a NeRF-based method consisting of the following three steps: First, we reconstruct a three-dimensional shape of a transparent object using the visual hull.
Second, we simulate the refraction of rays inside the transparent object according to Snell's law. Last, we sample points along the refracted rays and feed them into NeRF.
arXiv Detail & Related papers (2023-12-13T13:15:19Z)
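The refraction step leans on the standard vector form of Snell's law, worth spelling out since it also covers total internal reflection (this is the generic graphics formula, not the paper's code):

```python
import numpy as np

def refract(d, n, eta):
    """Vector form of Snell's law.

    d   : unit incident direction, pointing toward the surface
    n   : unit surface normal, pointing against d
    eta : ratio of refractive indices n_incident / n_transmitted
    Returns the unit transmitted direction, or None when sin^2 of the
    transmitted angle exceeds 1 (total internal reflection).
    """
    cos_i = -np.dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                            # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Air-to-glass at normal incidence passes straight through:
t = refract(np.array([0.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0]), 1.0 / 1.5)
```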
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
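The normal rule in this entry has a direct reading: the covariance eigenvector with the smallest eigenvalue is the flattest axis of the Gaussian, and a sign flip toward the viewer plays the role of the directional masking. A sketch under that reading:

```python
import numpy as np

def gaussian_normal(cov, view_dir):
    """Normal of a 3D Gaussian as the eigenvector of its covariance
    with the smallest eigenvalue (its flattest axis), flipped so it
    faces the camera; the sign flip stands in for the paper's
    directional masking scheme.
    """
    _, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    normal = eigvecs[:, 0]                # shortest principal axis
    if np.dot(normal, view_dir) > 0:      # view_dir points camera -> Gaussian
        normal = -normal
    return normal

# A disk-like Gaussian flattened along z yields a normal of +/- z:
n = gaussian_normal(np.diag([1.0, 1.0, 1e-4]), np.array([0.0, 0.0, 1.0]))
```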
- GS-IR: 3D Gaussian Splatting for Inverse Rendering [71.14234327414086]
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS).
We extend GS, a top-performance representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions.
The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering.
arXiv Detail & Related papers (2023-11-26T02:35:09Z)