GSCache: Real-Time Radiance Caching for Volume Path Tracing using 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2507.19718v2
- Date: Sat, 02 Aug 2025 23:59:11 GMT
- Title: GSCache: Real-Time Radiance Caching for Volume Path Tracing using 3D Gaussian Splatting
- Authors: David Bauer, Qi Wu, Hamid Gadirov, Kwan-Liu Ma
- Abstract summary: In scientific visualization, volume rendering plays a crucial role in helping researchers analyze and interpret complex 3D data. One of the most prominent issues is slow rendering performance and high pixel variance caused by Monte Carlo integration. We introduce a novel radiance caching approach for path-traced volume rendering.
- Score: 26.663435131370264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-time path tracing is rapidly becoming the standard for rendering in entertainment and professional applications. In scientific visualization, volume rendering plays a crucial role in helping researchers analyze and interpret complex 3D data. Recently, photorealistic rendering techniques have gained popularity in scientific visualization, yet they face significant challenges. One of the most prominent issues is slow rendering performance and high pixel variance caused by Monte Carlo integration. In this work, we introduce a novel radiance caching approach for path-traced volume rendering. Our method leverages advances in volumetric scene representation and adapts 3D Gaussian splatting to function as a multi-level, path-space radiance cache. This cache is designed to be trainable on the fly, dynamically adapting to changes in scene parameters such as lighting configurations and transfer functions. By incorporating our cache, we achieve less noisy, higher-quality images without increasing rendering costs. To evaluate our approach, we compare it against a baseline path tracer that supports uniform sampling and next-event estimation, as well as against the state-of-the-art in neural radiance caching. Through both quantitative and qualitative analyses, we demonstrate that our path-space radiance cache is a robust solution that is easy to integrate and significantly enhances the rendering quality of volumetric visualization applications while maintaining comparable computational efficiency.
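For readers unfamiliar with the idea, the sketch below illustrates how a path-space radiance cache plugs into a volume path tracer: after a few scattering events, a path is terminated by a cache lookup instead of further random sampling, and finished paths feed estimates back into the cache. This is a minimal illustrative sketch, not the authors' implementation; the hash-grid cache, the constant extinction coefficient, the isotropic phase function, and all function names are assumptions standing in for the paper's trainable 3D Gaussian cache.

```python
# Hedged sketch of cache-assisted volume path tracing (not the GSCache code).
import numpy as np

rng = np.random.default_rng(0)

class RadianceCache:
    """Stand-in for the paper's trainable 3DGS cache: returns an outgoing-radiance
    estimate for a (position, direction) query and is refreshed from finished paths."""
    def __init__(self):
        self.table = {}  # trivial hash grid, purely illustrative

    def query(self, x, d):
        key = tuple(np.floor(x * 4.0).astype(int))      # coarse spatial bin
        return self.table.get(key, np.zeros(3))         # cached RGB radiance

    def update(self, x, d, radiance):
        key = tuple(np.floor(x * 4.0).astype(int))
        old = self.table.get(key, np.zeros(3))
        self.table[key] = 0.9 * old + 0.1 * radiance    # running-average "training"

def trace_volume_path(x, d, cache, max_bounces=8, cache_after=2):
    """Random walk through a homogeneous dummy volume; after `cache_after`
    bounces the path is terminated by a cache lookup instead of more sampling."""
    throughput = np.ones(3)
    radiance = np.zeros(3)
    for bounce in range(max_bounces):
        t = -np.log(1.0 - rng.random())                 # free-flight distance, sigma_t = 1
        x = x + t * d
        if np.linalg.norm(x) > 4.0:                     # left the volume: add background light
            radiance += throughput * np.array([0.1, 0.1, 0.2])
            break
        if bounce >= cache_after:                       # terminate the path into the cache
            radiance += throughput * cache.query(x, d)
            break
        throughput *= 0.8                               # scattering albedo
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                          # isotropic phase function
    return radiance, x, d

cache = RadianceCache()
estimate, x_end, d_end = trace_volume_path(np.zeros(3), np.array([0.0, 0.0, 1.0]), cache)
cache.update(x_end, d_end, estimate)                    # on-the-fly cache refresh
print(estimate)
```

In the paper, the cache is the multi-level 3D Gaussian splatting representation described in the abstract, trained on the fly as rendering proceeds; the hash grid here only stands in for it to keep the sketch self-contained.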
Related papers
- Virtual Memory for 3D Gaussian Splatting [1.278093617645299]
Gaussian Splatting represents a breakthrough in the field of novel view rendering. Recent advances have increased the size of Splatting scenes that can be created.
arXiv Detail & Related papers (2025-06-24T08:31:33Z) - LODGE: Level-of-Detail Large-Scale Gaussian Splatting with Efficient Rendering [68.93333348474988]
We present a novel level-of-detail (LOD) method for 3D Gaussian Splatting on memory-constrained devices. Our approach iteratively selects optimal subsets of Gaussians based on camera distance. Our method achieves state-of-the-art performance on both outdoor (Hierarchical 3DGS) and indoor (Zip-NeRF) datasets.
arXiv Detail & Related papers (2025-05-29T06:50:57Z) - Decoupling Appearance Variations with 3D Consistent Features in Gaussian Splatting [50.98884579463359]
We propose DAVIGS, a method that decouples appearance variations in a plug-and-play manner. By transforming the rendering results at the image level instead of the Gaussian level, our approach can model appearance variations with minimal optimization time and memory overhead. We validate our method on several appearance-variant scenes, and demonstrate that it achieves state-of-the-art rendering quality with minimal training time and memory usage.
arXiv Detail & Related papers (2025-01-18T14:55:58Z) - Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse Primitives [60.217580865237835]
3D Gaussian Splatting (3D-GS) is a recent 3D scene reconstruction technique that enables real-time rendering of novel views by modeling scenes as parametric point clouds of differentiable 3D Gaussians. We identify and address two key inefficiencies in 3D-GS to substantially improve rendering speed. Our Speedy-Splat approach combines these techniques to accelerate average rendering speed by a drastic $6.71\times$ across scenes from the Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets.
arXiv Detail & Related papers (2024-11-30T20:25:56Z) - Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering [62.92985004295714]
We present a method that avoids approximations that introduce bias into the renderings and, more importantly, the gradients used for optimization.
We show that by removing these biases our approach improves the generality of radiance cache based inverse rendering and increases quality in the presence of challenging light transport effects such as specular reflections.
arXiv Detail & Related papers (2024-09-09T17:59:57Z) - Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields [13.729716867839509]
We propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance.
In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2024-08-07T14:56:34Z) - Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z) - Real-time Neural Radiance Caching for Path Tracing [67.46991813306708]
We present a real-time neural radiance caching method for path-traced global illumination.
Our system is designed to handle fully dynamic scenes, and makes no assumptions about the lighting, geometry, and materials.
We demonstrate significant noise reduction at the cost of little induced bias, and report state-of-the-art, real-time performance on a number of challenging scenarios. A minimal illustrative sketch of this query-and-train loop appears after this list.
arXiv Detail & Related papers (2021-06-23T13:09:58Z)
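The last entry above (real-time neural radiance caching) is the state-of-the-art baseline the GSCache abstract compares against. Below is a minimal, hedged sketch of that general idea in PyTorch: a small MLP maps an encoded path vertex to cached radiance, is queried to terminate short rendering paths, and is trained online from a few longer training paths each frame. The network size, the raw position/direction encoding, the optimizer settings, and the placeholder targets are all assumptions; the original system runs a highly optimized GPU implementation rather than this toy model.

```python
# Hedged sketch of the neural-radiance-caching idea (query + online training).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralRadianceCache(nn.Module):
    """Tiny MLP standing in for a neural radiance cache: maps an encoded
    path vertex (position + direction here) to an RGB radiance estimate."""
    def __init__(self, in_dim=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, d):
        return self.net(torch.cat([x, d], dim=-1))

cache = NeuralRadianceCache()
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

# Rendering: query the cache at the vertices where short paths are terminated.
x_q = torch.rand(1024, 3)
d_q = F.normalize(torch.randn(1024, 3), dim=-1)
with torch.no_grad():
    cached_radiance = cache(x_q, d_q)       # reused in place of further path tracing

# Training: a small batch of longer paths provides (noisy) radiance targets,
# so the cache adapts online to the current scene, lighting, and materials.
x_t = torch.rand(128, 3)
d_t = F.normalize(torch.randn(128, 3), dim=-1)
target = torch.rand(128, 3)                 # placeholder for path-traced radiance
loss = F.mse_loss(cache(x_t, d_t), target)
opt.zero_grad()
loss.backward()
opt.step()                                  # no precomputation or baking required
```

GSCache targets the same use of a cache, terminating noisy paths early and adapting on the fly to lighting and transfer-function changes, but replaces the neural network with a multi-level 3D Gaussian splatting representation.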