GS-Cache: A GS-Cache Inference Framework for Large-scale Gaussian Splatting Models
- URL: http://arxiv.org/abs/2502.14938v1
- Date: Thu, 20 Feb 2025 14:01:17 GMT
- Title: GS-Cache: A GS-Cache Inference Framework for Large-scale Gaussian Splatting Models
- Authors: Miao Tao, Yuanzhen Zhou, Haoran Xu, Zeyu He, Zhenyu Yang, Yuchang Zhang, Zhongling Su, Linning Xu, Zhenxiang Ma, Rong Fu, Hengjie Li, Xingcheng Zhang, Jidong Zhai
- Abstract summary: Rendering large-scale 3D Gaussian Splatting (3DGS) models faces significant challenges in achieving real-time, high-fidelity performance on consumer-grade devices. We propose GS-Cache, an end-to-end framework that seamlessly integrates 3DGS's advanced representation with a highly optimized rendering system.
- Score: 23.135271367322034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rendering large-scale 3D Gaussian Splatting (3DGS) models faces significant challenges in achieving real-time, high-fidelity performance on consumer-grade devices. Fully realizing the potential of 3DGS in applications such as virtual reality (VR) requires addressing critical system-level challenges to support real-time, immersive experiences. We propose GS-Cache, an end-to-end framework that seamlessly integrates 3DGS's advanced representation with a highly optimized rendering system. GS-Cache introduces a cache-centric pipeline to eliminate redundant computations, an efficiency-aware scheduler for elastic multi-GPU rendering, and optimized CUDA kernels to overcome computational bottlenecks. This synergy between 3DGS and system design enables GS-Cache to achieve up to 5.35x performance improvement, 35% latency reduction, and 42% lower GPU memory usage, supporting 2K binocular rendering at over 120 FPS with high visual quality. By bridging the gap between 3DGS's representation power and the demands of VR systems, GS-Cache establishes a scalable and efficient framework for real-time neural rendering in immersive environments.
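As a rough, hypothetical illustration of the cache-centric idea described in the abstract (not the paper's actual implementation), the Python sketch below caches one view-dependent intermediate result, the culled and depth-sorted Gaussian index list, and reuses it while the camera stays within a small pose tolerance. The names (ViewCache, cull_and_sort) and tolerance values are assumptions introduced for illustration only.

```python
import numpy as np

# Hypothetical sketch of a cache-centric rendering loop: an expensive
# per-view intermediate (culled + depth-sorted Gaussian indices) is reused
# while the camera stays close to the pose it was computed for.
class ViewCache:
    def __init__(self, pos_tol=0.05, rot_tol_deg=2.0):
        self.pos_tol = pos_tol                  # max translation (scene units) before recompute
        self.rot_tol = np.deg2rad(rot_tol_deg)  # max rotation before recompute
        self.key_pose = None                    # (position, forward) the entry was built for
        self.visible_sorted = None              # cached culled + sorted Gaussian indices

    def _is_fresh(self, position, forward):
        if self.key_pose is None:
            return False
        key_pos, key_fwd = self.key_pose
        moved = np.linalg.norm(position - key_pos) > self.pos_tol
        cos = np.clip(np.dot(forward, key_fwd), -1.0, 1.0)  # forward vectors assumed unit-length
        turned = np.arccos(cos) > self.rot_tol
        return not (moved or turned)

    def get(self, position, forward, cull_and_sort):
        """Return cached indices on a hit; otherwise recompute via cull_and_sort."""
        if self._is_fresh(position, forward):
            return self.visible_sorted          # cache hit: skip culling/sorting entirely
        self.visible_sorted = cull_and_sort(position, forward)
        self.key_pose = (np.asarray(position, dtype=float).copy(),
                         np.asarray(forward, dtype=float).copy())
        return self.visible_sorted
```

In binocular VR the two eye views are nearly identical, so sharing one cached result between them is one plausible source of the redundant computation such a pipeline could eliminate; the actual system additionally relies on an efficiency-aware multi-GPU scheduler and custom CUDA kernels, which this sketch does not model.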
Related papers
- 3DGabSplat: 3D Gabor Splatting for Frequency-adaptive Radiance Field Rendering [50.04967868036964]
3D Gaussian Splatting (3DGS) has enabled real-time rendering while maintaining high-fidelity novel view synthesis. We propose 3D Gabor Splatting (3DGabSplat) that incorporates a novel 3D Gabor-based primitive with multiple directional 3D frequency responses. We achieve a 1.35 dB PSNR gain over 3DGS with a simultaneously reduced number of primitives and memory consumption.
arXiv Detail & Related papers (2025-08-07T12:49:44Z) - Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching [57.7533917467934]
EasyCache is a training-free acceleration framework for video diffusion models. We conduct comprehensive studies on various large-scale video generation models, including OpenSora, Wan2.1, and HunyuanVideo. Our method achieves leading acceleration performance, reducing inference time by up to 2.1-3.3× compared to the original baselines.
arXiv Detail & Related papers (2025-07-03T17:59:54Z) - Speedy Deformable 3D Gaussian Splatting: Fast Rendering and Compression of Dynamic Scenes [57.69608119350651]
Recent extensions of 3D Gaussian Splatting (3DGS) to dynamic scenes achieve high-quality novel view synthesis by using neural networks to predict the time-varying deformation of each Gaussian. However, performing per-Gaussian neural inference at every frame poses a significant bottleneck, limiting rendering speed and increasing memory and compute requirements. We present Speedy Deformable 3D Gaussian Splatting (SpeeDe3DGS), a general pipeline for accelerating the rendering speed of dynamic 3DGS and 4DGS representations by reducing neural inference through two complementary techniques.
arXiv Detail & Related papers (2025-06-09T16:30:48Z) - STREAMINGGS: Voxel-Based Streaming 3D Gaussian Splatting with Memory Optimization and Architectural Support [16.4682107511283]
3DGS struggles to meet the real-time requirement of 90 frames per second on resource-constrained mobile devices. Existing accelerators focus on compute efficiency but overlook memory efficiency, leading to redundant DRAM traffic. We introduce STREAMINGGS, a fully streaming 3DGS algorithm-architecture co-design that achieves fine-grained pipelining.
arXiv Detail & Related papers (2025-06-09T07:51:34Z) - FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting [57.97160965244424]
3D Gaussian splatting (3DGS) has enabled various applications in 3D scene representation and novel view synthesis. Previous approaches have focused on pruning less important Gaussians, effectively compressing 3DGS. We present an elastic inference method for 3DGS, achieving substantial rendering performance without additional fine-tuning.
arXiv Detail & Related papers (2025-06-04T17:17:57Z) - LODGE: Level-of-Detail Large-Scale Gaussian Splatting with Efficient Rendering [68.93333348474988]
We present a novel level-of-detail (LOD) method for 3D Gaussian Splatting on memory-constrained devices. Our approach iteratively selects optimal subsets of Gaussians based on camera distance. Our method achieves state-of-the-art performance on both outdoor (Hierarchical 3DGS) and indoor (Zip-NeRF) datasets.
arXiv Detail & Related papers (2025-05-29T06:50:57Z) - Steepest Descent Density Control for Compact 3D Gaussian Splatting [72.54055499344052]
3D Gaussian Splatting (3DGS) has emerged as a powerful technique for real-time, high-resolution novel view synthesis. We propose a theoretical framework that demystifies and improves density control in 3DGS. We introduce SteepGS, incorporating steepest density control, a principled strategy that minimizes loss while maintaining a compact point cloud.
arXiv Detail & Related papers (2025-05-08T18:41:38Z) - L3GS: Layered 3D Gaussian Splats for Efficient 3D Scene Delivery [10.900680596540377]
3D Gaussian splats (3DGS) can offer the best of both worlds, with high visual quality and efficient rendering at real-time frame rates.
The goal of this work is to create an efficient 3D content delivery framework that allows users to view high quality 3D scenes with 3DGS as the underlying data representation.
arXiv Detail & Related papers (2025-04-07T21:23:32Z) - CityGS-X: A Scalable Architecture for Efficient and Geometrically Accurate Large-Scale Scene Reconstruction [79.1905347777988]
CityGS-X is a scalable architecture built on a novel parallelized hybrid hierarchical 3D representation (PH2-3D).
It consistently outperforms existing methods in terms of faster training times, larger rendering capacities, and more accurate geometric details in large-scale scenes.
arXiv Detail & Related papers (2025-03-29T11:33:39Z) - DashGaussian: Optimizing 3D Gaussian Splatting in 200 Seconds [71.37326848614133]
We propose DashGaussian, a scheduling scheme over the optimization complexity of 3DGS.
We show that our method accelerates the optimization of various 3DGS backbones by 45.7% on average.
arXiv Detail & Related papers (2025-03-24T07:17:27Z) - GauRast: Enhancing GPU Triangle Rasterizers to Accelerate 3D Gaussian Splatting [3.275890592583965]
3D Gaussian Splatting (3DGS) is an emerging high-quality 3D rendering method.
Previous efforts to accelerate 3DGS rely on dedicated accelerators that require substantial integration overhead and hardware costs.
This work proposes an acceleration strategy that leverages the similarities between the 3DGS pipeline and the highly optimized conventional graphics pipeline.
arXiv Detail & Related papers (2025-03-20T19:54:05Z) - S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction [58.37746062258149]
3D Gaussian Splatting (3DGS) has reshaped the field of 3D reconstruction, achieving impressive rendering quality and speed.
Existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases.
We propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction.
arXiv Detail & Related papers (2025-03-11T09:37:13Z) - LiteGS: A High-Performance Modular Framework for Gaussian Splatting Training [0.21756081703275998]
LiteGS is a high-performance and modular framework that enhances both the efficiency and usability of Gaussian splatting.
LiteGS achieves a 3.4x speedup over the original 3DGS implementation while reducing memory usage by approximately 30%.
arXiv Detail & Related papers (2025-03-03T05:52:02Z) - Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse Primitives [60.217580865237835]
3D Gaussian Splatting (3D-GS) is a recent 3D scene reconstruction technique that enables real-time rendering of novel views by modeling scenes as parametric point clouds of differentiable 3D Gaussians. We identify and address two key inefficiencies in 3D-GS, achieving substantial improvements in rendering speed, model size, and training time. Our Speedy-Splat approach combines these techniques to accelerate average rendering speed by a drastic 6.71× across scenes from the Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets, with 10.6× fewer primitives than 3D-GS.
arXiv Detail & Related papers (2024-11-30T20:25:56Z) - HiCoM: Hierarchical Coherent Motion for Streamable Dynamic Scene with 3D Gaussian Splatting [7.507657419706855]
This paper proposes an efficient framework, dubbed HiCoM, with three key components. First, we construct a compact and robust initial 3DGS representation using a perturbation smoothing strategy. Next, we introduce a Hierarchical Coherent Motion mechanism that leverages the inherent non-uniform distribution and local consistency of 3D Gaussians. Experiments conducted on two widely used datasets show that our framework improves learning efficiency of the state-of-the-art methods by about 20%.
arXiv Detail & Related papers (2024-11-12T04:40:27Z) - Fast Feedforward 3D Gaussian Splatting Compression [55.149325473447384]
FCGS is an optimization-free model that can compress 3DGS representations rapidly in a single feed-forward pass.
FCGS achieves a compression ratio of over 20X while maintaining fidelity, surpassing most per-scene SOTA optimization-based methods.
arXiv Detail & Related papers (2024-10-10T15:13:08Z) - GS-Net: Generalizable Plug-and-Play 3D Gaussian Splatting Module [19.97023389064118]
We propose GS-Net, a plug-and-play 3DGS module that densifies Gaussian ellipsoids from sparse SfM point clouds.
Experiments demonstrate that applying GS-Net to 3DGS yields a PSNR improvement of 2.08 dB for conventional viewpoints and 1.86 dB for novel viewpoints.
arXiv Detail & Related papers (2024-09-17T16:03:19Z) - LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming [4.209963145038135]
XR requires efficient streaming of 3D online worlds, challenging current 3DGS representations to adapt to bandwidth-constrained environments. This paper proposes LapisGS, a layered 3DGS that supports adaptive streaming and progressive rendering.
arXiv Detail & Related papers (2024-08-27T07:06:49Z) - EfficientGS: Streamlining Gaussian Splatting for Large-Scale High-Resolution Scene Representation [29.334665494061113]
'EfficientGS' is an advanced approach that optimizes 3DGS for high-resolution, large-scale scenes.
We analyze the densification process in 3DGS and identify areas of Gaussian over-proliferation.
We propose a selective strategy, limiting Gaussian increase to key primitives, thereby enhancing the representational efficiency.
arXiv Detail & Related papers (2024-04-19T10:32:30Z) - 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering [103.32717396287751]
We propose 4D Gaussian Splatting (4D-GS) as a holistic representation for dynamic scenes.
A neural voxel encoding algorithm inspired by HexPlane is proposed to efficiently build features from 4D neural voxels.
Our 4D-GS method achieves real-time rendering under high resolutions, 82 FPS at 800×800 resolution on an RTX 3090 GPU.
arXiv Detail & Related papers (2023-10-12T17:21:41Z) - EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction [67.11722682878722]
This work presents EfficientViT, a new family of high-resolution vision models with novel multi-scale linear attention.
Our multi-scale linear attention achieves the global receptive field and multi-scale learning.
EfficientViT delivers remarkable performance gains over previous state-of-the-art models.
arXiv Detail & Related papers (2022-05-29T20:07:23Z)