Light4GS: Lightweight Compact 4D Gaussian Splatting Generation via Context Model
- URL: http://arxiv.org/abs/2503.13948v1
- Date: Tue, 18 Mar 2025 06:28:13 GMT
- Title: Light4GS: Lightweight Compact 4D Gaussian Splatting Generation via Context Model
- Authors: Mufan Liu, Qi Yang, He Huang, Wenjie Huang, Zhenlong Yuan, Zhu Li, Yiling Xu,
- Abstract summary: 3D Gaussian Splatting (3DGS) has emerged as an efficient and high-fidelity paradigm for novel view synthesis. To adapt 3DGS for dynamic content, deformable 3DGS incorporates temporally deformable primitives with learnable latent embeddings to capture complex motions. Despite its impressive performance, the high-dimensional embeddings and vast number of primitives lead to substantial storage requirements.
- Score: 21.375070073632944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has emerged as an efficient and high-fidelity paradigm for novel view synthesis. To adapt 3DGS for dynamic content, deformable 3DGS incorporates temporally deformable primitives with learnable latent embeddings to capture complex motions. Despite its impressive performance, the high-dimensional embeddings and vast number of primitives lead to substantial storage requirements. In this paper, we introduce a \textbf{Light}weight \textbf{4}D\textbf{GS} framework, called Light4GS, that employs significance pruning with a deep context model to provide a lightweight, storage-efficient dynamic 3DGS representation. The proposed Light4GS is based on 4DGS, a typical representation of deformable 3DGS. Specifically, our framework is built upon two core components: (1) a spatio-temporal significance pruning strategy that eliminates over 64\% of the deformable primitives, followed by an entropy-constrained spherical harmonics compression applied to the remainder; and (2) a deep context model that integrates intra- and inter-prediction with a hyperprior into a coarse-to-fine context structure to enable efficient multiscale latent embedding compression. Our approach achieves over 120x compression and increases rendering FPS by up to 20\% compared to the baseline 4DGS, and is also superior to frame-wise state-of-the-art 3DGS compression methods, demonstrating the effectiveness of both the intra- and inter-prediction in Light4GS without sacrificing rendering quality.
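The abstract's first component, spatio-temporal significance pruning, can be illustrated with a minimal sketch: score each primitive by its aggregated contribution across frames, then keep only the top fraction. The scoring function and the `keep_ratio` below are hypothetical placeholders for illustration, not the paper's exact criterion.

```python
import numpy as np

def prune_by_significance(opacities, contributions, keep_ratio=0.36):
    """Keep the most significant primitives; hypothetical scoring, not the
    paper's exact method. With keep_ratio=0.36, over 64% are pruned, matching
    the fraction reported in the abstract.

    opacities:     (N,)   per-Gaussian opacity
    contributions: (N, T) per-Gaussian rendering contribution over T frames
    """
    # Aggregate each primitive's contribution over time, weighted by opacity.
    scores = opacities * contributions.sum(axis=1)
    n_keep = max(1, int(len(scores) * keep_ratio))
    # Indices of the n_keep highest-scoring primitives, in ascending order.
    keep_idx = np.sort(np.argsort(scores)[-n_keep:])
    return keep_idx

rng = np.random.default_rng(0)
kept = prune_by_significance(rng.random(1000), rng.random((1000, 30)))
print(len(kept))  # 360 primitives retained, i.e. 64% pruned
```

In the paper's pipeline, the retained primitives would then undergo entropy-constrained spherical harmonics compression, while the latent embeddings are handled by the deep context model.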
Related papers
- TED-4DGS: Temporally Activated and Embedding-based Deformation for 4DGS Compression [14.026420167067117]
We present TED-4DGS, a temporally activated and embedding-based deformation scheme for rate-distortion-optimized 4DGS compression. Our scheme achieves state-of-the-art rate-distortion performance on several real-world datasets.
arXiv Detail & Related papers (2025-12-05T05:46:35Z) - P-4DGS: Predictive 4D Gaussian Splatting with 90$\times$ Compression [26.130131551764077]
3D Gaussian Splatting (3DGS) has garnered significant attention due to its superior scene representation fidelity and real-time rendering performance. Despite achieving promising results, most existing algorithms overlook the substantial temporal and spatial redundancies inherent in dynamic scenes. We propose P-4DGS, a novel dynamic 3DGS representation for compact 4D scene modeling.
arXiv Detail & Related papers (2025-10-11T05:19:41Z) - D-FCGS: Feedforward Compression of Dynamic Gaussian Splatting for Free-Viewpoint Videos [12.24209693552492]
Free-viewpoint video (FVV) enables immersive 3D experiences, but efficient compression of dynamic 3D representations remains a major challenge. This paper presents Feedforward Compression of Dynamic Gaussian Splatting (D-FCGS), a novel feedforward framework for compressing temporally correlated Gaussian point cloud sequences. Experiments show that it matches the rate-distortion performance of optimization-based methods, achieving over 40 times compression in under 2 seconds.
arXiv Detail & Related papers (2025-07-08T10:39:32Z) - Speedy Deformable 3D Gaussian Splatting: Fast Rendering and Compression of Dynamic Scenes [57.69608119350651]
Recent extensions of 3D Gaussian Splatting (3DGS) to dynamic scenes achieve high-quality novel view synthesis by using neural networks to predict the time-varying deformation of each Gaussian. However, performing per-Gaussian neural inference at every frame poses a significant bottleneck, limiting rendering speed and increasing memory and compute requirements. We present Speedy Deformable 3D Gaussian Splatting (SpeeDe3DGS), a general pipeline for accelerating the rendering speed of dynamic 3DGS and 4DGS representations by reducing neural inference through two complementary techniques.
arXiv Detail & Related papers (2025-06-09T16:30:48Z) - Steepest Descent Density Control for Compact 3D Gaussian Splatting [72.54055499344052]
3D Gaussian Splatting (3DGS) has emerged as a powerful technique for real-time, high-resolution novel view synthesis. We propose a theoretical framework that demystifies and improves density control in 3DGS. We introduce SteepGS, incorporating steepest descent density control, a principled strategy that minimizes loss while maintaining a compact point cloud.
arXiv Detail & Related papers (2025-05-08T18:41:38Z) - 4DGC: Rate-Aware 4D Gaussian Compression for Efficient Streamable Free-Viewpoint Video [56.04182926886754]
3D Gaussian Splatting (3DGS) has substantial potential for enabling photorealistic Free-Viewpoint Video (FVV) experiences.
Existing methods typically handle dynamic 3DGS representation and compression separately, neglecting motion information and the rate-distortion trade-off during training.
We propose 4DGC, a rate-aware 4D Gaussian compression framework that significantly reduces storage size while maintaining superior RD performance for FVV.
arXiv Detail & Related papers (2025-03-24T08:05:27Z) - Locality-aware Gaussian Compression for Fast and High-quality Rendering [37.16956462469969]
We present LocoGS, a locality-aware 3D Gaussian Splatting (3DGS) framework that exploits the spatial coherence of 3D Gaussians for compact modeling of scenes. We first analyze the local coherence of 3D Gaussian attributes, and propose a novel locality-aware 3D Gaussian representation that effectively encodes locally-coherent Gaussian attributes.
arXiv Detail & Related papers (2025-01-10T07:19:41Z) - HEMGS: A Hybrid Entropy Model for 3D Gaussian Splatting Data Compression [23.015728369640136]
3D Gaussian Splatting (3DGS) is popular for 3D modeling and image rendering, but its large data volume creates significant challenges for storage and transmission. We propose a hybrid entropy model for 3DGS data compression, comprising two primary components: a hyperprior network and an autoregressive network. Our method achieves about a 40% average reduction in size over our baseline method while maintaining rendering quality, achieving state-of-the-art compression results.
arXiv Detail & Related papers (2024-11-27T16:08:59Z) - MEGA: Memory-Efficient 4D Gaussian Splatting for Dynamic Scenes [49.36091070642661]
This paper introduces a memory-efficient framework for 4DGS.
It achieves a storage reduction of approximately 190$\times$ and 125$\times$ on the Technicolor and Neural 3D Video datasets, respectively.
It maintains comparable rendering speeds and scene representation quality, setting a new standard in the field.
arXiv Detail & Related papers (2024-10-17T14:47:08Z) - Fast Feedforward 3D Gaussian Splatting Compression [55.149325473447384]
Fast Feedforward 3D Gaussian Splatting Compression (FCGS) is an optimization-free model that can compress 3DGS representations rapidly in a single feed-forward pass. FCGS achieves a compression ratio of over 20x while maintaining fidelity, surpassing most per-scene SOTA optimization-based methods.
arXiv Detail & Related papers (2024-10-10T15:13:08Z) - GS-Net: Generalizable Plug-and-Play 3D Gaussian Splatting Module [19.97023389064118]
We propose GS-Net, a plug-and-play 3DGS module that densifies Gaussian ellipsoids from sparse SfM point clouds.
Experiments demonstrate that applying GS-Net to 3DGS yields a PSNR improvement of 2.08 dB for conventional viewpoints and 1.86 dB for novel viewpoints.
arXiv Detail & Related papers (2024-09-17T16:03:19Z) - ContextGS: Compact 3D Gaussian Splatting with Anchor Level Context Model [77.71796503321632]
Our work pioneers the context model in the anchor level for 3DGS representation, yielding an impressive size reduction of over 100 times compared to vanilla 3DGS and 15 times compared to the most recent state-of-the-art work Scaffold-GS.
arXiv Detail & Related papers (2024-05-31T09:23:39Z) - SAGS: Structure-Aware 3D Gaussian Splatting [53.6730827668389]
We propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene.
SAGS achieves state-of-the-art rendering performance and reduced storage requirements on benchmark novel-view synthesis datasets.
arXiv Detail & Related papers (2024-04-29T23:26:30Z) - DreamGaussian4D: Generative 4D Gaussian Splatting [56.49043443452339]
We introduce DreamGaussian4D (DG4D), an efficient 4D generation framework that builds on Gaussian Splatting (GS). Our key insight is that combining explicit modeling of spatial transformations with static GS makes an efficient and powerful representation for 4D generation. Video generation methods have the potential to offer valuable spatial-temporal priors, enhancing high-quality 4D generation.
arXiv Detail & Related papers (2023-12-28T17:16:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.