EntropyGS: An Efficient Entropy Coding on 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2508.10227v1
- Date: Wed, 13 Aug 2025 22:48:49 GMT
- Title: EntropyGS: An Efficient Entropy Coding on 3D Gaussian Splatting
- Authors: Yuning Huang, Jiahao Pang, Fengqing Zhu, Dong Tian
- Abstract summary: 3DGS demonstrates fast training/rendering with superior visual quality. We begin with a correlation and statistical analysis of 3DGS Gaussian attributes. A factorized and parameterized entropy coding method, EntropyGS, is proposed.
- Score: 10.987074189295367
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As an emerging novel view synthesis approach, 3D Gaussian Splatting (3DGS) demonstrates fast training/rendering with superior visual quality. The two tasks of 3DGS, Gaussian creation and view rendering, are typically separated over time or across devices, and thus storage, transmission, and ultimately compression of 3DGS Gaussians become necessary. We begin with a correlation and statistical analysis of 3DGS Gaussian attributes. An inspiring finding in this work reveals that spherical harmonic AC attributes precisely follow Laplace distributions, while mixtures of Gaussian distributions can approximate rotation, scaling, and opacity. Additionally, spherical harmonic AC attributes manifest weak correlations with other attributes, except for correlations inherited from the color space. A factorized and parameterized entropy coding method, EntropyGS, is hereinafter proposed. During encoding, distribution parameters of each Gaussian attribute are estimated to assist their entropy coding. The quantization for entropy coding is adaptively performed according to Gaussian attribute types. EntropyGS demonstrates about 30x rate reduction on benchmark datasets while maintaining rendering quality similar to the input 3DGS data, with fast encoding and decoding times.
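The core idea in the abstract — model each attribute with a parametric distribution, quantize, and entropy-code against the fitted model — can be sketched for the Laplace case. The following is a minimal illustration, not the paper's implementation: the function names, the uniform quantizer, and the ideal-code-length rate estimate are all assumptions made here for clarity.

```python
import numpy as np

def laplace_mle(x):
    """MLE for Laplace(mu, b): mu is the median, b the mean absolute deviation."""
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

def laplace_bin_prob(edges, mu, b):
    """Probability mass of each quantization bin under Laplace(mu, b)."""
    def cdf(t):
        z = (t - mu) / b
        return np.where(z < 0, 0.5 * np.exp(z), 1.0 - 0.5 * np.exp(-z))
    return cdf(edges[1:]) - cdf(edges[:-1])

def estimated_rate_bits(x, step):
    """Ideal entropy-coding cost (bits) for x under a fitted Laplace model,
    after uniform quantization with the given step size."""
    mu, b = laplace_mle(x)
    q = np.round((x - mu) / step).astype(int)     # uniform quantization indices
    lo, hi = q.min(), q.max()
    edges = mu + (np.arange(lo, hi + 2) - 0.5) * step
    p = np.clip(laplace_bin_prob(edges, mu, b), 1e-12, None)
    # ideal code length: -log2 p(bin) per coded symbol
    return float(np.sum(-np.log2(p[q - lo])))
```

For synthetic Laplace-distributed data, the estimated per-symbol rate falls far below the 32 bits/value of raw float storage, which is the mechanism behind the reported rate reduction (the actual gains depend on the real attribute statistics and coder).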
Related papers
- Joint Semantic and Rendering Enhancements in 3D Gaussian Modeling with Anisotropic Local Encoding [86.55824709875598]
We propose a joint enhancement framework for 3D semantic Gaussian modeling that synergizes both semantic and rendering branches. Unlike conventional point cloud shape encoding, we introduce an anisotropic 3D Gaussian Chebyshev descriptor to capture fine-grained 3D shape details. We employ a cross-scene knowledge transfer module to continuously update learned shape patterns, enabling faster convergence and robust representations.
arXiv Detail & Related papers (2026-01-05T18:33:50Z) - Quantifying and Alleviating Co-Adaptation in Sparse-View 3D Gaussian Splatting [39.014517076251934]
3D Gaussian Splatting (3DGS) has demonstrated impressive performance in novel view synthesis under dense-view settings. In sparse-view scenarios, despite the realistic renderings in training views, 3DGS occasionally manifests appearance artifacts in novel views. This paper investigates the appearance artifacts in sparse-view 3DGS and uncovers a core limitation of current approaches.
arXiv Detail & Related papers (2025-08-18T08:34:49Z) - TC-GS: Tri-plane based compression for 3D Gaussian Splatting [28.502636841299356]
3D Gaussian Splatting (3DGS) has emerged as a prominent framework for novel view synthesis, providing high fidelity and rapid rendering speed. We propose a well-structured tri-plane to encode Gaussian attributes, leveraging the distribution of attributes for compression. Our approach achieves results comparable to or surpassing SOTA 3D Gaussian Splatting compression work in extensive experiments across multiple datasets.
arXiv Detail & Related papers (2025-03-26T04:26:22Z) - DiffGS: Functional Gaussian Splatting Diffusion [33.07847512591061]
3D Gaussian Splatting (3DGS) has shown convincing performance in rendering speed and fidelity.
However, the generation of Gaussian Splatting remains a challenge due to its discreteness and unstructured nature.
We propose DiffGS, a general Gaussian generator based on latent diffusion models.
arXiv Detail & Related papers (2024-10-25T16:08:08Z) - PixelGaussian: Generalizable 3D Gaussian Reconstruction from Arbitrary Views [116.10577967146762]
PixelGaussian is an efficient framework for learning generalizable 3D Gaussian reconstruction from arbitrary views.
Our method achieves state-of-the-art performance with good generalization to various numbers of views.
arXiv Detail & Related papers (2024-10-24T17:59:58Z) - MesonGS: Post-training Compression of 3D Gaussians via Efficient Attribute Transformation [16.68306233403755]
3D Gaussian Splatting demonstrates excellent quality and speed in novel view synthesis.
The huge file size of the 3D Gaussians presents challenges for transmission and storage.
MesonGS significantly reduces the size of 3D Gaussians while preserving competitive quality.
arXiv Detail & Related papers (2024-09-15T14:58:20Z) - ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [112.40071212468843]
3D Gaussian Splatting (3DGS) has become the de facto method of 3D representation in many vision tasks. We build a large-scale dataset of 3DGS using the commonly used ShapeNet, ModelNet and Objaverse. We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
arXiv Detail & Related papers (2024-08-20T14:49:14Z) - HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression [55.6351304553003]
3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis.
We propose a Hash-grid Assisted Context (HAC) framework for highly compact 3DGS representation.
Our work is the pioneer to explore context-based compression for 3DGS representation, resulting in a remarkable size reduction of over $75\times$ compared to vanilla 3DGS.
arXiv Detail & Related papers (2024-03-21T16:28:58Z) - Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
arXiv Detail & Related papers (2024-02-24T17:22:15Z) - GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.