ExGS: Extreme 3D Gaussian Compression with Diffusion Priors
- URL: http://arxiv.org/abs/2509.24758v4
- Date: Tue, 07 Oct 2025 03:45:59 GMT
- Title: ExGS: Extreme 3D Gaussian Compression with Diffusion Priors
- Authors: Jiaqi Chen, Xinhao Ji, Yuanyuan Gao, Hao Li, Yuning Gong, Yifei Liu, Dan Xu, Zhihang Zhong, Dingwen Zhang, Xiao Sun,
- Abstract summary: We introduce ExGS and GaussPainter for extreme 3DGS compression. GaussPainter fills in missing regions and enhances visible pixels, yielding substantial improvements in degraded renderings. Our framework can even achieve over 100X compression (reducing a typical 354.77 MB model to about 3.31 MB).
- Score: 60.7245825868903
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural scene representations, such as 3D Gaussian Splatting (3DGS), have enabled high-quality neural rendering; however, their large storage and transmission costs hinder deployment in resource-constrained environments. Existing compression methods either rely on costly optimization, which is slow and scene-specific, or adopt training-free pruning and quantization, which degrade rendering quality under high compression ratios. In contrast, recent data-driven approaches provide a promising direction to overcome this trade-off, enabling efficient compression while preserving high rendering quality. We introduce ExGS, a novel feed-forward framework that unifies Universal Gaussian Compression (UGC) with GaussPainter for Extreme 3DGS compression. UGC performs re-optimization-free pruning to aggressively reduce Gaussian primitives while retaining only essential information, whereas GaussPainter leverages powerful diffusion priors with mask-guided refinement to restore high-quality renderings from heavily pruned Gaussian scenes. Unlike conventional inpainting, GaussPainter not only fills in missing regions but also enhances visible pixels, yielding substantial improvements in degraded renderings. To ensure practicality, it adopts a lightweight VAE and a one-step diffusion design, enabling real-time restoration. Our framework can even achieve over 100X compression (reducing a typical 354.77 MB model to about 3.31 MB) while preserving fidelity and significantly improving image quality under challenging conditions. These results highlight the central role of diffusion priors in bridging the gap between extreme compression and high-quality neural rendering. Our code repository will be released at: https://github.com/chenttt2001/ExGS
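The abstract describes a two-stage pipeline: re-optimization-free pruning (UGC) followed by diffusion-based restoration. The pruning stage can be sketched as ranking primitives by an importance score and keeping a small fraction. The score below (opacity times the largest axis scale) and the keep ratio are illustrative assumptions, not the paper's actual criterion; the ratio arithmetic reproduces the abstract's headline numbers.

```python
import numpy as np

def prune_gaussians(opacity, scales, keep_ratio=0.1):
    """Keep the `keep_ratio` fraction of Gaussians with the highest importance.

    opacity: (N,) per-Gaussian opacities in [0, 1]
    scales:  (N, 3) per-Gaussian axis scales
    Returns a boolean mask over the N primitives.
    """
    # Illustrative importance heuristic: opaque, large Gaussians matter most.
    importance = opacity * scales.max(axis=1)
    n_keep = max(1, int(len(opacity) * keep_ratio))
    keep_idx = np.argsort(importance)[-n_keep:]
    mask = np.zeros(len(opacity), dtype=bool)
    mask[keep_idx] = True
    return mask

rng = np.random.default_rng(0)
n = 10_000
opacity = rng.random(n)
scales = rng.random((n, 3))
mask = prune_gaussians(opacity, scales, keep_ratio=0.1)
print(mask.sum())  # 1000 primitives retained out of 10,000

# The abstract's numbers (354.77 MB -> 3.31 MB) imply a compression
# ratio of roughly 107x, consistent with the "over 100X" claim:
ratio = 354.77 / 3.31
print(round(ratio, 1))  # 107.2
```

In ExGS the pruned, degraded rendering is then passed to GaussPainter for mask-guided diffusion restoration; the sketch above covers only the pruning side.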
Related papers
- CSGaussian: Progressive Rate-Distortion Compression and Segmentation for 3D Gaussian Splatting [57.73006852239138]
We present the first unified framework for rate-distortion-optimized compression and segmentation of 3D Gaussian Splatting (3DGS). Inspired by recent advances in rate-distortion-optimized 3DGS compression, this work integrates semantic learning into the compression pipeline to support decoder-side applications. Our scheme features a lightweight implicit neural representation-based hyperprior, enabling efficient entropy coding of both color and semantic attributes.
arXiv Detail & Related papers (2026-01-19T08:21:45Z)
- Leveraging Learned Image Prior for 3D Gaussian Compression [47.29061692878941]
We introduce a novel 3DGS compression framework that leverages the powerful representational capacity of learned image priors to recover compression-induced quality degradation. Our framework is designed to be compatible with existing Gaussian compression methods, making it broadly applicable across different baselines.
arXiv Detail & Related papers (2025-10-16T14:10:02Z)
- SODiff: Semantic-Oriented Diffusion Model for JPEG Compression Artifacts Removal [50.90827365790281]
SODiff is a semantic-oriented one-step diffusion model for JPEG artifacts removal. Our core idea is that effective restoration hinges on providing semantic-oriented guidance to the pre-trained diffusion model. SAIPE extracts rich features from low-quality (LQ) images and projects them into an embedding space semantically aligned with that of the text encoder.
arXiv Detail & Related papers (2025-08-10T13:48:07Z)
- SA-3DGS: A Self-Adaptive Compression Method for 3D Gaussian Splatting [7.2885462122720455]
Recent advancements in 3D Gaussian Splatting have enabled efficient, high-quality novel view synthesis. However, representing scenes requires a large number of Gaussian points, leading to high storage demands and limiting practical deployment. We propose SA-3DGS, a method that significantly reduces storage costs while maintaining rendering quality.
arXiv Detail & Related papers (2025-08-05T02:55:47Z)
- FlexGaussian: Flexible and Cost-Effective Training-Free Compression for 3D Gaussian Splatting [15.08192728318416]
3D Gaussian splatting has become a prominent technique for representing and rendering complex 3D scenes. Existing compression methods effectively reduce 3D Gaussian parameters but often require extensive retraining or fine-tuning. We introduce FlexGaussian, a flexible and cost-effective method that combines mixed-precision quantization with attribute-discriminative pruning.
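Training-free mixed-precision quantization of the kind FlexGaussian describes can be sketched as follows: sensitive attributes (e.g. positions) stay in float32, while less sensitive ones (e.g. spherical-harmonic color coefficients) drop to uint8 via per-channel min-max scaling. The attribute split and bit widths here are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def quantize_uint8(x):
    """Per-channel min-max quantization of an (N, C) float array to uint8."""
    lo = x.min(axis=0, keepdims=True)
    hi = x.max(axis=0, keepdims=True)
    scale = np.where(hi > lo, hi - lo, 1.0)  # guard against constant channels
    q = np.round((x - lo) / scale * 255.0).astype(np.uint8)
    return q, lo, scale

def dequantize_uint8(q, lo, scale):
    """Invert quantize_uint8 up to rounding error."""
    return q.astype(np.float32) / 255.0 * scale + lo

rng = np.random.default_rng(1)
# Hypothetical SH color coefficients for 5,000 Gaussians (48 channels each).
colors = rng.normal(size=(5000, 48)).astype(np.float32)
q, lo, scale = quantize_uint8(colors)
recon = dequantize_uint8(q, lo, scale)

# uint8 storage is 4x smaller than float32 for the quantized attributes,
# and the reconstruction error is bounded by half the per-channel step size.
max_err = np.abs(recon - colors).max()
print(q.nbytes, colors.nbytes)  # 240000 960000
```

Pairing such quantization with importance-based pruning is what lets training-free methods trade a small, bounded reconstruction error for a large storage reduction without any retraining.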
arXiv Detail & Related papers (2025-07-09T09:00:52Z)
- Optimized Minimal 3D Gaussian Splatting [36.53860885419386]
3D Gaussian Splatting (3DGS) has emerged as a powerful representation for real-time, high-performance rendering. However, representing 3D scenes with numerous explicit Gaussian primitives imposes significant storage and memory overhead. We propose a compact and precise attribute representation that efficiently captures both continuity and irregularity among primitives.
arXiv Detail & Related papers (2025-03-21T07:41:45Z)
- Fast Feedforward 3D Gaussian Splatting Compression [55.149325473447384]
FCGS is an optimization-free model that can compress 3DGS representations rapidly in a single feed-forward pass. FCGS achieves a compression ratio of over 20X while maintaining fidelity, surpassing most per-scene SOTA optimization-based methods.
arXiv Detail & Related papers (2024-10-10T15:13:08Z)
- PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z)
- HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression [55.6351304553003]
3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis.
We propose a Hash-grid Assisted Context (HAC) framework for highly compact 3DGS representation.
Our work pioneers context-based compression for 3DGS representations, achieving a remarkable size reduction of over $75\times$ compared to vanilla 3DGS.
arXiv Detail & Related papers (2024-03-21T16:28:58Z)
- LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS [55.85673901231235]
We introduce LightGaussian, a method for transforming 3D Gaussians into a more compact format.
Inspired by Network Pruning, LightGaussian identifies Gaussians with minimal global significance on scene reconstruction.
LightGaussian achieves an average 15x compression rate while boosting FPS from 144 to 237 within the 3D-GS framework.
arXiv Detail & Related papers (2023-11-28T21:39:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.