GaussianPOP: Principled Simplification Framework for Compact 3D Gaussian Splatting via Error Quantification
- URL: http://arxiv.org/abs/2602.06830v1
- Date: Fri, 06 Feb 2026 16:17:41 GMT
- Title: GaussianPOP: Principled Simplification Framework for Compact 3D Gaussian Splatting via Error Quantification
- Authors: Soonbin Lee, Yeong-Gyu Kim, Simon Sasse, Tomas M. Borges, Yago Sanchez, Eun-Seok Ryu, Thomas Schierl, Cornelius Hellge
- Abstract summary: GaussianPOP is a principled simplification framework based on analytical Gaussian error quantification. By introducing a highly efficient algorithm, our framework enables practical error calculation in a single forward pass. We show that our method consistently outperforms existing state-of-the-art pruning methods across both application scenarios.
- Score: 5.355982355439107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing 3D Gaussian Splatting simplification methods commonly use importance scores, such as blending weights or sensitivity, to identify redundant Gaussians. However, these scores are not driven by visual error metrics, often leading to suboptimal trade-offs between compactness and rendering fidelity. We present GaussianPOP, a principled simplification framework based on analytical Gaussian error quantification. Our key contribution is a novel error criterion, derived directly from the 3DGS rendering equation, that precisely measures each Gaussian's contribution to the rendered image. By introducing a highly efficient algorithm, our framework enables practical error calculation in a single forward pass. The framework is both accurate and flexible, supporting on-training pruning as well as post-training simplification via iterative error re-quantification for improved stability. Experimental results show that our method consistently outperforms existing state-of-the-art pruning methods across both application scenarios, achieving a superior trade-off between model compactness and high rendering quality.
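The abstract does not reproduce the error criterion itself, but the idea of scoring each Gaussian by its contribution to the rendered image can be illustrated with a brute-force, leave-one-out sketch of front-to-back alpha compositing along a single ray. All names below are illustrative, and the O(N^2) re-rendering loop is exactly what the paper's analytical single-pass formulation avoids:

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing from the 3DGS rendering equation:
    C = sum_i T_i * alpha_i * c_i, with transmittance
    T_i = prod_{j<i} (1 - alpha_j)."""
    T = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return (T[:, None] * alphas[:, None] * colors).sum(axis=0)

def leave_one_out_errors(colors, alphas):
    """L1 pixel error incurred by removing each Gaussian in turn.
    Quadratic-cost reference implementation, for illustration only."""
    full = composite(colors, alphas)
    errs = np.empty(len(alphas))
    for k in range(len(alphas)):
        keep = np.arange(len(alphas)) != k
        errs[k] = np.abs(full - composite(colors[keep], alphas[keep])).sum()
    return errs

# Three Gaussians sorted front to back along one ray
colors = np.array([[1.0, 0.0, 0.0],   # front, fairly opaque
                   [0.0, 1.0, 0.0],   # fully transparent: pruning it is free
                   [0.0, 0.0, 1.0]])  # back, partially occluded
alphas = np.array([0.5, 0.0, 0.3])
errs = leave_one_out_errors(colors, alphas)
```

A pruning pass would then drop the lowest-error Gaussians first; the framework's contribution is obtaining such scores analytically in one forward pass rather than by re-rendering once per candidate.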
Related papers
- Prune Wisely, Reconstruct Sharply: Compact 3D Gaussian Splatting via Adaptive Pruning and Difference-of-Gaussian Primitives [14.295266671241004]
3D Gaussian Splatting (3DGS) has enabled real-time rendering with photorealistic quality. However, 3DGS often requires a large number of primitives to achieve high fidelity. We propose an efficient, integrated reconstruction-aware pruning strategy that determines pruning timing and refining intervals. We also introduce a 3D Difference-of-Gaussians primitive that jointly models both positive and negative densities in a single primitive.
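The Difference-of-Gaussians primitive mentioned above can be pictured as one centre carrying both a positive and a negative lobe. A minimal 1D sketch (parameter names and defaults are hypothetical, not taken from the paper):

```python
import numpy as np

def dog_density(x, mu=0.0, w_pos=1.0, sigma_pos=1.0, w_neg=0.4, sigma_neg=0.3):
    """1D difference-of-Gaussians: a broad positive lobe minus a narrow
    negative lobe sharing one centre, stored as a single primitive."""
    g = lambda s: np.exp(-0.5 * ((x - mu) / s) ** 2)
    return w_pos * g(sigma_pos) - w_neg * g(sigma_neg)
```

Subtracting the narrow lobe carves detail out of the broad one, which is why a single signed primitive can represent structure that would otherwise require several purely positive Gaussians.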
arXiv Detail & Related papers (2026-02-27T16:12:58Z) - UGOD: Uncertainty-Guided Differentiable Opacity and Soft Dropout for Enhanced Sparse-View 3DGS [8.78995910690481]
3D Gaussian Splatting (3DGS) has become a competitive approach for novel view synthesis (NVS). We investigate how adaptive weighting of Gaussians, characterised by learned uncertainties, affects rendering quality. Our method achieves 3.27% PSNR improvements on the MipNeRF 360 dataset.
arXiv Detail & Related papers (2025-08-07T01:42:22Z) - Metropolis-Hastings Sampling for 3D Gaussian Reconstruction [31.840492077537018]
We propose an adaptive sampling framework for 3D Gaussian Splatting (3DGS). Our framework overcomes the limitations of heuristic schemes by reformulating densification and pruning as a probabilistic sampling process. Our approach achieves faster convergence while matching or modestly surpassing the view-synthesis quality of state-of-the-art models.
arXiv Detail & Related papers (2025-06-15T19:12:37Z) - Micro-splatting: Multistage Isotropy-informed Covariance Regularization Optimization for High-Fidelity 3D Gaussian Splatting [1.5582756275568836]
Micro-Splatting is a unified, in-training pipeline that preserves visual detail while drastically reducing model complexity. On four object-centric benchmarks, Micro-Splatting reduces splat count and model size by up to 60% and shortens training by 20%. Results demonstrate that Micro-Splatting delivers both compactness and high fidelity in a single, efficient, end-to-end framework.
arXiv Detail & Related papers (2025-04-08T07:15:58Z) - ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes [81.48624894781257]
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required. Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency. We propose ProtoGS to learn Gaussian prototypes to represent Gaussian primitives, significantly reducing the total number of Gaussians without sacrificing visual quality.
arXiv Detail & Related papers (2025-03-21T18:55:14Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view-synthesis capabilities of 3DGS. PF3plat sets a new state of the art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - MCGS: Multiview Consistency Enhancement for Sparse-View 3D Gaussian Radiance Fields [100.90743697473232]
Radiance fields represented by 3D Gaussians excel at synthesizing novel views, offering both high training efficiency and fast rendering. Existing methods often incorporate depth priors from dense estimation networks but overlook the inherent multi-view consistency in input images. We propose a view synthesis framework based on 3D Gaussian Splatting, enabling scene reconstruction from sparse views.
arXiv Detail & Related papers (2024-10-15T08:39:05Z) - CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting [68.94594215660473]
We propose an efficient 3D scene representation, named Compressed Gaussian Splatting (CompGS).
We exploit a small set of anchor primitives for prediction, allowing the majority of primitives to be encapsulated into highly compact residual forms.
Experimental results show that the proposed CompGS significantly outperforms existing methods, achieving superior compactness in 3D scene representation without compromising model accuracy or rendering quality.
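The anchor-plus-residual idea can be sketched in a few lines: attributes are stored as small offsets from a nearby anchor, which are cheap to quantize. This toy illustration covers positions only and uses fixed quantization, whereas CompGS's actual residual prediction is learned and applies to all primitive attributes:

```python
import numpy as np

# Two anchor primitives and two densified primitives near them
anchors = np.array([[0.0, 0.0, 0.0],
                    [10.0, 10.0, 10.0]])
points = np.array([[0.1, -0.2, 0.05],
                   [9.8, 10.1, 10.0]])

# Assign each primitive to its nearest anchor
idx = ((points[:, None] - anchors[None]) ** 2).sum(-1).argmin(1)

# Store only small residuals; coarse uniform quantization (step 0.05)
# keeps them compact while reconstruction stays accurate
residuals = np.round((points - anchors[idx]) / 0.05) * 0.05
reconstructed = anchors[idx] + residuals
```

Because the residuals are small and concentrated near zero, they compress far better than the absolute coordinates would.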
arXiv Detail & Related papers (2024-04-15T04:50:39Z) - StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering [42.91830228828405]
We present a novel hierarchical rasterization approach that culls splats with minimal processing overhead.
Our approach is only 4% slower on average than the original Gaussian Splatting.
Moreover, rendering performance can be nearly doubled, making our approach 1.6x faster than the original Gaussian Splatting.
arXiv Detail & Related papers (2024-02-01T11:46:44Z) - GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting [51.96353586773191]
We introduce GS-SLAM, which is the first to utilize a 3D Gaussian representation in a Simultaneous Localization and Mapping (SLAM) system.
Our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering.
Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica and TUM-RGBD datasets.
arXiv Detail & Related papers (2023-11-20T12:08:23Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.