ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes
- URL: http://arxiv.org/abs/2503.17486v3
- Date: Tue, 08 Apr 2025 12:19:01 GMT
- Title: ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes
- Authors: Zhengqing Gao, Dongting Hu, Jia-Wang Bian, Huan Fu, Yan Li, Tongliang Liu, Mingming Gong, Kun Zhang,
- Abstract summary: 3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required. Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency. We propose ProtoGS to learn Gaussian prototypes to represent Gaussian primitives, significantly reducing the total Gaussian amount without sacrificing visual quality.
- Score: 81.48624894781257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required, posing challenges for deployment on lightweight devices. Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency. To overcome these limitations, we propose ProtoGS to learn Gaussian prototypes to represent Gaussian primitives, significantly reducing the total Gaussian amount without sacrificing visual quality. Our method directly uses Gaussian prototypes to enable efficient rendering and leverages the resulting reconstruction loss to guide prototype learning. To further optimize memory efficiency during training, we incorporate structure-from-motion (SfM) points as anchor points to group Gaussian primitives. Gaussian prototypes are derived within each group by K-means clustering, and both the anchor points and the prototypes are optimized jointly. Our experiments on real-world and synthetic datasets prove that we outperform existing methods, achieving a substantial reduction in the number of Gaussians, and enabling high rendering speed while maintaining or even enhancing rendering fidelity.
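The abstract describes two steps: assigning Gaussian primitives to groups via nearest SfM anchor points, then running K-means within each group to obtain prototypes. A minimal sketch of that procedure is below; the function name, array shapes, and hyperparameters are illustrative assumptions, not the paper's code, and only the Gaussian centers (not the full attribute vectors) are clustered here for brevity.

```python
import numpy as np

def derive_prototypes(gaussians, anchors, k=4, iters=10, seed=0):
    """Group Gaussian centers by nearest SfM anchor, then run
    plain Lloyd's K-means within each group to get prototypes.

    gaussians: (N, 3) Gaussian center positions
    anchors:   (M, 3) SfM anchor points
    Returns an (<= M*k, 3) array of prototype centers.
    """
    rng = np.random.default_rng(seed)
    # Assign each Gaussian to its nearest anchor (Euclidean distance).
    dists = np.linalg.norm(gaussians[:, None, :] - anchors[None, :, :], axis=-1)
    group = dists.argmin(axis=1)
    prototypes = []
    for a in range(len(anchors)):
        pts = gaussians[group == a]
        if len(pts) == 0:
            continue  # anchor attracted no Gaussians
        kk = min(k, len(pts))
        # Initialize cluster centers from random group members.
        centers = pts[rng.choice(len(pts), kk, replace=False)]
        for _ in range(iters):
            assign = np.linalg.norm(
                pts[:, None, :] - centers[None, :, :], axis=-1).argmin(axis=1)
            for c in range(kk):
                members = pts[assign == c]
                if len(members):
                    centers[c] = members.mean(axis=0)
        prototypes.append(centers)
    return np.concatenate(prototypes, axis=0)
```

In the actual method the anchors and prototypes are then optimized jointly against the reconstruction loss; the clustering above would only supply the initialization and grouping structure.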
Related papers
- Second-order Optimization of Gaussian Splats with Importance Sampling [51.95046424364725]
3D Gaussian Splatting (3DGS) is widely used for novel view rendering due to its high quality and fast inference time.
We propose a novel second-order optimization strategy based on Levenberg-Marquardt (LM) and Conjugate Gradient (CG)
Our method achieves a $3\times$ speedup over standard LM and outperforms Adam by $6\times$ when the Gaussian count is low.
arXiv Detail & Related papers (2025-04-17T12:52:08Z) - Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction [6.273357335397336]
We propose Generative Densification, an efficient and generalizable method to densify Gaussians generated by feed-forward models.
We show that our method outperforms state-of-the-art approaches with comparable or smaller model sizes.
arXiv Detail & Related papers (2024-12-09T06:20:51Z) - Efficient Density Control for 3D Gaussian Splatting [3.6379656024631215]
3D Gaussian Splatting (3DGS) has demonstrated outstanding performance in novel view synthesis. We propose two key innovations: (1) Long-Axis Split, which precisely controls the position, shape, and opacity of child Gaussians; and (2) Recovery-Aware Pruning, which leverages differences in recovery speed after resetting opacity to prune overfitted Gaussians.
arXiv Detail & Related papers (2024-11-15T12:12:56Z) - Structure Consistent Gaussian Splatting with Matching Prior for Few-shot Novel View Synthesis [28.3325478008559]
We propose SCGaussian, a Structure Consistent Gaussian Splatting method using matching priors to learn 3D consistent scene structure.
We optimize the scene structure in two respects: the rendered geometry and, more importantly, the positions of the Gaussian primitives.
Experiments on forward-facing, surrounding, and complex large scenes show the effectiveness of our approach with state-of-the-art performance and high efficiency.
arXiv Detail & Related papers (2024-11-06T03:28:06Z) - Mipmap-GS: Let Gaussians Deform with Scale-specific Mipmap for Anti-aliasing Rendering [81.88246351984908]
We propose a unified optimization method to make Gaussians adaptive for arbitrary scales.
Inspired by the mipmap technique, we design pseudo ground-truth for the target scale and propose a scale-consistency guidance loss to inject scale information into 3D Gaussians.
Our method outperforms 3DGS in PSNR by an average of 9.25 dB for zoom-in and 10.40 dB for zoom-out.
arXiv Detail & Related papers (2024-08-12T16:49:22Z) - RaDe-GS: Rasterizing Depth in Gaussian Splatting [32.38730602146176]
Gaussian Splatting (GS) has proven to be highly effective in novel view synthesis, achieving high-quality and real-time rendering.
Our work achieves a Chamfer distance error comparable to NeuraLangelo on the DTU dataset and maintains computational efficiency similar to the original 3DGS methods.
arXiv Detail & Related papers (2024-06-03T15:56:58Z) - CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting [68.94594215660473]
We propose an efficient 3D scene representation, named Compressed Gaussian Splatting (CompGS)
We exploit a small set of anchor primitives for prediction, allowing the majority of primitives to be encapsulated into highly compact residual forms.
Experimental results show that the proposed CompGS significantly outperforms existing methods, achieving superior compactness in 3D scene representation without compromising model accuracy and rendering quality.
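The CompGS summary above describes predicting most primitives from a small set of anchor primitives and storing only compact residuals. The toy sketch below illustrates the general anchor-plus-residual idea; the function names, the nearest-anchor predictor, and the fixed-point quantizer are illustrative assumptions, not CompGS's actual encoding.

```python
import numpy as np

def encode_residuals(primitives, anchors):
    """Encode each primitive as (nearest-anchor index, quantized residual).

    Residuals relative to a nearby anchor have small magnitude,
    so they compress better than raw attribute values.
    """
    dists = np.linalg.norm(primitives[:, None, :] - anchors[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)
    residual = primitives - anchors[idx]
    # Toy fixed-point quantizer: 1/256 step, stored as int16.
    q = np.round(residual * 256).astype(np.int16)
    return idx, q

def decode_residuals(idx, q, anchors):
    """Reconstruct primitives from anchor predictions plus residuals."""
    return anchors[idx] + q.astype(np.float32) / 256.0
```

The round-trip error of this sketch is bounded by half the quantization step (1/512 per coordinate); a real codec would additionally entropy-code the residuals and predict more attributes than position.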
arXiv Detail & Related papers (2024-04-15T04:50:39Z) - GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling [55.05713977022407]
We introduce a radiance representation that is both structured and fully explicit and thus greatly facilitates 3D generative modeling.
We derive GaussianCube by first using a novel densification-constrained Gaussian fitting algorithm, which yields high-accuracy fitting.
Experiments conducted on unconditional and class-conditioned object generation, digital avatar creation, and text-to-3D all show that our model achieves state-of-the-art generation results.
arXiv Detail & Related papers (2024-03-28T17:59:50Z) - GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the dataset, exhibiting an improvement of 1.15 dB in terms of PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z) - GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs the Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.