Distilled-3DGS: Distilled 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2508.14037v1
- Date: Tue, 19 Aug 2025 17:59:26 GMT
- Title: Distilled-3DGS: Distilled 3D Gaussian Splatting
- Authors: Lintao Xiang, Xinkai Chen, Jianhuang Lai, Guangcong Wang
- Abstract summary: We propose the first knowledge distillation framework for 3DGS. It features various teacher models, including vanilla 3DGS, noise-augmented variants, and dropout-regularized versions. It achieves promising results in both rendering quality and storage efficiency compared to state-of-the-art methods.
- Score: 49.098181805161275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has exhibited remarkable efficacy in novel view synthesis (NVS). However, it suffers from a significant drawback: achieving high-fidelity rendering typically necessitates a large number of 3D Gaussians, resulting in substantial memory consumption and storage requirements. To address this challenge, we propose the first knowledge distillation framework for 3DGS, featuring various teacher models, including vanilla 3DGS, noise-augmented variants, and dropout-regularized versions. The outputs of these teachers are aggregated to guide the optimization of a lightweight student model. To distill the hidden geometric structure, we propose a structural similarity loss to boost the consistency of spatial geometric distributions between the student and teacher models. Through comprehensive quantitative and qualitative evaluations across diverse datasets, the proposed Distilled-3DGS, a simple yet effective framework without bells and whistles, achieves promising results in both rendering quality and storage efficiency compared to state-of-the-art methods. Project page: https://distilled3dgs.github.io . Code: https://github.com/lt-xiang/Distilled-3DGS .
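The recipe in the abstract (aggregating renders from an ensemble of teachers to supervise a lightweight student, plus a structural loss that aligns the spatial distributions of student and teacher Gaussians) can be sketched roughly as follows. This is an illustrative assumption, not the paper's implementation: the function name, the plain L2 photometric term, and the histogram-based structural stand-in are all hypothetical choices for the sketch.

```python
import numpy as np

def distillation_loss(student_img, teacher_imgs, student_pts, teacher_pts,
                      w_struct=0.1):
    """Illustrative distillation objective (not the paper's exact losses).

    student_img  : (H, W, 3) render from the lightweight student model
    teacher_imgs : list of (H, W, 3) renders from the teacher ensemble
                   (e.g. vanilla / noise-augmented / dropout-regularized 3DGS)
    student_pts  : (N, 3) student Gaussian centers
    teacher_pts  : (M, 3) teacher Gaussian centers
    """
    # Aggregate the teacher ensemble by simple averaging of their renders.
    target = np.mean(np.stack(teacher_imgs), axis=0)

    # Photometric term: student render should match the aggregated target.
    photo = np.mean((student_img - target) ** 2)

    # Structural term (a crude stand-in for the paper's structural
    # similarity loss): compare normalized 3D histograms of the two
    # spatial point distributions over a shared bounding box.
    bins = 16
    lo = np.minimum(student_pts.min(axis=0), teacher_pts.min(axis=0))
    hi = np.maximum(student_pts.max(axis=0), teacher_pts.max(axis=0))
    extent = list(zip(lo, hi))
    hs, _ = np.histogramdd(student_pts, bins=bins, range=extent, density=True)
    ht, _ = np.histogramdd(teacher_pts, bins=bins, range=extent, density=True)
    struct = np.mean((hs - ht) ** 2)

    return photo + w_struct * struct
```

In this sketch the loss vanishes exactly when the student render equals the teacher average and the two point sets occupy the bins identically; in practice the student is optimized against this objective with far fewer Gaussians than the teachers.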
Related papers
- FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting [57.97160965244424]
3D Gaussian splatting (3DGS) has enabled various applications in 3D scene representation and novel view synthesis. Previous approaches have focused on pruning less important Gaussians, effectively compressing 3DGS. We present an elastic inference method for 3DGS, achieving substantial rendering performance without additional fine-tuning.
arXiv Detail & Related papers (2025-06-04T17:17:57Z)
- 3D Student Splatting and Scooping [10.096129909852795]
3D Gaussian Splatting (3DGS) provides a new framework for novel view synthesis, and has sparked a new wave of research in neural rendering and related applications. We propose a new mixture model consisting of flexible Student's t distributions, with both positive (splatting) and negative (scooping) densities. While providing better expressivity, SSS also poses new challenges in learning.
arXiv Detail & Related papers (2025-03-13T08:20:54Z)
- Does 3D Gaussian Splatting Need Accurate Volumetric Rendering? [8.421214057144569]
3D Gaussian Splatting (3DGS) is an important reference method for learning 3D representations of a captured scene. NeRFs, which preceded 3DGS, are based on a principled ray-marching approach for rendering. We present an in-depth analysis of the various approximations and assumptions used by the original 3DGS solution.
arXiv Detail & Related papers (2025-02-26T17:11:26Z)
- CLIP-GS: Unifying Vision-Language Representation with 3D Gaussian Splatting [88.24743308058441]
We present CLIP-GS, a novel multimodal representation learning framework grounded in 3DGS. We develop an efficient way to generate triplets of 3DGS, images, and text, facilitating CLIP-GS in learning unified multimodal representations.
arXiv Detail & Related papers (2024-12-26T09:54:25Z)
- ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery [11.706262924395768]
We introduce a novel densification operation, residual split, which adds a downscaled Gaussian as a residual. Our approach is capable of adaptively retrieving details and complementing missing geometry.
arXiv Detail & Related papers (2024-12-10T13:19:27Z)
- A Lesson in Splats: Teacher-Guided Diffusion for 3D Gaussian Splats Generation with 2D Supervision [65.33043028101471]
We present a novel framework for training 3D image-conditioned diffusion models using only 2D supervision. Most existing 3D generative models rely on full 3D supervision, which is impractical due to the scarcity of large-scale 3D datasets.
arXiv Detail & Related papers (2024-12-01T00:29:57Z)
- Connecting Consistency Distillation to Score Distillation for Text-to-3D Generation [32.52588154649761]
We analyze current score distillation methods by connecting theories of consistency distillation to score distillation.
We propose an optimization framework, Guided Consistency Sampling (GCS), integrated with 3D Gaussian Splatting (3DGS) to alleviate those issues.
We introduce a Brightness-Equalized Generation (BEG) scheme in 3DGS rendering to mitigate this issue.
arXiv Detail & Related papers (2024-07-18T15:25:41Z)
- LP-3DGS: Learning to Prune 3D Gaussian Splatting [71.97762528812187]
We propose learning-to-prune 3DGS, where a trainable binary mask is applied to the importance score that can find optimal pruning ratio automatically.
Experiments have shown that LP-3DGS consistently produces a good balance that is both efficient and high quality.
arXiv Detail & Related papers (2024-05-29T05:58:34Z)
- SAGS: Structure-Aware 3D Gaussian Splatting [53.6730827668389]
We propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene.
SAGS achieves state-of-the-art rendering performance and reduced storage requirements on benchmark novel-view synthesis datasets.
arXiv Detail & Related papers (2024-04-29T23:26:30Z)
- SAGD: Boundary-Enhanced Segment Anything in 3D Gaussian via Gaussian Decomposition [66.56357905500512]
3D Gaussian Splatting has emerged as an alternative 3D representation for novel view synthesis. We propose SAGD, a conceptually simple yet effective boundary-enhanced segmentation pipeline for 3D-GS. Our approach achieves high-quality 3D segmentation without rough boundary issues, which can be easily applied to other scene editing tasks.
arXiv Detail & Related papers (2024-01-31T14:19:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.