FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2506.04174v1
- Date: Wed, 04 Jun 2025 17:17:57 GMT
- Title: FlexGS: Train Once, Deploy Everywhere with Many-in-One Flexible 3D Gaussian Splatting
- Authors: Hengyu Liu, Yuehao Wang, Chenxin Li, Ruisi Cai, Kevin Wang, Wuyang Li, Pavlo Molchanov, Peihao Wang, Zhangyang Wang,
- Abstract summary: 3D Gaussian splatting (3DGS) has enabled various applications in 3D scene representation and novel view synthesis. Previous approaches have focused on pruning less important Gaussians, effectively compressing 3DGS. We present an elastic inference method for 3DGS, achieving substantial rendering performance without additional fine-tuning.
- Score: 57.97160965244424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian splatting (3DGS) has enabled various applications in 3D scene representation and novel view synthesis due to its efficient rendering capabilities. However, 3DGS demands relatively significant GPU memory, limiting its use on devices with restricted computational resources. Previous approaches have focused on pruning less important Gaussians, effectively compressing 3DGS but often requiring a fine-tuning stage and lacking adaptability for the specific memory needs of different devices. In this work, we present an elastic inference method for 3DGS. Given an input for the desired model size, our method selects and transforms a subset of Gaussians, achieving substantial rendering performance without additional fine-tuning. We introduce a tiny learnable module that controls Gaussian selection based on the input percentage, along with a transformation module that adjusts the selected Gaussians to complement the performance of the reduced model. Comprehensive experiments on ZipNeRF, MipNeRF and Tanks&Temples scenes demonstrate the effectiveness of our approach. Code is available at https://flexgs.github.io.
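To make the "select-then-transform" idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code; the module names, per-Gaussian feature inputs, and the hard top-k selection are assumptions, and the paper's learnable selection mechanism is not reproduced): a tiny scorer conditioned on the requested keep ratio picks a subset of Gaussians, and a second small module predicts residual adjustments for the survivors.

```python
# Hypothetical sketch of ratio-conditioned Gaussian selection and adjustment.
# Everything here is illustrative; it only mirrors the high-level description
# in the abstract, not the actual FlexGS modules.
import torch
import torch.nn as nn


class ElasticGaussianSelector(nn.Module):
    """Scores Gaussians conditioned on the requested keep ratio and returns a keep mask."""

    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        # +1 input channel for the broadcast keep ratio.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, feats: torch.Tensor, ratio: float) -> torch.Tensor:
        # feats: (N, feat_dim) per-Gaussian features (e.g., opacity, scale, view stats).
        r = torch.full((feats.shape[0], 1), ratio, device=feats.device)
        scores = self.scorer(torch.cat([feats, r], dim=-1)).squeeze(-1)  # (N,)
        k = max(1, int(ratio * feats.shape[0]))
        keep_idx = torch.topk(scores, k).indices
        mask = torch.zeros(feats.shape[0], dtype=torch.bool, device=feats.device)
        mask[keep_idx] = True
        return mask


class GaussianAdjuster(nn.Module):
    """Predicts small residual corrections for the Gaussians that survive selection."""

    def __init__(self, feat_dim: int, out_dim: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, feats: torch.Tensor, ratio: float) -> torch.Tensor:
        r = torch.full((feats.shape[0], 1), ratio, device=feats.device)
        return self.net(torch.cat([feats, r], dim=-1))  # e.g., delta-opacity, delta-scale


if __name__ == "__main__":
    N, D = 10_000, 8
    feats = torch.randn(N, D)                # stand-in per-Gaussian features
    selector = ElasticGaussianSelector(D)
    adjuster = GaussianAdjuster(D)
    for ratio in (0.1, 0.25, 0.5):           # one model, several deployment budgets
        mask = selector(feats, ratio)
        deltas = adjuster(feats[mask], ratio)
        print(ratio, int(mask.sum()), deltas.shape)
```

In this sketch the same two small modules serve every deployment budget: changing the ratio only changes how many Gaussians are kept at inference time, which mirrors the "train once, deploy everywhere" framing of the paper.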
Related papers
- Speedy Deformable 3D Gaussian Splatting: Fast Rendering and Compression of Dynamic Scenes [57.69608119350651]
Recent extensions of 3D Gaussian Splatting (3DGS) to dynamic scenes achieve high-quality novel view synthesis by using neural networks to predict the time-varying deformation of each Gaussian. However, performing per-Gaussian neural inference at every frame poses a significant bottleneck, limiting rendering speed and increasing memory and compute requirements. We present Speedy Deformable 3D Gaussian Splatting (SpeeDe3DGS), a general pipeline for accelerating the rendering speed of dynamic 3DGS and 4DGS representations by reducing neural inference through two complementary techniques.
arXiv Detail & Related papers (2025-06-09T16:30:48Z)
- GaussianSpa: An "Optimizing-Sparsifying" Simplification Framework for Compact and High-Quality 3D Gaussian Splatting [12.342660713851227]
3D Gaussian Splatting (3DGS) has emerged as a mainstream approach for novel view synthesis, leveraging continuous aggregations of Gaussian functions. However, 3DGS suffers from substantial memory requirements to store the multitude of Gaussians, hindering its practicality. We introduce GaussianSpa, an optimization-based simplification framework for compact and high-quality 3DGS.
arXiv Detail & Related papers (2024-11-09T00:38:06Z)
- FLoD: Integrating Flexible Level of Detail into 3D Gaussian Splatting for Customizable Rendering [8.838958391604175]
3D Gaussian Splatting (3DGS) achieves fast and high-quality renderings by using numerous small Gaussians.
This reliance on a large number of Gaussians restricts the application of 3DGS-based models on low-cost devices due to memory limitations.
We propose integrating a Flexible Level of Detail (FLoD) into 3DGS, allowing a scene to be rendered at varying levels of detail according to hardware capabilities.
arXiv Detail & Related papers (2024-08-23T07:56:25Z)
- Mipmap-GS: Let Gaussians Deform with Scale-specific Mipmap for Anti-aliasing Rendering [81.88246351984908]
We propose a unified optimization method to make Gaussians adaptive for arbitrary scales.
Inspired by the mipmap technique, we design pseudo ground-truth for the target scale and propose a scale-consistency guidance loss to inject scale information into 3D Gaussians.
Our method outperforms 3DGS in PSNR by an average of 9.25 dB for zoom-in and 10.40 dB for zoom-out.
arXiv Detail & Related papers (2024-08-12T16:49:22Z)
- PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z)
- GSGAN: Adversarial Learning for Hierarchical Generation of 3D Gaussian Splats [20.833116566243408]
In this paper, we exploit Gaussians as a 3D representation for 3D GANs by leveraging their efficient and explicit characteristics.
We introduce a generator architecture with a hierarchical multi-scale Gaussian representation that effectively regularizes the position and scale of generated Gaussians.
Experimental results demonstrate that the proposed method achieves a significantly faster rendering speed (100x) compared to state-of-the-art 3D-consistent GANs.
arXiv Detail & Related papers (2024-06-05T05:52:20Z)
- F-3DGS: Factorized Coordinates and Representations for 3D Gaussian Splatting [13.653629893660218]
We propose Factorized 3D Gaussian Splatting (F-3DGS) as an alternative to neural radiance field (NeRF) rendering methods.
F-3DGS achieves a significant reduction in storage costs while maintaining comparable quality in rendered images.
arXiv Detail & Related papers (2024-05-27T11:55:49Z)
- DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus [56.45194233357833]
We propose DoGaussian, a method that trains 3DGS in a distributed manner.
Our method accelerates the training of 3DGS by 6+ times when evaluated on large-scale scenes.
arXiv Detail & Related papers (2024-05-22T19:17:58Z)
- EfficientGS: Streamlining Gaussian Splatting for Large-Scale High-Resolution Scene Representation [29.334665494061113]
'EfficientGS' is an advanced approach that optimizes 3DGS for high-resolution, large-scale scenes.
We analyze the densification process in 3DGS and identify areas of Gaussian over-proliferation.
We propose a selective strategy, limiting Gaussian increase to key redundant primitives, thereby enhancing the representational efficiency.
arXiv Detail & Related papers (2024-04-19T10:32:30Z)
- Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
arXiv Detail & Related papers (2024-02-24T17:22:15Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs a Generalized Exponential Function (GEF) to model 3D scenes; a common form of the GEF is sketched after this list.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
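As a reference for the GES entry above, the generalized exponential (generalized Gaussian) function is commonly parameterized as shown below; the exact parameterization in the GES paper may differ, so treat this as an illustrative form only. Setting the shape parameter to 2 recovers an ordinary Gaussian, which is the connection to standard 3DGS primitives.

```latex
f(x \mid \mu, \alpha, \beta, A) \;=\; A \exp\!\left( -\left( \frac{\lvert x - \mu \rvert}{\alpha} \right)^{\beta} \right),
\qquad \beta = 2 \;\Longrightarrow\; \text{Gaussian with } \sigma = \alpha / \sqrt{2}.
```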