PocketGS: On-Device Training of 3D Gaussian Splatting for High Perceptual Modeling
- URL: http://arxiv.org/abs/2601.17354v2
- Date: Wed, 28 Jan 2026 05:29:37 GMT
- Title: PocketGS: On-Device Training of 3D Gaussian Splatting for High Perceptual Modeling
- Authors: Wenzhi Guo, Guangchi Fang, Shu Yang, Bing Wang
- Abstract summary: We present PocketGS, a mobile scene modeling paradigm that enables on-device 3DGS training under tightly coupled constraints. Our method resolves the fundamental contradictions of standard 3DGS through three co-designed operators.
- Score: 11.717108464366616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient, high-fidelity 3D scene modeling is a long-standing pursuit in computer graphics. While recent 3D Gaussian Splatting (3DGS) methods achieve impressive real-time modeling performance, they rely on resource-unconstrained training assumptions that fail on mobile devices, where training budgets are measured in minutes and peak memory is capped by the hardware. We present PocketGS, a mobile scene modeling paradigm that enables on-device 3DGS training under these tightly coupled constraints while preserving high perceptual fidelity. Our method resolves the fundamental contradictions of standard 3DGS through three co-designed operators: G builds geometry-faithful point-cloud priors; I injects local surface statistics to seed anisotropic Gaussians, reducing early conditioning gaps; and T unrolls alpha compositing with cached intermediates and index-mapped gradient scattering for stable mobile backpropagation. Together, these operators satisfy the competing requirements of training efficiency, memory compactness, and modeling fidelity. Extensive experiments demonstrate that PocketGS outperforms the mainstream workstation 3DGS baseline, delivering high-quality reconstructions and enabling a fully on-device, practical capture-to-rendering workflow.
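The abstract names its three operators only in outline. As one concrete reading of operator I ("injects local surface statistics to seed anisotropic Gaussians"), here is a minimal Python sketch that seeds each Gaussian from the PCA of its k nearest neighbors; the function name, the PCA-based covariance, and every parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def seed_anisotropic_gaussians(points, k=16, eps=1e-4):
    """Seed one anisotropic Gaussian per point from local k-NN statistics.

    Hypothetical reading of PocketGS operator I: each Gaussian's shape is
    taken from the PCA of its k nearest neighbors, so splats start out
    flattened along the local surface rather than isotropic.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)              # (N, k) neighbor indices
    means, scales, rotations = [], [], []
    for nbrs in idx:
        local = points[nbrs]
        cov = np.cov(local.T) + eps * np.eye(3)   # regularized local covariance
        eigval, eigvec = np.linalg.eigh(cov)      # principal extents and axes
        means.append(local.mean(axis=0))
        scales.append(np.sqrt(np.clip(eigval, eps, None)))
        rotations.append(eigvec)                  # columns = principal axes
    return np.stack(means), np.stack(scales), np.stack(rotations)
```

Surface-aligned seeding of this kind is one plausible way to "reduce early conditioning gaps": optimization starts from a surface-consistent configuration instead of spending its minute-scale budget discovering anisotropy.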
Related papers
- Sparse View Distractor-Free Gaussian Splatting [31.812029183156245]
3D Gaussian Splatting (3DGS) enables efficient training and fast novel view synthesis in static environments. We propose a framework to enhance distractor-free 3DGS under sparse-view conditions by incorporating rich prior information.
arXiv Detail & Related papers (2026-03-02T08:32:32Z)
- MuSASplat: Efficient Sparse-View 3D Gaussian Splats via Lightweight Multi-Scale Adaptation [92.57609195819647]
MuSASplat is a novel framework that dramatically reduces the computational burden of training pose-free feed-forward 3D Gaussian splat models. Central to our approach is a lightweight Multi-Scale Adapter that enables efficient fine-tuning of ViT-based architectures with only a small fraction of training parameters.
arXiv Detail & Related papers (2025-12-08T04:56:46Z)
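The MuSASplat summary credits a lightweight adapter for parameter-efficient fine-tuning but does not describe its design. A minimal sketch of one common adapter pattern under that reading (a frozen ViT block plus a trainable zero-initialized bottleneck residual; the class name, bottleneck width, and initialization are all assumptions):

```python
import torch
import torch.nn as nn

class AdaptedViTBlock(nn.Module):
    """Wrap a frozen ViT block with a trainable bottleneck adapter.

    Hypothetical sketch in the spirit of MuSASplat's Multi-Scale Adapter
    (the exact design is not given in the summary): only `down`/`up` are
    trained; the pretrained block stays frozen.
    """

    def __init__(self, vit_block: nn.Module, dim: int, bottleneck: int = 64):
        super().__init__()
        self.block = vit_block
        for p in self.block.parameters():   # freeze the pretrained block
            p.requires_grad = False
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)      # start as an identity residual
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.block(x)
        return h + self.up(torch.relu(self.down(h)))  # residual adapter
```

Under this sketch, `AdaptedViTBlock(pretrained_block, dim=768)` trains roughly 2 × dim × bottleneck parameters per block rather than the full block, which matches the "small fraction of training parameters" claim in spirit.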
- Intern-GS: Vision Model Guided Sparse-View 3D Gaussian Splatting [95.61137026932062]
Intern-GS is a novel approach that enhances sparse-view Gaussian splatting. We show that Intern-GS achieves state-of-the-art rendering quality across diverse datasets.
arXiv Detail & Related papers (2025-05-27T05:17:49Z)
- Taming 3DGS: High-Quality Radiance Fields with Limited Resources [50.92437599516609]
3D Gaussian Splatting (3DGS) has transformed novel-view synthesis with its fast, interpretable, and high-fidelity rendering.
We tackle the challenges of training and rendering 3DGS models on a budget.
We derive faster, numerically equivalent solutions for gradient computation and attribute updates.
arXiv Detail & Related papers (2024-06-21T20:44:23Z)
- RetinaGS: Scalable Training for Dense Scene Rendering with Billion-Scale 3D Gaussians [12.461531097629857]
We design a general model-parallel training method for 3DGS, named RetinaGS, which uses a proper rendering equation.
With our method, we observe a clear positive trend: visual quality increases as the number of primitives grows.
We also demonstrate the first attempt at training a 3DGS model with more than one billion primitives on the full MatrixCity dataset.
arXiv Detail & Related papers (2024-06-17T17:59:56Z)
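The RetinaGS summary mentions model-parallel training with "a proper rendering equation" but gives no details. A hedged sketch of how such a scheme is commonly realized: each shard rasterizes only its own Gaussians into per-pixel fragments, and the final color is the standard front-to-back alpha composite over the globally depth-sorted union; the fragment layout and function name are assumptions.

```python
import numpy as np

def composite_shards(fragments_per_shard):
    """Merge per-pixel fragments rendered independently by each shard.

    Hypothetical sketch of the model-parallel idea: each GPU emits
    (depth, alpha, r, g, b) fragments for its own subset of Gaussians;
    compositing the depth-sorted union reproduces the single-GPU result.

    fragments_per_shard: list of (M_i, 5) arrays for one pixel.
    """
    frags = np.concatenate(fragments_per_shard, axis=0)
    frags = frags[np.argsort(frags[:, 0])]       # global front-to-back order
    color = np.zeros(3)
    transmittance = 1.0
    for depth, alpha, r, g, b in frags:
        color += transmittance * alpha * np.array([r, g, b])
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:                 # early termination
            break
    return color
```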
- PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z)
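The PUP 3D-GS summary describes a multi-round prune-refine pipeline without specifying the sensitivity score. A minimal sketch of such a loop, with `sensitivity_fn` and `finetune_fn` as hypothetical stand-ins for the paper's uncertainty-based score and refinement pass:

```python
import numpy as np

def prune_refine(gaussians, sensitivity_fn, finetune_fn,
                 rounds=3, keep_ratio=0.5):
    """Multi-round prune-refine loop in the spirit of PUP 3D-GS.

    `sensitivity_fn` and `finetune_fn` are hypothetical stand-ins for
    the paper's per-Gaussian score and refinement pass; the summary
    specifies neither, so this is only a sketch.
    """
    for _ in range(rounds):
        scores = np.asarray(sensitivity_fn(gaussians))  # (N,) importance
        n_keep = max(1, int(len(scores) * keep_ratio))
        keep = np.argsort(scores)[-n_keep:]             # most sensitive survive
        gaussians = [gaussians[i] for i in keep]
        gaussians = finetune_fn(gaussians)              # short refinement pass
    return gaussians
```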
- DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus [56.45194233357833]
We propose DoGaussian, a method that trains 3DGS in a distributed manner.
Our method accelerates the training of 3DGS by 6+ times when evaluated on large-scale scenes.
arXiv Detail & Related papers (2024-05-22T19:17:58Z)
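The summary does not define DoGaussian's "Gaussian consensus". One plausible reading is a consensus-averaging step over per-block copies of shared Gaussians, sketched below; the uniform weights and the 0.5 blend factor are assumptions.

```python
import numpy as np

def consensus_step(local_params, weights=None):
    """One consensus-averaging step over per-block copies of shared Gaussians.

    Hypothetical reading of 'Gaussian consensus' (the summary gives no
    update rule): blocks average their copies into a global estimate,
    then blend their local state toward it.

    local_params: list of (N, D) arrays, one per training block.
    """
    stacked = np.stack(local_params)              # (B, N, D)
    if weights is None:
        weights = np.full(len(local_params), 1.0 / len(local_params))
    z = np.tensordot(weights, stacked, axes=1)    # (N, D) consensus estimate
    return [0.5 * (p + z) for p in local_params]  # pull blocks toward consensus
```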
- Bootstrap-GS: Self-Supervised Augmentation for High-Fidelity Gaussian Splatting [9.817215106596146]
3D-GS faces limitations when generating novel views that deviate significantly from those encountered during training. We introduce a bootstrapping framework to address this problem. Our approach synthesizes pseudo-ground truth from novel views that align with the limited training set.
arXiv Detail & Related papers (2024-04-29T12:57:05Z)
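The Bootstrap-GS summary states that pseudo-ground truth is synthesized from novel views aligned with the training set. A minimal sketch of such an augmentation loop, with `sample_novel_pose`, `enhance_fn`, and the `model.render` interface as hypothetical stand-ins for the unspecified components:

```python
def bootstrap_augment(model, train_views, sample_novel_pose,
                      enhance_fn, n_views=10):
    """Self-supervised augmentation loop in the spirit of Bootstrap-GS.

    `enhance_fn` stands in for whatever module turns an imperfect render
    into pseudo-ground truth (the summary does not name it); poses are
    sampled near the training set so the pseudo-labels stay reliable.
    """
    augmented = list(train_views)
    for _ in range(n_views):
        pose = sample_novel_pose(train_views)   # stay close to known views
        rendering = model.render(pose)          # current (imperfect) render
        pseudo_gt = enhance_fn(rendering)       # refined pseudo-label
        augmented.append((pose, pseudo_gt))
    return augmented
```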
- GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time [112.32349668385635]
GGRt is a novel approach to generalizable novel view synthesis that alleviates the need for real camera poses.
As the first pose-free generalizable 3D-GS framework, GGRt achieves inference at ≥ 5 FPS and real-time rendering at ≥ 100 FPS.
arXiv Detail & Related papers (2024-03-15T09:47:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.