GDGS: Gradient Domain Gaussian Splatting for Sparse Representation of Radiance Fields
- URL: http://arxiv.org/abs/2405.05446v1
- Date: Wed, 8 May 2024 22:40:52 GMT
- Title: GDGS: Gradient Domain Gaussian Splatting for Sparse Representation of Radiance Fields
- Authors: Yuanhao Gong
- Abstract summary: In this paper, we propose to model the gradient of the original signal.
The gradients are much sparser than the original signal.
Thanks to the sparsity, only a small number of pixels are needed during view synthesis, leading to much higher computational performance.
- Score: 3.156444853783626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian splatting methods are becoming popular. However, they work directly on the signal, leading to a dense representation. Even with techniques such as pruning or distillation, the results remain dense. In this paper, we propose to model the gradient of the original signal instead. The gradients are much sparser than the original signal, so they require far fewer Gaussian splats, leading to more efficient storage and thus higher computational performance during both training and rendering. Thanks to this sparsity, only a small number of pixels are needed during view synthesis, yielding much higher computational performance ($100\sim 1000\times$ faster). The 2D image can then be recovered from the gradients by solving a Poisson equation with linear computational complexity. Several experiments confirm the sparseness of the gradients and the computational performance of the proposed method. The method can be applied to various applications, such as human body modeling and indoor environment modeling.
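To make the recovery step concrete, below is a minimal sketch of generic gradient-domain reconstruction: the per-pixel gradients are assembled into a least-squares system whose normal equations are the Poisson equation mentioned in the abstract. This is not the authors' implementation; the helper `recover_image` and the use of `scipy.sparse.linalg.lsqr` are illustrative assumptions (the paper claims a linear-complexity solver, which a generic sparse least-squares solve does not guarantee).

```python
# Minimal sketch of gradient-domain image reconstruction (not the authors' code).
# Solving this least-squares system is equivalent to solving the Poisson equation
# laplace(I) = div(g) that the abstract describes for recovering the 2D image.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def recover_image(gx, gy, anchor=0.0):
    """Recover an H x W image from forward-difference gradients.
    gx: (H, W-1) with gx[i, j] = I[i, j+1] - I[i, j]
    gy: (H-1, W) with gy[i, j] = I[i+1, j] - I[i, j]
    """
    H, W = gy.shape[0] + 1, gx.shape[1] + 1
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # One equation per horizontal gradient sample: I[i, j+1] - I[i, j] = gx[i, j]
    for i in range(H):
        for j in range(W - 1):
            rows += [eq, eq]; cols += [idx[i, j + 1], idx[i, j]]; vals += [1.0, -1.0]
            rhs.append(gx[i, j]); eq += 1
    # One equation per vertical gradient sample: I[i+1, j] - I[i, j] = gy[i, j]
    for i in range(H - 1):
        for j in range(W):
            rows += [eq, eq]; cols += [idx[i + 1, j], idx[i, j]]; vals += [1.0, -1.0]
            rhs.append(gy[i, j]); eq += 1
    # Gradients determine the image only up to a constant; pin one pixel.
    rows.append(eq); cols.append(int(idx[0, 0])); vals.append(1.0); rhs.append(anchor); eq += 1
    A = coo_matrix((vals, (rows, cols)), shape=(eq, H * W)).tocsr()
    return lsqr(A, np.asarray(rhs))[0].reshape(H, W)

# Usage: gradients of a smooth ramp reconstruct the ramp (up to the pinned pixel).
I = np.outer(np.linspace(0.0, 1.0, 32), np.linspace(0.0, 1.0, 48))
gx, gy = I[:, 1:] - I[:, :-1], I[1:, :] - I[:-1, :]
I_rec = recover_image(gx, gy, anchor=I[0, 0])
print(np.abs(I_rec - I).max())  # reconstruction error should be small
```

The anchor pixel is needed because a gradient field only defines the image up to an additive constant; in practice any known boundary value plays this role.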
Related papers
- Gradient-Driven 3D Segmentation and Affordance Transfer in Gaussian Splatting Using 2D Masks [6.647959476396794]
3D Gaussian Splatting has emerged as a powerful 3D scene representation technique, capturing fine details with high efficiency.
In this paper, we introduce a novel voting-based method that extends 2D segmentation models to 3D Gaussian splats.
The robust yet straightforward mathematical formulation underlying this approach makes it a highly effective tool for numerous downstream applications.
arXiv Detail & Related papers (2024-09-18T03:45:44Z) - Isotropic Gaussian Splatting for Real-Time Radiance Field Rendering [15.498640737050412]
The proposed method can be applied to a wide range of applications, such as 3D reconstruction, view synthesis, and dynamic object modeling.
The experiments confirm that the proposed method is about 100X faster without losing the geometry representation accuracy.
arXiv Detail & Related papers (2024-03-21T09:02:31Z) - How to guess a gradient [68.98681202222664]
We show that gradients are more structured than previously thought.
Exploiting this structure can significantly improve gradient-free optimization schemes.
We highlight new challenges in overcoming the large gap between optimizing with exact gradients and guessing the gradients.
arXiv Detail & Related papers (2023-12-07T21:40:44Z) - HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting [113.37908093915837]
Existing methods optimize 3D representations like mesh or neural fields via score distillation sampling (SDS), which suffers from inadequate fine details or excessive training time.
In this paper, we propose an efficient yet effective framework, HumanGaussian, that generates high-quality 3D humans with fine-grained geometry and realistic appearance.
arXiv Detail & Related papers (2023-11-28T18:59:58Z) - Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability of local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z) - On Training Implicit Models [75.20173180996501]
We propose a novel gradient estimate for implicit models, named phantom gradient, that forgoes the costly computation of the exact gradient.
Experiments on large-scale tasks demonstrate that these lightweight phantom gradients significantly accelerate the backward passes in training implicit models by roughly 1.7 times.
arXiv Detail & Related papers (2021-11-09T14:40:24Z) - Adapting Stepsizes by Momentumized Gradients Improves Optimization and Generalization [89.66571637204012]
Experiments demonstrate the effectiveness of AdaMomentum on vision tasks, and it achieves state-of-the-art results consistently on other tasks, including language processing.
arXiv Detail & Related papers (2021-06-22T03:13:23Z) - Neural gradients are near-lognormal: improved quantized and sparse training [35.28451407313548]
We find that the distribution of neural gradients is approximately lognormal.
We suggest two closed-form analytical methods to reduce the computational and memory burdens of neural gradients.
To the best of our knowledge, this paper is the first to (1) quantize the gradients to 6-bit floating-point formats, or (2) achieve up to 85% gradient sparsity -- in each case without accuracy degradation. (A minimal sketch of such a lognormal check on gradient samples appears after this list.)
arXiv Detail & Related papers (2020-06-15T07:00:15Z) - Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: The random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
arXiv Detail & Related papers (2020-01-27T08:23:58Z)
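As referenced in the near-lognormal gradients entry above, the following is a minimal Python sketch (not that paper's code) of how one might check the lognormality claim on a flattened tensor of gradients and turn the fit into a closed-form magnitude threshold for sparsification. The function names and the keep fraction are illustrative assumptions; the paper's actual analytical methods may differ.

```python
# Minimal sketch: fit a normal distribution to log-magnitudes of gradient entries
# ("lognormal check") and derive the magnitude threshold that keeps a target
# fraction of entries if the lognormal fit holds.
import numpy as np
from scipy.stats import norm

def fit_log_magnitudes(grads, eps=1e-12):
    """Return (mu, sigma) of log|g| over the nonzero gradient entries."""
    mags = np.abs(np.asarray(grads, dtype=np.float64).ravel())
    logs = np.log(mags[mags > eps])          # drop exact zeros before taking logs
    return logs.mean(), logs.std()

def magnitude_threshold(mu, sigma, keep_frac):
    """Threshold keeping roughly `keep_frac` of entries if |g| ~ lognormal(mu, sigma)."""
    return float(np.exp(mu + sigma * norm.ppf(1.0 - keep_frac)))

# Example with synthetic lognormal "gradients": the fit recovers mu ~ -6, sigma ~ 2,
# and keeping ~15% of entries corresponds to pruning ~85% of them.
rng = np.random.default_rng(0)
g = rng.lognormal(mean=-6.0, sigma=2.0, size=200_000) * rng.choice([-1.0, 1.0], size=200_000)
mu, sigma = fit_log_magnitudes(g)
thr = magnitude_threshold(mu, sigma, keep_frac=0.15)
print(mu, sigma, np.mean(np.abs(g) >= thr))   # kept fraction should be close to 0.15
```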