SuperGaussians: Enhancing Gaussian Splatting Using Primitives with Spatially Varying Colors
- URL: http://arxiv.org/abs/2411.18966v1
- Date: Thu, 28 Nov 2024 07:36:22 GMT
- Title: SuperGaussians: Enhancing Gaussian Splatting Using Primitives with Spatially Varying Colors
- Authors: Rui Xu, Wenyue Chen, Jiepeng Wang, Yuan Liu, Peng Wang, Lin Gao, Shiqing Xin, Taku Komura, Xin Li, Wenping Wang
- Abstract summary: We introduce a new method called SuperGaussians that utilizes spatially varying colors and opacity in a single Gaussian primitive to improve its representation ability.
We have implemented bilinear interpolation, movable kernels, and even tiny neural networks as spatially varying functions.
- Score: 51.54964131894217
- License:
- Abstract: Gaussian Splatting demonstrates impressive results in multi-view reconstruction based on Gaussian explicit representations. However, the current Gaussian primitives only have a single view-dependent color and an opacity to represent the appearance and geometry of the scene, resulting in a non-compact representation. In this paper, we introduce a new method called SuperGaussians that utilizes spatially varying colors and opacity in a single Gaussian primitive to improve its representation ability. We have implemented bilinear interpolation, movable kernels, and even tiny neural networks as spatially varying functions. Quantitative and qualitative experimental results demonstrate that all three functions outperform the baseline, with the best movable kernels achieving superior novel view synthesis performance on multiple datasets, highlighting the strong potential of spatially varying functions.
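The abstract names three spatially varying appearance functions but gives no implementation detail. Below is a minimal, hypothetical sketch of what the bilinear-interpolation variant could look like: each primitive stores a small learnable 2x2 grid of RGBA values, and a shaded sample's color and opacity are interpolated from its local coordinates inside the splat. The function and parameter names (bilinear_splat_color, corner_rgba) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def bilinear_splat_color(uv, corner_rgba):
    """Bilinearly interpolate a per-primitive 2x2 grid of RGBA values.

    uv          : (N, 2) local coordinates of shaded samples inside the splat,
                  assumed to be normalized to [0, 1]^2.
    corner_rgba : (2, 2, 4) learnable corner colors + opacity for one primitive.
    Returns     : (N, 4) spatially varying RGBA, one value per sample.
    """
    u = np.clip(uv[:, 0], 0.0, 1.0)[:, None]
    v = np.clip(uv[:, 1], 0.0, 1.0)[:, None]
    c00, c01 = corner_rgba[0, 0], corner_rgba[0, 1]
    c10, c11 = corner_rgba[1, 0], corner_rgba[1, 1]
    # Interpolate along u first, then along v (standard bilinear interpolation).
    top = c00 * (1.0 - u) + c10 * u
    bottom = c01 * (1.0 - u) + c11 * u
    return top * (1.0 - v) + bottom * v

# Toy usage: five samples inside one splat, random stand-in corner values.
uv = np.random.rand(5, 2)
corners = np.random.rand(2, 2, 4)
rgba = bilinear_splat_color(uv, corners)  # shape (5, 4): per-sample color + opacity
```

In a full renderer, this per-sample RGBA would replace the single per-primitive color and opacity during alpha compositing; the movable-kernel and tiny-network variants mentioned in the abstract would plug different spatially varying functions into the same step.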
Related papers
- SmileSplat: Generalizable Gaussian Splats for Unconstrained Sparse Images [91.28365943547703]
A novel generalizable Gaussian Splatting method, SmileSplat, is proposed to reconstruct pixel-aligned Gaussian surfels for diverse scenarios.
The proposed method achieves state-of-the-art performance in various 3D vision tasks.
arXiv Detail & Related papers (2024-11-27T05:52:28Z)
- Geometric Algebra Planes: Convex Implicit Neural Volumes [70.12234371845445]
We show that GA-Planes is equivalent to a sparse low-rank factor plus low-resolution matrix.
We also show that GA-Planes can be adapted for many existing representations.
arXiv Detail & Related papers (2024-11-20T18:21:58Z)
- DiffGS: Functional Gaussian Splatting Diffusion [33.07847512591061]
3D Gaussian Splatting (3DGS) has shown convincing performance in rendering speed and fidelity.
However, the generation of Gaussian Splatting remains a challenge due to its discreteness and unstructured nature.
We propose DiffGS, a general Gaussian generator based on latent diffusion models.
arXiv Detail & Related papers (2024-10-25T16:08:08Z)
- PixelGaussian: Generalizable 3D Gaussian Reconstruction from Arbitrary Views [116.10577967146762]
PixelGaussian is an efficient framework for learning generalizable 3D Gaussian reconstruction from arbitrary views.
Our method achieves state-of-the-art performance with good generalization to various numbers of views.
arXiv Detail & Related papers (2024-10-24T17:59:58Z)
- Implicit Gaussian Splatting with Efficient Multi-Level Tri-Plane Representation [45.582869951581785]
Implicit Gaussian Splatting (IGS) is an innovative hybrid model that integrates explicit point clouds with implicit feature embeddings.
We introduce a level-based progressive training scheme, which incorporates explicit spatial regularization.
Our algorithm can deliver high-quality rendering using only a few MBs, effectively balancing storage efficiency and rendering fidelity.
arXiv Detail & Related papers (2024-08-19T14:34:17Z)
- Textured-GS: Gaussian Splatting with Spatially Defined Color and Opacity [7.861993966048637]
We introduce Textured-GS, an innovative method for rendering Gaussian splatting with spatially defined color and opacity incorporated using Spherical Harmonics (SH).
This approach enables each Gaussian to exhibit a richer representation by accommodating varying colors and opacities across its surface.
Our experiments show that Textured-GS consistently outperforms both the baseline Mini-Splatting and standard 3DGS in terms of visual fidelity.
arXiv Detail & Related papers (2024-07-13T00:45:37Z)
- RT-GS2: Real-Time Generalizable Semantic Segmentation for 3D Gaussian Representations of Radiance Fields [6.071025178912125]
We introduce RT-GS2, the first generalizable semantic segmentation method employing Gaussian Splatting.
Our method achieves real-time performance of 27.03 FPS, marking an astonishing 901 times speedup compared to existing approaches.
arXiv Detail & Related papers (2024-05-28T10:34:28Z)
- Mini-Splatting: Representing Scenes with a Constrained Number of Gaussians [4.733612131945549]
In this study, we explore the challenge of efficiently representing scenes with a constrained number of Gaussians.
We introduce strategies for densification including blur split and depth reinitialization, and simplification through intersection preserving and sampling.
Our Mini-Splatting integrates seamlessly with the original rasterization pipeline, providing a strong baseline for future research in Gaussian-Splatting-based works.
arXiv Detail & Related papers (2024-03-21T06:34:46Z)
- Mesh-based Gaussian Splatting for Real-time Large-scale Deformation [58.18290393082119]
It is challenging for users to directly deform or manipulate implicit representations with large deformations in a real-time fashion.
We develop a novel GS-based method that enables interactive deformation.
Our approach achieves high-quality reconstruction and effective deformation, while maintaining the promising rendering results at a high frame rate.
arXiv Detail & Related papers (2024-02-07T12:36:54Z)
- GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in a real-time manner.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z)