Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images
- URL: http://arxiv.org/abs/2503.16338v1
- Date: Thu, 20 Mar 2025 16:56:13 GMT
- Title: Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images
- Authors: Shengjun Zhang, Xin Fei, Fangfu Liu, Haixu Song, Yueqi Duan
- Abstract summary: 3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis performance. We propose Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian representations. We conduct experiments on the large-scale RealEstate10K and ACID datasets to demonstrate the efficiency and generalization of our method.
- Score: 12.274418254425019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis performance. While conventional methods require per-scene optimization, more recently several feed-forward methods have been proposed to generate pixel-aligned Gaussian representations with a learnable network, which are generalizable to different scenes. However, these methods simply combine pixel-aligned Gaussians from multiple views as scene representations, thereby leading to artifacts and extra memory cost without fully capturing the relations of Gaussians from different images. In this paper, we propose Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian representations. Specifically, we construct Gaussian Graphs to model the relations of Gaussian groups from different views. To support message passing at Gaussian level, we reformulate the basic graph operations over Gaussian representations, enabling each Gaussian to benefit from its connected Gaussian groups with Gaussian feature fusion. Furthermore, we design a Gaussian pooling layer to aggregate various Gaussian groups for efficient representations. We conduct experiments on the large-scale RealEstate10K and ACID datasets to demonstrate the efficiency and generalization of our method. Compared to the state-of-the-art methods, our model uses fewer Gaussians and achieves better image quality with higher rendering speed.
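The abstract describes the GGN design only at a high level, so the following is a minimal, hypothetical PyTorch sketch of Gaussian-level message passing and pooling. The k-nearest-neighbour graph over Gaussian centres, the mean-aggregation message, the fusion MLP, and the opacity-weighted pooling rule are illustrative assumptions, not the paper's actual layer definitions.

```python
# Hypothetical sketch of Gaussian-level message passing and pooling.
# Graph construction, fusion MLP, and pooling rule are assumptions.
import torch
import torch.nn as nn


class GaussianGraphLayer(nn.Module):
    """Fuses each Gaussian's feature with features of its graph neighbours."""

    def __init__(self, feat_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.fuse = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, means: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # means: (N, 3) Gaussian centres, feats: (N, C) per-Gaussian features.
        dists = torch.cdist(means, means)                            # (N, N)
        knn = dists.topk(self.k + 1, largest=False).indices[:, 1:]   # drop self
        neighbour_feats = feats[knn].mean(dim=1)                     # (N, C) aggregated message
        return feats + self.fuse(torch.cat([feats, neighbour_feats], dim=-1))


def gaussian_pool(means, feats, opacities, stride: int = 2):
    """Toy pooling: merge every `stride` Gaussians by opacity-weighted averaging."""
    n = (means.shape[0] // stride) * stride
    w = opacities[:n].view(-1, stride, 1)
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)
    pooled_means = (means[:n].view(-1, stride, 3) * w).sum(dim=1)
    pooled_feats = (feats[:n].view(-1, stride, feats.shape[-1]) * w).sum(dim=1)
    pooled_opac = opacities[:n].view(-1, stride).mean(dim=1)
    return pooled_means, pooled_feats, pooled_opac


if __name__ == "__main__":
    N, C = 1024, 64
    means, feats, opacities = torch.randn(N, 3), torch.randn(N, C), torch.rand(N)
    feats = GaussianGraphLayer(C)(means, feats)
    pm, pf, po = gaussian_pool(means, feats, opacities)
    print(pm.shape, pf.shape, po.shape)  # 512 pooled Gaussians from 1024
```

The intent of the sketch is only to show how per-Gaussian features from different view-specific groups could be exchanged and then reduced to a smaller set, which is the efficiency argument the abstract makes.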
Related papers
- ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes [81.48624894781257]
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required.
Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency.
We propose ProtoGS to learn Gaussian prototypes to represent Gaussian primitives, significantly reducing the total Gaussian amount without sacrificing visual quality.
arXiv Detail & Related papers (2025-03-21T18:55:14Z)
- Textured Gaussians for Enhanced 3D Scene Appearance Modeling [58.134905268540436]
3D Gaussian Splatting (3DGS) has emerged as a state-of-the-art 3D reconstruction and rendering technique.
We propose a new generalized Gaussian appearance representation that augments each Gaussian with alpha (A), RGB, or RGBA texture maps.
We demonstrate image quality improvements over existing methods while using a similar or lower number of Gaussians.
arXiv Detail & Related papers (2024-11-27T18:59:59Z)
- SmileSplat: Generalizable Gaussian Splats for Unconstrained Sparse Images [91.28365943547703]
A novel generalizable Gaussian Splatting method, SmileSplat, is proposed to reconstruct pixel-aligned Gaussian surfels for diverse scenarios.
The proposed method achieves state-of-the-art performance in various 3D vision tasks.
arXiv Detail & Related papers (2024-11-27T05:52:28Z)
- PixelGaussian: Generalizable 3D Gaussian Reconstruction from Arbitrary Views [116.10577967146762]
PixelGaussian is an efficient framework for learning generalizable 3D Gaussian reconstruction from arbitrary views.
Our method achieves state-of-the-art performance with good generalization to various numbers of views.
arXiv Detail & Related papers (2024-10-24T17:59:58Z)
- UniGS: Modeling Unitary 3D Gaussians for Novel View Synthesis from Sparse-view Images [20.089890859122168]
We introduce UniGS, a novel 3D Gaussian reconstruction and novel view synthesis model.
UniGS predicts a high-fidelity representation of 3D Gaussians from an arbitrary number of posed sparse-view images.
arXiv Detail & Related papers (2024-10-17T03:48:02Z)
- HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction [46.269350101349715]
HiSplat is a novel framework for generalizable 3D Gaussian Splatting.
It generates hierarchical 3D Gaussians via a coarse-to-fine strategy.
It significantly enhances reconstruction quality and cross-dataset generalization.
arXiv Detail & Related papers (2024-10-08T17:59:32Z)
- Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocular Videos [58.22272760132996]
We show that existing 4D Gaussian methods dramatically fail in this setup because the monocular setting is underconstrained.
We propose Dynamic Gaussian Marbles, which consist of three core modifications that target the difficulties of the monocular setting.
We evaluate on the Nvidia Dynamic Scenes dataset and the DyCheck iPhone dataset, and show that Gaussian Marbles significantly outperforms other Gaussian baselines in quality.
arXiv Detail & Related papers (2024-06-26T19:37:07Z)
- GaussianForest: Hierarchical-Hybrid 3D Gaussian Splatting for Compressed Scene Modeling [40.743135560583816]
We introduce the Gaussian-Forest modeling framework, which hierarchically represents a scene as a forest of hybrid 3D Gaussians.
Experiments demonstrate that Gaussian-Forest not only maintains comparable speed and quality but also achieves a compression rate of more than 10x.
arXiv Detail & Related papers (2024-06-13T02:41:11Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs the Generalized Exponential Function (GEF) to model 3D scenes; a minimal illustration of the GEF kernel is sketched after this list.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
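As referenced in the GES entry above, the GEF kernel replaces the fixed squared exponent of a Gaussian with a free shape parameter, so that beta = 2 recovers the Gaussian falloff. The snippet below is a small, self-contained illustration with assumed parameter names (mu, alpha, beta); the amplitude term and the frequency-modulated loss used by GES are omitted.

```python
# Illustrative Generalized Exponential Function (GEF) kernel.
# beta = 2 gives a Gaussian-shaped falloff; smaller beta gives sharper peaks.
import numpy as np


def gef_kernel(x, mu, alpha, beta):
    """Generalized exponential falloff: exp(-(|x - mu| / alpha) ** beta)."""
    return np.exp(-(np.abs(x - mu) / alpha) ** beta)


x = np.linspace(-3.0, 3.0, 7)
print(gef_kernel(x, mu=0.0, alpha=1.0, beta=2.0))  # Gaussian-like falloff
print(gef_kernel(x, mu=0.0, alpha=1.0, beta=1.0))  # sharper, Laplacian-like falloff
```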
This list is automatically generated from the titles and abstracts of the papers on this site.