PixelGaussian: Generalizable 3D Gaussian Reconstruction from Arbitrary Views
- URL: http://arxiv.org/abs/2410.18979v1
- Date: Thu, 24 Oct 2024 17:59:58 GMT
- Title: PixelGaussian: Generalizable 3D Gaussian Reconstruction from Arbitrary Views
- Authors: Xin Fei, Wenzhao Zheng, Yueqi Duan, Wei Zhan, Masayoshi Tomizuka, Kurt Keutzer, Jiwen Lu
- Abstract summary: PixelGaussian is an efficient framework for learning generalizable 3D Gaussian reconstruction from arbitrary views.
Our method achieves state-of-the-art performance with good generalization to various numbers of views.
- Score: 116.10577967146762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose PixelGaussian, an efficient feed-forward framework for learning generalizable 3D Gaussian reconstruction from arbitrary views. Most existing methods rely on uniform pixel-wise Gaussian representations, which learn a fixed number of 3D Gaussians for each view and cannot generalize well to more input views. In contrast, our PixelGaussian dynamically adapts both the Gaussian distribution and quantity based on geometric complexity, leading to more efficient representations and significant improvements in reconstruction quality. Specifically, we introduce a Cascade Gaussian Adapter (CGA) to adjust the Gaussian distribution according to local geometry complexity identified by a keypoint scorer. CGA leverages deformable attention in context-aware hypernetworks to guide Gaussian pruning and splitting, ensuring accurate representation in complex regions while reducing redundancy. Furthermore, we design a transformer-based Iterative Gaussian Refiner module that refines Gaussian representations through direct image-Gaussian interactions. Our PixelGaussian can effectively reduce Gaussian redundancy as input views increase. We conduct extensive experiments on the large-scale ACID and RealEstate10K datasets, where our method achieves state-of-the-art performance with good generalization to various numbers of views. Code: https://github.com/Barrybarry-Smith/PixelGaussian.
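- Illustrative sketch (not from the paper): the abstract above describes adapting the Gaussian set by pruning in simple regions and splitting in complex ones. The minimal NumPy sketch below illustrates that general prune-and-split idea only; the function name, thresholds, scoring input, and splitting rule are all hypothetical and are not the authors' Cascade Gaussian Adapter.

    import numpy as np

    def adapt_gaussians(means, scales, scores, prune_thresh=0.1, split_thresh=0.8):
        """Prune Gaussians in simple regions and split them in complex ones.

        means:  (N, 3) Gaussian centers
        scales: (N, 3) per-axis scales
        scores: (N,)   geometric-complexity scores in [0, 1], e.g. from a keypoint scorer
        """
        keep = scores >= prune_thresh                 # drop Gaussians in low-complexity regions
        means, scales, scores = means[keep], scales[keep], scores[keep]

        split = scores >= split_thresh                # duplicate Gaussians in high-complexity regions
        offsets = 0.5 * scales[split]                 # place the two children half a scale apart
        child_a = means[split] + offsets
        child_b = means[split] - offsets
        child_scales = 0.5 * scales[split]            # shrink children so coverage stays similar

        new_means = np.concatenate([means[~split], child_a, child_b], axis=0)
        new_scales = np.concatenate([scales[~split], child_scales, child_scales], axis=0)
        return new_means, new_scales

    # example: 1,000 random Gaussians with random complexity scores
    m, s = adapt_gaussians(np.random.randn(1000, 3),
                           np.abs(np.random.randn(1000, 3)),
                           np.random.rand(1000))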
Related papers
- ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes [81.48624894781257]
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required.
Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency.
We propose ProtoGS to learn Gaussian prototypes to represent Gaussian primitives, significantly reducing the total number of Gaussians without sacrificing visual quality.
arXiv Detail & Related papers (2025-03-21T18:55:14Z)
- Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images [12.274418254425019]
3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis performance.
We propose Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian representations.
We conduct experiments on the large-scale RealEstate10K and ACID datasets to demonstrate the efficiency and generalization of our method.
arXiv Detail & Related papers (2025-03-20T16:56:13Z)
- GaussianFormer-2: Probabilistic Gaussian Superposition for Efficient 3D Occupancy Prediction [55.60972844777044]
3D semantic occupancy prediction is an important task for robust vision-centric autonomous driving.
Most existing methods leverage dense grid-based scene representations, overlooking the spatial sparsity of the driving scenes.
We propose a probabilistic Gaussian superposition model which interprets each Gaussian as a probability distribution of its neighborhood being occupied.
arXiv Detail & Related papers (2024-12-05T17:59:58Z)
- GaussianSpa: An "Optimizing-Sparsifying" Simplification Framework for Compact and High-Quality 3D Gaussian Splatting [12.342660713851227]
3D Gaussian Splatting (3DGS) has emerged as a mainstream approach for novel view synthesis, leveraging continuous aggregations of Gaussian functions.
3DGS suffers from substantial memory requirements to store the multitude of Gaussians, hindering its practicality.
We introduce GaussianSpa, an optimization-based simplification framework for compact and high-quality 3DGS.
arXiv Detail & Related papers (2024-11-09T00:38:06Z)
- UniGS: Modeling Unitary 3D Gaussians for Novel View Synthesis from Sparse-view Images [20.089890859122168]
We introduce UniGS, a novel 3D Gaussian reconstruction and novel view synthesis model.
UniGS predicts a high-fidelity representation of 3D Gaussians from an arbitrary number of posed sparse-view images.
arXiv Detail & Related papers (2024-10-17T03:48:02Z)
- HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction [46.269350101349715]
HiSplat is a novel framework for generalizable 3D Gaussian Splatting.
It generates hierarchical 3D Gaussians via a coarse-to-fine strategy.
It significantly enhances reconstruction quality and cross-dataset generalization.
arXiv Detail & Related papers (2024-10-08T17:59:32Z)
- GaussianForest: Hierarchical-Hybrid 3D Gaussian Splatting for Compressed Scene Modeling [40.743135560583816]
We introduce the Gaussian-Forest modeling framework, which hierarchically represents a scene as a forest of hybrid 3D Gaussians.
Experiments demonstrate that Gaussian-Forest not only maintains comparable speed and quality but also achieves a compression rate exceeding 10x.
arXiv Detail & Related papers (2024-06-13T02:41:11Z)
- R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z)
- RTG-SLAM: Real-time 3D Reconstruction at Scale using Gaussian Splatting [51.51310922527121]
We present a real-time 3D reconstruction system with an RGBD camera for large-scale environments using Gaussian splatting.
We force each Gaussian to be either opaque or nearly transparent, with the opaque ones fitting the surface and dominant colors, and transparent ones fitting residual colors.
We show real-time reconstructions of a variety of large scenes and demonstrate superior performance in novel view synthesis realism and camera tracking accuracy.
arXiv Detail & Related papers (2024-04-30T16:54:59Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs the Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
- Mesh-based Gaussian Splatting for Real-time Large-scale Deformation [58.18290393082119]
It is challenging for users to directly deform or manipulate implicit representations with large deformations in real time.
We develop a novel GS-based method that enables interactive deformation.
Our approach achieves high-quality reconstruction and effective deformation, while maintaining promising rendering results at a high frame rate.
arXiv Detail & Related papers (2024-02-07T12:36:54Z)
- LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS [55.85673901231235]
We introduce LightGaussian, a method for transforming 3D Gaussians into a more compact format.
Inspired by Network Pruning, LightGaussian identifies and prunes Gaussians with minimal global significance to scene reconstruction (a rough sketch of this pruning idea follows this entry).
LightGaussian achieves an average 15x compression rate while boosting FPS from 144 to 237 within the 3D-GS framework.
arXiv Detail & Related papers (2023-11-28T21:39:20Z)
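- Illustrative sketch (not from the paper): the LightGaussian entry above prunes Gaussians by a global significance measure. The short NumPy sketch below shows the general idea under assumed definitions; the opacity-times-volume significance proxy, keep ratio, and function name are hypothetical and are not LightGaussian's actual criterion.

    import numpy as np

    def prune_by_significance(opacities, scales, keep_ratio=0.34):
        """Keep only the most 'significant' Gaussians.
        Significance here is a crude opacity * volume proxy (hypothetical),
        not LightGaussian's actual global significance score."""
        volumes = np.prod(scales, axis=1)             # ellipsoid volume up to a constant factor
        significance = opacities * volumes
        k = max(1, int(keep_ratio * len(significance)))
        keep_idx = np.argsort(significance)[-k:]      # indices of the top-k Gaussians
        return np.sort(keep_idx)

    # example: keep roughly a third of 10,000 random Gaussians
    idx = prune_by_significance(np.random.rand(10000), np.abs(np.random.randn(10000, 3)))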