GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization
- URL: http://arxiv.org/abs/2312.05133v1
- Date: Fri, 8 Dec 2023 16:05:15 GMT
- Title: GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization
- Authors: Yahao Shi, Yanmin Wu, Chenming Wu, Xing Liu, Chen Zhao, Haocheng Feng,
Jingtuo Liu, Liangjun Zhang, Jian Zhang, Bin Zhou, Errui Ding, Jingdong Wang
- Abstract summary: GIR is a 3D Gaussian Inverse Rendering method for relightable scene factorization.
Our method utilizes 3D Gaussians to estimate the material properties, illumination, and geometry of an object from multi-view images.
- Score: 76.52007427483396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents GIR, a 3D Gaussian Inverse Rendering method for
relightable scene factorization. Compared to existing methods leveraging
discrete meshes or neural implicit fields for inverse rendering, our method
utilizes 3D Gaussians to estimate the material properties, illumination, and
geometry of an object from multi-view images. Our study is motivated by the
evidence showing that 3D Gaussian is a more promising backbone than neural
fields in terms of performance, versatility, and efficiency. In this paper, we
aim to answer the question: "How can 3D Gaussians be applied to improve the
performance of inverse rendering?" To address the complexity of estimating
normals from discrete and often inhomogeneously distributed 3D Gaussian
representations, we propose an efficient self-regularization method that
facilitates the modeling of surface normals without the need for additional
supervision. To reconstruct indirect illumination, we propose an approach that
simulates ray tracing. Extensive experiments demonstrate our proposed GIR's
superior performance over existing methods across multiple tasks on a variety
of widely used datasets in inverse rendering. This substantiates its efficacy
and broad applicability, highlighting its potential as an influential tool in
relighting and reconstruction. Project page: https://3dgir.github.io
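The abstract does not spell out how a surface normal is read off a 3D Gaussian, but a common convention in 3DGS-based inverse rendering is to treat the ellipsoid's shortest principal axis as the local normal, since optimized Gaussians tend to flatten onto surfaces. A minimal sketch of that idea (not the paper's exact self-regularization; the function name and inputs are illustrative assumptions):

```python
import numpy as np

def gaussian_normal(rotation: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Approximate a surface normal for one 3D Gaussian.

    rotation: 3x3 rotation matrix whose columns are the Gaussian's
              principal axes.
    scales:   per-axis standard deviations, shape (3,).

    The covariance is R @ diag(scales**2) @ R.T, so its smallest-eigenvalue
    eigenvector is simply the column of R with the smallest scale -- the
    ellipsoid's shortest axis, which approximates the surface normal for
    disc-like (flattened) Gaussians.
    """
    shortest = int(np.argmin(scales))
    n = rotation[:, shortest]
    return n / np.linalg.norm(n)

# Example: a disc-like Gaussian flattened along z has normal +z.
R = np.eye(3)
s = np.array([1.0, 1.0, 0.05])
print(gaussian_normal(R, s))
```

For axis-aligned Gaussians this reduces to picking the axis with the smallest scale; for rotated ones it returns the corresponding column of the rotation matrix, so no eigendecomposition is needed.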
Related papers
- GSD: View-Guided Gaussian Splatting Diffusion for 3D Reconstruction [52.04103235260539]
We present a diffusion model approach based on Gaussian Splatting representation for 3D object reconstruction from a single view.
The model learns to generate 3D objects represented by sets of GS ellipsoids.
The final reconstructed objects explicitly come with high-quality 3D structure and texture, and can be efficiently rendered in arbitrary views.
arXiv Detail & Related papers (2024-07-05T03:43:08Z)
- GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction [70.65250036489128]
3D semantic occupancy prediction aims to obtain 3D fine-grained geometry and semantics of the surrounding scene.
We propose an object-centric representation to describe 3D scenes with sparse 3D semantic Gaussians.
GaussianFormer achieves comparable performance with state-of-the-art methods with only 17.8% - 24.8% of their memory consumption.
arXiv Detail & Related papers (2024-05-27T17:59:51Z)
- Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and compact surface reconstruction in scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z)
- 3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting [58.95801720309658]
In this paper, we present an implicit surface reconstruction method with 3D Gaussian Splatting (3DGS), namely 3DGSR.
The key insight is incorporating an implicit signed distance field (SDF) within 3D Gaussians to enable them to be aligned and jointly optimized.
Our experimental results demonstrate that our 3DGSR method enables high-quality 3D surface reconstruction while preserving the efficiency and rendering quality of 3DGS.
arXiv Detail & Related papers (2024-03-30T16:35:38Z)
- Isotropic Gaussian Splatting for Real-Time Radiance Field Rendering [15.498640737050412]
The proposed method can be applied in a wide range of applications, such as 3D reconstruction, view synthesis, and dynamic object modeling.
The experiments confirm that the proposed method is about 100X faster without losing geometry representation accuracy.
arXiv Detail & Related papers (2024-03-21T09:02:31Z)
- Sparse-view CT Reconstruction with 3D Gaussian Volumetric Representation [13.667470059238607]
Sparse-view CT is a promising strategy for reducing the radiation dose of traditional CT scans.
Recently, 3D Gaussians have been applied to model complex natural scenes.
We investigate their potential for sparse-view CT reconstruction.
arXiv Detail & Related papers (2023-12-25T09:47:33Z)
- pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction [26.72289913260324]
pixelSplat is a feed-forward model that learns to reconstruct 3D radiance fields parameterized by 3D Gaussian primitives from pairs of images.
Our model features real-time and memory-efficient rendering for scalable training as well as fast 3D reconstruction at inference time.
arXiv Detail & Related papers (2023-12-19T17:03:50Z)
- GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting [51.96353586773191]
We introduce GS-SLAM, which first utilizes a 3D Gaussian representation in a Simultaneous Localization and Mapping (SLAM) system.
Our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering.
Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica and TUM-RGBD datasets.
arXiv Detail & Related papers (2023-11-20T12:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.