GlossyGS: Inverse Rendering of Glossy Objects with 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2410.13349v1
- Date: Thu, 17 Oct 2024 09:00:29 GMT
- Title: GlossyGS: Inverse Rendering of Glossy Objects with 3D Gaussian Splatting
- Authors: Shuichang Lai, Letian Huang, Jie Guo, Kai Cheng, Bowen Pan, Xiaoxiao Long, Jiangjing Lyu, Chengfei Lv, Yanwen Guo
- Abstract summary: GlossyGS aims to precisely reconstruct the geometry and materials of glossy objects by integrating material priors.
We demonstrate through quantitative analysis and qualitative visualization that the proposed method is effective in reconstructing high-fidelity geometries and materials of glossy objects.
- Score: 21.23724172779984
- License:
- Abstract: Reconstructing objects from posed images is a crucial and complex task in computer graphics and computer vision. While NeRF-based neural reconstruction methods have exhibited impressive reconstruction ability, they tend to be time-consuming. Recent strategies have adopted 3D Gaussian Splatting (3D-GS) for inverse rendering, leading to quick and effective outcomes. However, these techniques generally have difficulty in producing believable geometries and materials for glossy objects, a challenge that stems from the inherent ambiguities of inverse rendering. To address this, we introduce GlossyGS, an innovative 3D-GS-based inverse rendering framework that aims to precisely reconstruct the geometry and materials of glossy objects by integrating material priors. The key idea is the use of a micro-facet geometry segmentation prior, which helps to reduce the intrinsic ambiguities and improve the decomposition of geometries and materials. Additionally, we introduce a normal map prefiltering strategy to more accurately simulate the normal distribution of reflective surfaces. These strategies are integrated into a hybrid geometry and material representation that employs both explicit and implicit methods to depict glossy objects. We demonstrate through quantitative analysis and qualitative visualization that the proposed method is effective in reconstructing high-fidelity geometries and materials of glossy objects, and that it performs favorably against state-of-the-art methods.
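For background on two ingredients named in the abstract, here is a minimal sketch, assuming a standard GGX micro-facet normal distribution and a naive average-and-renormalize prefiltering of a normal map; the function names, the choice of GGX, and the 2x2 box filter are illustrative assumptions and do not reproduce the paper's actual priors or prefiltering strategy.

```python
import numpy as np

def ggx_ndf(cos_theta_h: np.ndarray, roughness: float) -> np.ndarray:
    """Standard GGX/Trowbridge-Reitz normal distribution function D(h).

    cos_theta_h: dot(n, h) clamped to [0, 1]; roughness: perceptual roughness in (0, 1].
    """
    alpha = roughness * roughness
    a2 = alpha * alpha
    denom = cos_theta_h * cos_theta_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def prefilter_normal_map(normals: np.ndarray) -> np.ndarray:
    """One level of naive normal-map prefiltering: 2x2 box-average, then renormalize.

    normals: (H, W, 3) array of unit normals with H and W even.
    """
    h, w, _ = normals.shape
    blocks = normals.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
    length = np.linalg.norm(blocks, axis=-1, keepdims=True)
    return blocks / np.maximum(length, 1e-8)

if __name__ == "__main__":
    # Toy checks: a rougher surface spreads D(h) away from the mirror direction,
    # and averaging disagreeing normals yields a renormalized blended direction.
    cos_h = np.linspace(0.0, 1.0, 5)
    print("D(h) at roughness 0.1:", ggx_ndf(cos_h, 0.1))
    print("D(h) at roughness 0.5:", ggx_ndf(cos_h, 0.5))
    bumpy = np.array([[[0.0, 0.0, 1.0], [0.7071, 0.0, 0.7071]],
                      [[0.0, 0.7071, 0.7071], [0.0, 0.0, 1.0]]])
    print("prefiltered normal:", np.round(prefilter_normal_map(bumpy)[0, 0], 3))
```

In prefiltering-based shading, the amount by which the averaged normal shrinks before renormalization is typically converted into an effective roughness; this is presumably the kind of information a normal map prefiltering strategy aims to preserve for reflective surfaces.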
Related papers
- GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering [83.69136534797686]
We present GUS-IR, a novel framework designed to address the inverse rendering problem for complicated scenes featuring rough and glossy surfaces.
This paper starts by analyzing and comparing two prominent shading techniques popularly used for inverse rendering, forward shading and deferred shading.
We propose a unified shading solution that combines the advantages of both techniques for better decomposition.
arXiv Detail & Related papers (2024-11-12T01:51:05Z)
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially boosts the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- Multi-times Monte Carlo Rendering for Inter-reflection Reconstruction [51.911195773164245]
Inverse rendering methods have achieved remarkable performance in reconstructing high-fidelity 3D objects with disentangled geometries, materials, and environmental light.
We propose Ref-MC2, which introduces multi-times Monte Carlo sampling to comprehensively compute the environmental illumination (see the Monte Carlo lighting sketch after this list).
We also show downstream applications, e.g., relighting and material editing, to illustrate the disentanglement ability of our method.
arXiv Detail & Related papers (2024-07-08T09:27:34Z)
- OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction [54.706361479680055]
We introduce the OpenMaterial dataset, comprising 1001 objects made of 295 distinct materials.
OpenMaterial provides comprehensive annotations, including 3D shape, material type, camera pose, depth, and object mask.
It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials.
arXiv Detail & Related papers (2024-06-13T07:46:17Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- Multi-View Neural Surface Reconstruction with Structured Light [7.709526244898887]
Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision.
We introduce active sensing with structured light (SL) into multi-view 3D object reconstruction based on DR to learn the unknown geometry and appearance of arbitrary scenes and camera poses.
Our method realizes high reconstruction accuracy in the textureless region and reduces efforts for camera pose calibration.
arXiv Detail & Related papers (2022-11-22T03:10:46Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF Material from Images via Differentiable Path Tracing [16.975014467319443]
Differentiable path tracing is an appealing framework as it can reproduce complex appearance effects.
We show how to use differentiable ray tracing to refine an initial coarse mesh and a per-mesh-facet material representation (a toy gradient-refinement sketch appears after this list).
We also show how to refine initial reconstructions of real-world objects in unconstrained environments.
arXiv Detail & Related papers (2020-12-06T18:55:35Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image [102.44347847154867]
We propose a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
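The Ref-MC2 entry above mentions Monte Carlo sampling of environmental illumination. As a minimal, self-contained sketch of what a single-bounce estimate of that term looks like for a Lambertian surface, assuming a toy analytic environment, cosine-weighted hemisphere sampling, and illustrative function names (this is not the paper's multi-times scheme):

```python
import numpy as np

def cosine_sample_hemisphere(rng: np.random.Generator, n: int) -> np.ndarray:
    """Cosine-weighted directions around the +z axis (pdf = cos(theta) / pi)."""
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = np.sqrt(np.maximum(0.0, 1.0 - u1))
    return np.stack([x, y, z], axis=-1)

def environment_radiance(directions: np.ndarray) -> np.ndarray:
    """Toy environment: a bright 'sky' overhead, dimmer toward the horizon."""
    return 0.2 + 0.8 * np.maximum(directions[:, 2], 0.0)

def estimate_diffuse_lighting(albedo: float, num_samples: int = 4096, seed: int = 0) -> float:
    """Monte Carlo estimate of outgoing radiance for a Lambertian surface.

    L_o = (albedo / pi) * integral over the hemisphere of L_i(w) * cos(theta) dw.
    With cosine-weighted sampling the cos/pdf factors cancel, so the estimator
    reduces to averaging albedo * L_i over the sampled directions.
    """
    rng = np.random.default_rng(seed)
    dirs = cosine_sample_hemisphere(rng, num_samples)
    return float(np.mean(albedo * environment_radiance(dirs)))

if __name__ == "__main__":
    print("Estimated outgoing radiance:", estimate_diffuse_lighting(albedo=0.7))
```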
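The Shape From Tracing entry above refers to refining an initial reconstruction by differentiating the rendering process. The toy sketch below illustrates only that general idea, fitting a Lambertian albedo to target pixels by gradient descent through a tiny analytic shading model; the shading model, learning rate, and variable names are assumptions and stand in for the paper's differentiable path tracer and mesh refinement.

```python
import numpy as np

# Toy differentiable image formation: Lambertian shading under one directional light.
# pixel_color = albedo * max(dot(n, l), 0), per color channel.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.7071, 0.7071],
                    [0.7071, 0.0, 0.7071]])
light_dir = np.array([0.0, 0.0, 1.0])
target_albedo = np.array([0.8, 0.3, 0.1])           # ground truth we hope to recover
shading = np.maximum(normals @ light_dir, 0.0)       # (3,) per-pixel cosine term
target = shading[:, None] * target_albedo            # (3, 3) target pixel colors

albedo = np.full(3, 0.5)                             # coarse initial material estimate
lr = 0.5
for step in range(200):
    rendered = shading[:, None] * albedo             # forward render
    residual = rendered - target                     # per-pixel, per-channel error
    # Analytic gradient of each channel's mean squared error w.r.t. its albedo.
    grad = 2.0 * (shading[:, None] * residual).mean(axis=0)
    albedo -= lr * grad                              # gradient-descent refinement

print("recovered albedo:", np.round(albedo, 3))      # approaches [0.8, 0.3, 0.1]
```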
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.