Neural-PBIR Reconstruction of Shape, Material, and Illumination
- URL: http://arxiv.org/abs/2304.13445v5
- Date: Thu, 1 Feb 2024 08:47:44 GMT
- Title: Neural-PBIR Reconstruction of Shape, Material, and Illumination
- Authors: Cheng Sun, Guangyan Cai, Zhengqin Li, Kai Yan, Cheng Zhang, Carl
Marshall, Jia-Bin Huang, Shuang Zhao, Zhao Dong
- Abstract summary: We introduce an accurate and highly efficient object reconstruction pipeline combining neural-based object reconstruction and physics-based inverse rendering (PBIR).
Our pipeline first leverages neural SDF-based shape reconstruction to produce a high-quality but potentially imperfect object shape.
In the last stage, initialized by the neural predictions, we perform PBIR to refine the initial results and obtain the final high-quality reconstruction of object shape, material, and illumination.
- Score: 26.628189591572074
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reconstructing the shape and spatially varying surface appearances of a
physical-world object as well as its surrounding illumination based on 2D
images (e.g., photographs) of the object has been a long-standing problem in
computer vision and graphics. In this paper, we introduce an accurate and
highly efficient object reconstruction pipeline combining neural-based object
reconstruction and physics-based inverse rendering (PBIR). Our pipeline first
leverages neural SDF-based shape reconstruction to produce a high-quality but
potentially imperfect object shape. Then, we introduce a neural material and
lighting distillation stage to achieve high-quality predictions for material
and illumination. In the last stage, initialized by the neural predictions, we
perform PBIR to refine the initial results and obtain the final high-quality
reconstruction of object shape, material, and illumination. Experimental
results demonstrate that our pipeline significantly outperforms existing
methods in both quality and performance.
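The final PBIR stage refines the neural initializations by minimizing a photometric loss through a differentiable renderer. The toy sketch below illustrates that refinement loop with a single-pixel Lambertian "renderer"; the renderer, function names, and constants are illustrative stand-ins, not the paper's actual implementation.

```python
# Toy sketch of the PBIR refinement idea (stage 3): fit a material
# parameter by gradient descent on a photometric loss. The "renderer"
# here is one Lambertian shading term -- a stand-in for the paper's
# differentiable renderer. All names and values are illustrative.

def render(albedo, n_dot_l=0.8):
    """Minimal 'renderer': Lambertian reflection for a single pixel."""
    return albedo * max(0.0, n_dot_l)

def refine_albedo(observed, albedo_init, lr=0.5, steps=200):
    """Refine an initial (neural-distilled) albedo estimate by
    minimizing the squared photometric error, the way PBIR refines
    the stage-2 initialization against captured images."""
    a = albedo_init
    for _ in range(steps):
        residual = render(a) - observed   # photometric error
        grad = 2.0 * residual * 0.8       # analytic d(loss)/d(albedo)
        a -= lr * grad
    return a

true_albedo = 0.6
observed = render(true_albedo)            # synthetic "captured" pixel
recovered = refine_albedo(observed, albedo_init=0.3)
print(round(recovered, 4))                # -> 0.6
```

Even this one-parameter version shows the key property PBIR relies on: because the renderer is differentiable, any residual between rendered and observed pixels can be pushed back into the scene parameters.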
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering [26.988572852463815]
In this paper, we propose a novel end-to-end relightable neural inverse rendering system.
Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting.
arXiv Detail & Related papers (2024-09-30T09:42:10Z)
- OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction [54.706361479680055]
We introduce the OpenMaterial dataset, comprising 1001 objects made of 295 distinct materials.
OpenMaterial provides comprehensive annotations, including 3D shape, material type, camera pose, depth, and object mask.
It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials.
arXiv Detail & Related papers (2024-06-13T07:46:17Z)
- PhyRecon: Physically Plausible Neural Scene Reconstruction [81.73129450090684]
We introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations.
Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points.
Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets.
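One standard way to realize an SDF-to-surface-point transformation (the paper's exact formulation may differ) is to project a query point onto the zero level set along the normalized SDF gradient, x_surf = x - sdf(x) * grad(x)/|grad(x)|. A minimal sketch with an analytic sphere SDF standing in for a neural one:

```python
# Hedged sketch of projecting a point onto an SDF's zero level set.
# The sphere SDF below is a stand-in for a learned neural SDF, and the
# finite-difference gradient stands in for autograd.

import math

def sphere_sdf(p, radius=1.0):
    """Signed distance to a sphere centered at the origin."""
    return math.sqrt(sum(c * c for c in p)) - radius

def sdf_gradient(p, eps=1e-5):
    """Central finite differences; a neural SDF would use autograd."""
    g = []
    for i in range(3):
        hi, lo = list(p), list(p)
        hi[i] += eps
        lo[i] -= eps
        g.append((sphere_sdf(hi) - sphere_sdf(lo)) / (2 * eps))
    return g

def project_to_surface(p):
    """x_surf = x - sdf(x) * grad(x)/|grad(x)| (one projection step)."""
    d = sphere_sdf(p)
    g = sdf_gradient(p)
    norm = math.sqrt(sum(c * c for c in g)) or 1.0
    return [c - d * gc / norm for c, gc in zip(p, g)]

surf = project_to_surface([0.0, 0.0, 2.0])
print(abs(sphere_sdf(surf)) < 1e-4)  # -> True: point lies on the surface
```

The projection is exact for a true distance field; for a learned SDF it is typically iterated a few times, since the network only approximates the unit-gradient (eikonal) property.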
arXiv Detail & Related papers (2024-04-25T15:06:58Z)
- Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields [45.64333510966844]
Inverse rendering aims to recover both the geometry and the materials of objects.
We propose a novel 5D Neural Plenoptic Function (NeP) based on NeRFs and ray tracing.
Our method can reconstruct high-fidelity geometry/materials of challenging glossy objects with complex lighting interactions from nearby objects.
arXiv Detail & Related papers (2024-03-24T16:34:47Z)
- SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild [76.21063993398451]
Inverse rendering of an object based on unconstrained image collections is a long-standing challenge in computer vision and graphics.
We show that an implicit shape representation based on a multi-resolution hash encoding enables faster and more robust shape reconstruction.
Our method is class-agnostic and works on in-the-wild image collections of objects to produce relightable 3D assets.
arXiv Detail & Related papers (2024-01-18T18:01:19Z)
- NePF: Neural Photon Field for Single-Stage Inverse Rendering [6.977356702921476]
We present a novel single-stage framework, Neural Photon Field (NePF), to address the ill-posed inverse rendering from multi-view images.
NePF achieves this unification by fully utilizing the physical implication behind the weight function of neural implicit surfaces.
We evaluate our method on both real and synthetic datasets.
arXiv Detail & Related papers (2023-11-20T06:15:46Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- NeROIC: Neural Rendering of Objects from Online Image Collections [42.02832046768925]
We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects.
This enables various object-centric rendering applications such as novel-view synthesis, relighting, and harmonized background composition.
arXiv Detail & Related papers (2022-01-07T16:45:15Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.