Multi-view Gradient Consistency for SVBRDF Estimation of Complex Scenes
under Natural Illumination
- URL: http://arxiv.org/abs/2202.13017v1
- Date: Fri, 25 Feb 2022 23:49:39 GMT
- Title: Multi-view Gradient Consistency for SVBRDF Estimation of Complex Scenes
under Natural Illumination
- Authors: Alen Joy and Charalambos Poullis
- Abstract summary: This paper presents a process for estimating the spatially varying surface reflectance of complex scenes observed under natural illumination.
An end-to-end process uses a model of the scene's geometry and several images capturing the scene's surfaces.
Experiments show that our technique produces realistic results for arbitrary outdoor scenes with complex geometry.
- Score: 6.282068591820945
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a process for estimating the spatially varying surface
reflectance of complex scenes observed under natural illumination. In contrast
to previous methods, our process is not limited to scenes viewed under
controlled lighting conditions but can handle complex indoor and outdoor scenes
viewed under arbitrary illumination conditions. An end-to-end process uses a
model of the scene's geometry and several images capturing the scene's surfaces
from arbitrary viewpoints and under various natural illumination conditions. We
develop a differentiable path tracer that leverages least-square conformal
mapping for handling multiple disjoint objects appearing in the scene. We
follow a two-step optimization process and introduce a multi-view gradient
consistency loss which results in up to 30-50% improvement in the image
reconstruction loss and can further achieve better disentanglement of the
diffuse and specular BRDFs compared to other state-of-the-art methods. We demonstrate
the process in real-world indoor and outdoor scenes from images in the wild and
show that we can produce realistic renders consistent with actual images using
the estimated reflectance properties. Experiments show that our technique
produces realistic results for arbitrary outdoor scenes with complex geometry.
The source code is publicly available at:
https://gitlab.com/alen.joy/multi-view-gradient-consistency-for-svbrdf-estimation-of-complex-scenes-under-natural-illumination
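Note on the multi-view gradient consistency loss: the abstract does not give its exact form, so the PyTorch sketch below is only one plausible reading. It assumes the SVBRDF parameters live in a shared texture atlas and that the term rewards agreement between the gradients that each view's photometric loss sends to that atlas; the function name and the cosine-based penalty are illustrative assumptions, not the authors' definition (the linked repository contains the actual implementation).

```python
import torch
import torch.nn.functional as F

def gradient_consistency(per_view_losses, svbrdf_atlas):
    """Illustrative multi-view gradient consistency term (not the paper's
    exact formulation): penalize per-view photometric gradients on the
    shared SVBRDF atlas that disagree with the mean gradient over views."""
    grads = []
    for loss in per_view_losses:
        # Gradient of one view's reconstruction loss w.r.t. the shared atlas.
        (g,) = torch.autograd.grad(loss, svbrdf_atlas,
                                   retain_graph=True, create_graph=True)
        grads.append(g.reshape(-1))
    grads = torch.stack(grads)                    # [num_views, num_params]
    mean_grad = grads.mean(dim=0, keepdim=True)   # [1, num_params]
    # 0 when every view pulls the atlas in the same direction, up to 2 otherwise.
    return (1.0 - F.cosine_similarity(grads, mean_grad, dim=1)).mean()
```

In a two-step optimization one might first fit the atlas with a plain photometric loss and then add this term with a small weight; this reading of the pipeline is likewise an assumption, not the published recipe.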
Related papers
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
The correct insertion of virtual objects into images of real-world scenes requires a deep understanding of the scene's lighting, geometry, and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z)
- Holistic Inverse Rendering of Complex Facade via Aerial 3D Scanning [38.72679977945778]
We use multi-view aerial images to reconstruct the geometry, lighting, and material of facades using neural signed distance fields (SDFs).
The experiment demonstrates the superior quality of our method on facade holistic inverse rendering, novel view synthesis, and scene editing compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-11-20T15:03:56Z)
- Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering [63.24476194987721]
Inverse rendering, the process of inferring scene properties from images, is a challenging inverse problem.
Most existing solutions incorporate priors into the inverse-rendering pipeline to encourage plausible solutions.
We propose a novel scheme that integrates a denoising probabilistic diffusion model pre-trained on natural illumination maps into an optimization framework.
arXiv Detail & Related papers (2023-09-30T12:39:28Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows. A generic sketch of this primary/secondary split appears after this list.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Shape and Reflectance Reconstruction in Uncontrolled Environments by Differentiable Rendering [27.41344744849205]
We propose an efficient method to reconstruct the scene's 3D geometry and reflectance from multi-view photography using conventional hand-held cameras.
Our method also shows superior performance compared to state-of-the-art alternatives in novel view synthesis, both visually and quantitatively.
arXiv Detail & Related papers (2021-10-25T14:09:10Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images [59.53382863519189]
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
arXiv Detail & Related papers (2020-07-20T05:38:11Z)
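The "Neural Fields meet Explicit Geometric Representation" entry above splits work between a neural field for primary rays and an explicit mesh for secondary rays such as shadow rays. The sketch below illustrates that split in generic PyTorch; field and mesh_visibility are hypothetical stand-ins (a density/colour query and a mesh ray-caster), not that paper's API.

```python
import torch

def hybrid_pixel_radiance(field, mesh_visibility, ray_o, ray_d, light_dir,
                          n_samples=64, far=4.0):
    """Hybrid rendering sketch: a neural field answers the primary ray,
    an explicit mesh answers the secondary (shadow) rays."""
    # Uniformly sample the neural field along the primary ray.
    t = torch.linspace(0.0, far, n_samples)
    pts = ray_o + t[:, None] * ray_d                      # [S, 3]
    density, rgb = field(pts)                             # [S], [S, 3]
    alpha = 1.0 - torch.exp(-density * (far / n_samples))
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = trans * alpha                               # volume-rendering weights
    # Secondary rays: shadow test toward the light against the explicit mesh.
    visible = mesh_visibility(pts, light_dir.expand_as(pts))  # [S], 1 = unoccluded
    return (weights[:, None] * rgb * visible[:, None]).sum(dim=0)
```

For a quick smoke test one can pass, for example, field = lambda p: (torch.ones(len(p)), torch.ones(len(p), 3)) and mesh_visibility = lambda o, d: torch.ones(len(o)) in place of a trained field and a real ray-caster.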
This list is automatically generated from the titles and abstracts of the papers on this site.