Neural Relighting with Subsurface Scattering by Learning the Radiance
Transfer Gradient
- URL: http://arxiv.org/abs/2306.09322v1
- Date: Thu, 15 Jun 2023 17:56:04 GMT
- Title: Neural Relighting with Subsurface Scattering by Learning the Radiance
Transfer Gradient
- Authors: Shizhan Zhu, Shunsuke Saito, Aljaz Bozic, Carlos Aliaga, Trevor
Darrell, Christoph Lassner
- Abstract summary: We propose a novel framework for learning the radiance transfer field via volume rendering.
We will publicly release our code and a novel light stage dataset of objects with subsurface scattering effects.
- Score: 73.52585139592398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing and relighting objects and scenes under varying lighting
conditions is challenging: existing neural rendering methods often cannot
handle the complex interactions between materials and light. Incorporating
pre-computed radiance transfer techniques enables global illumination, but
still struggles with materials with subsurface scattering effects. We propose a
novel framework for learning the radiance transfer field via volume rendering
and utilizing various appearance cues to refine geometry end-to-end. This
framework extends relighting and reconstruction capabilities to handle a wider
range of materials in a data-driven fashion. The resulting models produce
plausible rendering results in existing and novel conditions. We will publicly
release our code and a novel light stage dataset of objects with subsurface
scattering effects.
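The radiance transfer formulation the abstract builds on exploits the linearity of light transport: under a fixed scene, the outgoing radiance is a linear function of the incident lighting, so relighting reduces to a matrix-vector product between per-pixel transfer responses (e.g. one-light-at-a-time light-stage captures) and the new per-light intensities. A minimal numerical sketch of that linearity, using hypothetical random arrays rather than the paper's learned transfer field:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_lights = 4, 8

# Transfer matrix T: column j holds the pixel responses captured with
# only light j turned on (an OLAT light-stage frame).
T = rng.random((n_pixels, n_lights))

# A novel lighting condition, expressed as per-light intensities.
light = rng.random(n_lights)

# Relighting is a linear combination of the per-light responses.
relit = T @ light

# Because transport is linear, scaling every light scales the radiance.
assert np.allclose(T @ (2.0 * light), 2.0 * relit)
```

The paper's contribution is learning a continuous, differentiable version of this transfer (and its gradient) via volume rendering, rather than storing discrete per-pixel responses as above.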
Related papers
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient
Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z) - Neural Fields meet Explicit Geometric Representation for Inverse
Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Neural Microfacet Fields for Inverse Rendering [54.15870869037466]
We present a method for recovering materials, geometry, and environment illumination from images of a scene.
Our method uses a microfacet reflectance model within a volumetric setting by treating each sample along the ray as a (potentially non-opaque) surface.
arXiv Detail & Related papers (2023-03-31T05:38:13Z) - NeILF++: Inter-Reflectable Light Fields for Geometry and Material
Estimation [36.09503501647977]
We formulate the lighting of a static scene as one neural incident light field (NeILF) and one outgoing neural radiance field (NeRF).
The proposed method is able to achieve state-of-the-art results in terms of geometry reconstruction quality, material estimation accuracy, and the fidelity of novel view rendering.
arXiv Detail & Related papers (2023-03-30T04:59:48Z) - NeILF: Neural Incident Light Field for Physically-based Material
Estimation [31.230609753253713]
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry.
In the framework, we represent scene lightings as the Neural Incident Light Field (NeILF) and material properties as the surface BRDF modelled by multi-layer perceptrons.
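NeILF-style methods evaluate the rendering integral at each surface point: outgoing radiance is the incident light field weighted by the BRDF and the cosine foreshortening term, integrated over the hemisphere. A hedged Monte Carlo sketch of that integral, with constant stand-ins for the incident light and BRDF (the paper uses learned MLPs for both):

```python
import numpy as np

def sample_hemisphere(rng, n):
    """Uniformly sample n unit directions on the upper hemisphere (z >= 0)."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[:, 2] = np.abs(v[:, 2])  # reflect lower hemisphere upward
    return v

def outgoing_radiance(incident_light, brdf, normal, n_samples=100_000, seed=0):
    """Monte Carlo estimate of L_o = integral f * L_i * (n . w_i) dw_i.

    incident_light and brdf are constants here, standing in for the
    learned NeILF and BRDF networks.
    """
    rng = np.random.default_rng(seed)
    dirs = sample_hemisphere(rng, n_samples)
    cos_theta = dirs @ normal
    pdf = 1.0 / (2.0 * np.pi)  # uniform-hemisphere sampling density
    return np.mean(brdf * incident_light * cos_theta / pdf)

# Lambertian BRDF (albedo/pi) under uniform unit incident light: the
# exact integral is albedo = 1, so the estimate should be close to 1.
L_out = outgoing_radiance(incident_light=1.0, brdf=1.0 / np.pi,
                          normal=np.array([0.0, 0.0, 1.0]))
```

The design choice in these papers is to make both factors of the integrand differentiable networks so that multi-view photometric loss can be backpropagated through this estimator.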
arXiv Detail & Related papers (2022-03-14T15:23:04Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - NeRFactor: Neural Factorization of Shape and Reflectance Under an
Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.