Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and
View Synthesis
- URL: http://arxiv.org/abs/2104.13562v1
- Date: Wed, 28 Apr 2021 03:47:48 GMT
- Title: Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and
View Synthesis
- Authors: Julian Knodt, Seung-Hwan Baek, Felix Heide
- Abstract summary: We explicitly model the light transport between scene surfaces and we rely on traditional integration schemes and the rendering equation to reconstruct a scene.
By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition.
We validate the proposed approach for scene editing, relighting and reflectance estimation learned from synthetic and captured views on a subset of NeRV's datasets.
- Score: 28.356700318603565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent neural rendering methods have demonstrated accurate view interpolation
by predicting volumetric density and color with a neural network. Although such
volumetric representations can be supervised on static and dynamic scenes,
existing methods implicitly bake the complete scene light transport into a
single neural network for a given scene, including surface modeling,
bidirectional scattering distribution functions, and indirect lighting effects.
In contrast to traditional rendering pipelines, this prohibits changing the
surface reflectance or illumination, or composing other objects into the scene.
In this work, we explicitly model the light transport between scene surfaces
and we rely on traditional integration schemes and the rendering equation to
reconstruct a scene. The proposed method allows BSDF recovery under unknown
lighting conditions and supports classic light transport algorithms such as
path tracing. By learning
decomposed transport with surface representations established in conventional
rendering methods, the method naturally facilitates editing shape, reflectance,
lighting and scene composition. The method outperforms NeRV for relighting
under known lighting conditions, and produces realistic reconstructions for
relit and edited scenes. We validate the proposed approach for scene editing,
relighting and reflectance estimation learned from synthetic and captured views
on a subset of NeRV's datasets.
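For context, the rendering equation referenced in the abstract is the standard surface light-transport integral; the block below only restates that background, not the paper's specific parameterization.

```latex
% Outgoing radiance at surface point x in direction w_o: emission plus the
% BSDF-weighted incident radiance integrated over the hemisphere Omega.
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
```

Traditional integration schemes such as path tracing estimate this integral by Monte Carlo sampling of incident directions; in a decomposed setup like the one described above, the BSDF f_r (and parts of the incident radiance) can be predicted by networks while the estimator stays classical, which is what keeps reflectance, lighting and composition editable.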
Related papers
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will publicly release our code and a novel light stage dataset of objects with subsurface scattering effects.
arXiv Detail & Related papers (2023-06-15T17:56:04Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
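A minimal sketch of that primary/secondary-ray split, assuming a neural field that returns a hit point, normal and albedo for primary rays and a triangle mesh (queried here via trimesh) used only for shadow rays; `neural_field.trace` and the other names are hypothetical, not the paper's API.

```python
import numpy as np
import trimesh  # used only as a convenient ray-mesh intersector


def shade_pixel(neural_field, mesh, ray_origin, ray_dir, light_pos, light_rgb):
    """Hybrid shading sketch: neural field for the primary ray,
    explicit mesh for the secondary (shadow) ray."""
    # Primary ray: the neural field predicts hit point, normal and albedo.
    hit, normal, albedo = neural_field.trace(ray_origin, ray_dir)  # hypothetical API
    if hit is None:
        return np.zeros(3)  # ray missed the scene

    # Secondary ray: test visibility toward the light against the explicit mesh.
    to_light = light_pos - hit
    dist = float(np.linalg.norm(to_light))
    to_light = to_light / dist
    occluded = mesh.ray.intersects_any(
        ray_origins=[hit + 1e-4 * normal],   # offset to avoid self-intersection
        ray_directions=[to_light],
    )[0]  # note: also counts blockers beyond the light; acceptable for a sketch
    if occluded:
        return np.zeros(3)  # point is in shadow

    # Simple Lambertian direct lighting with inverse-square falloff.
    cos_term = max(float(normal @ to_light), 0.0)
    return albedo * light_rgb * cos_term / (dist * dist)
```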
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation [36.09503501647977]
We formulate the lighting of a static scene as one neural incident light field (NeILF) and one outgoing neural radiance field (NeRF).
The proposed method is able to achieve state-of-the-art results in terms of geometry reconstruction quality, material estimation accuracy, and the fidelity of novel view rendering.
arXiv Detail & Related papers (2023-03-30T04:59:48Z)
- ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting [9.145875902703345]
We introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections.
We first propose a novel neural renderer with decomposed rendering to learn the interaction between surface and environment lighting.
We then propose an SDF-based neural surface model that leverages this learned neural renderer to represent general scenes.
arXiv Detail & Related papers (2023-03-23T04:12:07Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- NeILF: Neural Incident Light Field for Physically-based Material Estimation [31.230609753253713]
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry.
In the framework, we represent scene lighting as the Neural Incident Light Field (NeILF) and material properties as a surface BRDF modelled by multi-layer perceptrons.
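A rough sketch of that split between a lighting network and a material network, assuming one MLP that returns incident radiance for a point and direction and another that returns BRDF values, combined by a plain Monte Carlo estimate of the rendering equation; both callables and their signatures are illustrative, not the paper's interface.

```python
import numpy as np


def shade_point(incident_light_mlp, brdf_mlp, x, normal, w_out, n_samples=64):
    """Monte Carlo shading with a learned incident light field and a learned BRDF.
    Both network arguments are placeholders for trained models."""
    rng = np.random.default_rng(0)
    radiance = np.zeros(3)
    for _ in range(n_samples):
        # Uniformly sample a direction on the hemisphere around the normal.
        w_in = rng.normal(size=3)
        w_in = w_in / np.linalg.norm(w_in)
        if w_in @ normal < 0.0:
            w_in = -w_in
        L_in = incident_light_mlp(x, w_in)   # incident radiance (NeILF role)
        f_r = brdf_mlp(x, w_in, w_out)       # BRDF value (material MLP role)
        radiance += f_r * L_in * (w_in @ normal)
    # Uniform hemisphere sampling has pdf 1 / (2*pi).
    return radiance * (2.0 * np.pi / n_samples)
```

Uniform hemisphere sampling keeps the sketch short; in practice, importance sampling the BRDF or the light field is the usual choice.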
arXiv Detail & Related papers (2022-03-14T15:23:04Z)
- Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
arXiv Detail & Related papers (2020-12-15T18:55:02Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
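A compressed sketch of the ray-marching formulation in the last entry above, assuming a `field(x)` callable that returns density, normal and diffuse reflectance at a 3D point, with shading reduced to a single point light; the names and the shading model are illustrative, not the paper's renderer.

```python
import numpy as np


def march_ray(field, origin, direction, light_pos, light_rgb,
              n_steps=128, t_far=4.0):
    """Volume-render one ray through a reflectance field (sketch).
    `field(x)` is a placeholder returning (density, normal, reflectance)."""
    ts = np.linspace(0.0, t_far, n_steps)
    dt = float(ts[1] - ts[0])
    transmittance = 1.0
    color = np.zeros(3)
    for t in ts:
        x = origin + t * direction
        sigma, normal, reflectance = field(x)
        alpha = 1.0 - np.exp(-sigma * dt)             # segment opacity
        to_light = light_pos - x
        to_light = to_light / np.linalg.norm(to_light)
        shading = max(float(normal @ to_light), 0.0)  # Lambertian term (falloff omitted)
        color += transmittance * alpha * reflectance * light_rgb * shading
        transmittance *= 1.0 - alpha                  # accumulate transparency
        if transmittance < 1e-3:
            break  # early exit once the ray is effectively opaque
    return color
```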