Object-Centric Neural Scene Rendering
- URL: http://arxiv.org/abs/2012.08503v1
- Date: Tue, 15 Dec 2020 18:55:02 GMT
- Title: Object-Centric Neural Scene Rendering
- Authors: Michelle Guo, Alireza Fathi, Jiajun Wu, Thomas Funkhouser
- Abstract summary: We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method for composing photorealistic scenes from captured images
of objects. Our work builds upon neural radiance fields (NeRFs), which
implicitly model the volumetric density and directionally-emitted radiance of a
scene. While NeRFs synthesize realistic pictures, they only model static scenes
and are closely tied to specific imaging conditions. This property makes NeRFs
hard to generalize to new scenarios, including new lighting or new arrangements
of objects. Instead of learning a scene radiance field as a NeRF does, we
propose to learn object-centric neural scattering functions (OSFs), a
representation that models per-object light transport implicitly using a
lighting- and view-dependent neural network. This enables rendering scenes even
when objects or lights move, without retraining. Combined with a volumetric
path tracing procedure, our framework is capable of rendering both intra- and
inter-object light transport effects including occlusions, specularities,
shadows, and indirect illumination. We evaluate our approach on scene
composition and show that it generalizes to novel illumination conditions,
producing photorealistic, physically accurate renderings of multi-object
scenes.
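As a rough illustration of the idea, the sketch below shows what a lighting- and view-dependent scattering function can look like: a small network mapping a 3D point, an incoming light direction, and an outgoing view direction to a volume density and an RGB scattering value. This is a minimal sketch under assumed architecture choices (layer widths, positional-encoding bands); it is not the network described in the paper.

```python
# Hypothetical sketch of an object-centric scattering function (OSF):
# a lighting- and view-dependent network. Layer widths and encoding
# settings are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def positional_encoding(x: torch.Tensor, n_freqs: int = 6) -> torch.Tensor:
    """NeRF-style sinusoidal encoding of each input coordinate."""
    freqs = 2.0 ** torch.arange(n_freqs, device=x.device)
    angles = x[..., None] * freqs                  # (..., dim, n_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)               # (..., dim * 2 * n_freqs)


class OSF(nn.Module):
    """Maps a 3D point, an incoming light direction, and an outgoing
    view direction to a volume density and an RGB scattering value."""

    def __init__(self, n_freqs: int = 6, hidden: int = 256):
        super().__init__()
        self.n_freqs = n_freqs
        enc_dim = 3 * 2 * n_freqs                  # per 3-vector input
        self.trunk = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        # Conditioning on both light and view directions is what makes
        # the representation relightable, unlike a vanilla NeRF.
        self.scatter_head = nn.Sequential(
            nn.Linear(hidden + 2 * enc_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x, omega_in, omega_out):
        h = self.trunk(positional_encoding(x, self.n_freqs))
        sigma = F.relu(self.density_head(h))       # nonnegative density
        dirs = torch.cat([
            positional_encoding(omega_in, self.n_freqs),
            positional_encoding(omega_out, self.n_freqs),
        ], dim=-1)
        rgb = self.scatter_head(torch.cat([h, dirs], dim=-1))
        return sigma, rgb


# Smoke test: query 1024 points with unit light/view directions.
osf = OSF()
pts = torch.rand(1024, 3)
wi = F.normalize(torch.randn(1024, 3), dim=-1)
wo = F.normalize(torch.randn(1024, 3), dim=-1)
sigma, rgb = osf(pts, wi, wo)                      # (1024, 1), (1024, 3)
```

A composed scene would then evaluate one such network per object along each ray, accumulating density and scattered radiance with standard volume-rendering weights; because lighting enters as an input, objects and lights can move without retraining.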
Related papers
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering
The correct insertion of virtual objects into images of real-world scenes requires a deep understanding of the scene's lighting, geometry, and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z) - Relighting Scenes with Object Insertions in Neural Radiance Fields
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z) - LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis
We present Lighting-Aware Neural Field (LANe) for compositional synthesis of driving scenes.
We learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs.
We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator.
arXiv Detail & Related papers (2023-04-06T17:59:25Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and an explicit mesh (reconstructed from the underlying neural field) to model secondary rays that produce higher-order lighting effects such as cast shadows (see the sketch after this list).
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered, disentangled scene parameters improve significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis
We explicitly model light transport between scene surfaces, relying on traditional integration schemes and the rendering equation (restated after this list) to reconstruct a scene.
By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition.
We validate the proposed approach for scene editing, relighting and reflectance estimation learned from synthetic and captured views on a subset of NeRV's datasets.
arXiv Detail & Related papers (2021-04-28T03:47:48Z) - D-NeRF: Neural Radiance Fields for Dynamic Scenes
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z) - Neural Reflectance Fields for Appearance Acquisition
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
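The urban-scenes entry above splits shading between a neural field for primary rays and an explicit mesh for secondary rays. The sketch below illustrates that control flow only; query_neural_field and mesh_occludes are hypothetical stand-ins, not that paper's API.

```python
# Hypothetical control-flow sketch of the hybrid design: a neural
# field answers primary rays; an explicit mesh answers secondary
# (shadow) rays. query_neural_field and mesh_occludes are stand-in
# helpers, not an API from the paper.
import numpy as np


def query_neural_field(origin, direction):
    """Stand-in: return (hit point, normal, albedo) for a primary ray.
    A real system would ray-march the neural field here."""
    hit = origin + 1.0 * direction                 # fake depth of 1.0
    return hit, np.array([0.0, 0.0, 1.0]), np.array([0.8, 0.7, 0.6])


def mesh_occludes(origin, direction):
    """Stand-in: True if the explicit mesh blocks this shadow ray.
    A real system would call a triangle-mesh intersector here."""
    return False


def shade(ray_origin, ray_dir, light_dir, light_rgb):
    # Primary visibility, geometry, and materials from the neural field.
    hit, normal, albedo = query_neural_field(ray_origin, ray_dir)
    # Secondary ray toward the light, tested against the explicit mesh;
    # this is what produces hard cast shadows.
    eps = 1e-3                                     # avoid self-intersection
    visible = 0.0 if mesh_occludes(hit + eps * light_dir, light_dir) else 1.0
    cos_term = max(float(np.dot(normal, light_dir)), 0.0)
    return albedo * light_rgb * visible * cos_term
```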
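For reference, the rendering equation that the Neural Ray-Tracing entry above builds on is the standard formulation (Kajiya, 1986): outgoing radiance is emitted radiance plus incoming radiance weighted by the BRDF and the cosine term, integrated over the hemisphere.

```latex
% The rendering equation: outgoing radiance at surface point x in
% direction \omega_o is emitted radiance plus incoming radiance
% weighted by the BRDF and cosine term over the hemisphere \Omega.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (n \cdot \omega_i)\, \mathrm{d}\omega_i
```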