LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis
- URL: http://arxiv.org/abs/2304.03280v1
- Date: Thu, 6 Apr 2023 17:59:25 GMT
- Title: LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis
- Authors: Akshay Krishnan, Amit Raj, Xianling Zhang, Alexandra Carlson, Nathan
Tseng, Sandhya Sridhar, Nikita Jaipuria, James Hays
- Abstract summary: We present Lighting-Aware Neural Field (LANe) for compositional synthesis of driving scenes.
We learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs.
We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator.
- Score: 65.20672798704128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural fields have recently enjoyed great success in representing and
rendering 3D scenes. However, most state-of-the-art implicit representations
model static or dynamic scenes as a whole, with minor variations. Existing work
on learning disentangled world and object neural fields does not consider the
problem of composing objects into different world neural fields in a
lighting-aware manner. We present Lighting-Aware Neural Field (LANe) for the
compositional synthesis of driving scenes in a physically consistent manner.
Specifically, we learn a scene representation that disentangles the static
background and transient elements into a world-NeRF and class-specific
object-NeRFs to allow compositional synthesis of multiple objects in the scene.
Furthermore, we explicitly designed both the world and object models to handle
lighting variation, which allows us to compose objects into scenes with
spatially varying lighting. This is achieved by constructing a light field of
the scene and using it in conjunction with a learned shader to modulate the
appearance of the object NeRFs. We demonstrate the performance of our model on
a synthetic dataset of diverse lighting conditions rendered with the CARLA
simulator, as well as a novel real-world dataset of cars collected at different
times of the day. Our approach outperforms state-of-the-art compositional scene
synthesis methods on this challenging setup, composing object-NeRFs learned in
one scene into an entirely different scene while still respecting the lighting
variations of the novel scene. For more results,
please visit our project website https://lane-composition.github.io/.
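The abstract's core mechanism, querying a light field of the scene at the object's location and passing the result to a learned shader that modulates the object-NeRF's appearance, can be sketched roughly as below. This is a minimal illustration under assumed architectures and feature sizes, not the authors' implementation; all module names, MLP widths, and the object placement transform are hypothetical.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small helper MLP (hypothetical sizes; not from the paper)."""
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )
    def forward(self, x):
        return self.net(x)

class ObjectNeRF(nn.Module):
    """Class-specific object field: position + view direction -> density and
    an intrinsic appearance feature (lighting is applied by the shader)."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.density = MLP(3, 64, 1)
        self.appearance = MLP(3 + 3, 64, feat_dim)
    def forward(self, x, d):
        sigma = torch.relu(self.density(x))
        feat = self.appearance(torch.cat([x, d], dim=-1))
        return sigma, feat

class SceneLightField(nn.Module):
    """Stand-in for the world-level light field: maps a 3D location to a
    local lighting code describing the illumination at that point."""
    def __init__(self, light_dim=16):
        super().__init__()
        self.net = MLP(3, 64, light_dim)
    def forward(self, x):
        return self.net(x)

class LearnedShader(nn.Module):
    """Combines the object's intrinsic feature with the queried lighting code
    to produce a lighting-aware RGB value."""
    def __init__(self, feat_dim=32, light_dim=16):
        super().__init__()
        self.net = MLP(feat_dim + light_dim, 64, 3)
    def forward(self, feat, light):
        return torch.sigmoid(self.net(torch.cat([feat, light], dim=-1)))

# Compose: place an object-NeRF into a new scene and shade it with that
# scene's spatially varying lighting.
obj, lights, shader = ObjectNeRF(), SceneLightField(), LearnedShader()
x_world = torch.rand(1024, 3)                     # sample points along camera rays (world frame)
d = torch.randn(1024, 3)
d = d / d.norm(dim=-1, keepdim=True)              # unit view directions
x_obj = x_world - torch.tensor([2.0, 0.0, 0.0])   # assumed world-to-object translation

sigma, feat = obj(x_obj, d)     # object geometry and intrinsic appearance
light = lights(x_world)         # lighting queried where the object is placed
rgb = shader(feat, light)       # lighting-aware color, ready for volume rendering
```

Because the shader is conditioned on a lighting code queried at the composited location, the same object-NeRF can be dropped into scenes with different illumination and still pick up the local lighting, which is the composition behavior the abstract describes.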
Related papers
- Set-the-Scene: Global-Local Training for Generating Controllable NeRF
Scenes [68.14127205949073]
We propose a novel Global-Local training framework for synthesizing a 3D scene using object proxies.
We show that using proxies allows a wide variety of editing options, such as adjusting the placement of each independent object.
Our results show that Set-the-Scene offers a powerful solution for scene synthesis and manipulation.
arXiv Detail & Related papers (2023-03-23T17:17:29Z) - DisCoScene: Spatially Disentangled Generative Radiance Fields for
Controllable 3D-aware Scene Synthesis [90.32352050266104]
DisCoScene is a 3D-aware generative model for high-quality and controllable scene synthesis.
It disentangles the whole scene into object-centric generative fields by learning on only 2D images with the global-local discrimination.
We demonstrate state-of-the-art performance on many scene datasets, including the challenging outdoor dataset.
arXiv Detail & Related papers (2022-12-22T18:59:59Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Unsupervised Discovery and Composition of Object Light Fields [57.198174741004095]
We propose to represent objects in an object-centric, compositional scene representation as light fields.
We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields.
arXiv Detail & Related papers (2022-05-08T17:50:35Z) - Learning Object-Compositional Neural Radiance Field for Editable Scene
Rendering [42.37007176376849]
We present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering for a cluttered, real-world scene.
To survive the training in heavily cluttered scenes, we propose a scene-guided training strategy to solve the 3D space ambiguity in the occluded regions and learn sharp boundaries for each object.
arXiv Detail & Related papers (2021-09-04T11:37:18Z) - Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
arXiv Detail & Related papers (2020-12-15T18:55:02Z) - GIRAFFE: Representing Scenes as Compositional Generative Neural Feature
Fields [45.21191307444531]
Deep generative models allow for photorealistic image synthesis at high resolutions.
But for many applications, this is not enough: content creation also needs to be controllable.
Our key hypothesis is that incorporating a compositional 3D scene representation into the generative model leads to more controllable image synthesis.
arXiv Detail & Related papers (2020-11-24T14:14:15Z) - Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)