Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering
- URL: http://arxiv.org/abs/2109.01847v1
- Date: Sat, 4 Sep 2021 11:37:18 GMT
- Title: Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering
- Authors: Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao,
Guofeng Zhang, Zhaopeng Cui
- Abstract summary: We present a novel neural scene rendering system that learns an object-compositional neural radiance field and produces realistic renderings of cluttered, real-world scenes.
To make training robust in heavily cluttered scenes, we propose a scene-guided training strategy that resolves the 3D space ambiguity in occluded regions and learns sharp boundaries for each object.
- Score: 42.37007176376849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural rendering techniques have shown promising results for novel view synthesis. However, existing methods usually encode the entire scene as a whole, which is generally not aware of object identity and limits applicability to high-level editing tasks such as moving or adding furniture. In this paper, we present a novel neural scene rendering system that learns an object-compositional neural radiance field and produces realistic, editable renderings of cluttered, real-world scenes. Specifically, we design a novel two-pathway architecture, in which the scene branch encodes the scene geometry and appearance, and the object branch encodes each standalone object conditioned on learnable object activation codes. To make training robust in heavily cluttered scenes, we propose a scene-guided training strategy that resolves the 3D space ambiguity in occluded regions and learns sharp boundaries for each object. Extensive experiments demonstrate that our system not only achieves competitive performance for static-scene novel-view synthesis, but also produces realistic renderings for object-level editing.
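The abstract's two-pathway design maps naturally onto two small MLPs queried at the same sample points. Below is a minimal PyTorch sketch of that idea, offered only as an illustration: the class names (SceneBranch, ObjectBranch), layer sizes, and the way learnable activation codes are concatenated to the input are assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-pathway object-compositional radiance field.
# Illustrative assumptions only, not the paper's implementation: class names,
# layer sizes, and the code-injection scheme are placeholders.
import torch
import torch.nn as nn


class SceneBranch(nn.Module):
    """Encodes the whole scene: positionally encoded point -> (density, color)."""

    def __init__(self, pos_dim=63, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)   # volume density
        self.rgb = nn.Linear(hidden, 3)     # simplified, view-independent color

    def forward(self, x):
        h = self.mlp(x)
        return self.sigma(h), torch.sigmoid(self.rgb(h))


class ObjectBranch(nn.Module):
    """Encodes one standalone object, conditioned on a learnable activation code."""

    def __init__(self, num_objects, code_dim=64, pos_dim=63, hidden=256):
        super().__init__()
        self.codes = nn.Embedding(num_objects, code_dim)  # one code per object
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)
        self.rgb = nn.Linear(hidden, 3)

    def forward(self, x, obj_id):
        code = self.codes(obj_id).expand(x.shape[0], -1)  # broadcast code to all samples
        h = self.mlp(torch.cat([x, code], dim=-1))
        return self.sigma(h), torch.sigmoid(self.rgb(h))


# Usage: query both branches at the same sample points. The scene branch renders
# the full scene; the object branch renders only the activated object, whose
# samples can be rigidly transformed before compositing to move or duplicate it.
scene, objects = SceneBranch(), ObjectBranch(num_objects=10)
pts = torch.randn(1024, 63)                       # positionally encoded samples
sigma_s, rgb_s = scene(pts)
sigma_o, rgb_o = objects(pts, torch.tensor([3]))  # activate object #3
```

The sketch omits the scene-guided training strategy mentioned in the abstract; it only shows why per-object activation codes make each object individually addressable for editing.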
Related papers
- Neural Implicit Field Editing Considering Object-environment Interaction [5.285267388811263]
We propose an Object and Scene environment Interaction aware (OSI-aware) system.
It is a novel two-stream neural rendering system that models the interaction between objects and the scene environment.
It achieves competitive performance for the rendering quality in novel-view synthesis tasks.
arXiv Detail & Related papers (2023-11-01T10:35:47Z) - LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis [65.20672798704128]
We present Lighting-Aware Neural Field (LANe) for compositional synthesis of driving scenes.
We learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs.
We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator.
arXiv Detail & Related papers (2023-04-06T17:59:25Z) - Set-the-Scene: Global-Local Training for Generating Controllable NeRF
Scenes [68.14127205949073]
We propose a novel Global-Local training framework for synthesizing a 3D scene using object proxies.
We show that using proxies allows a wide variety of editing options, such as adjusting the placement of each independent object.
Our results show that Set-the-Scene offers a powerful solution for scene synthesis and manipulation.
arXiv Detail & Related papers (2023-03-23T17:17:29Z) - DisCoScene: Spatially Disentangled Generative Radiance Fields for
Controllable 3D-aware Scene Synthesis [90.32352050266104]
DisCoScene is a 3D-aware generative model for high-quality and controllable scene synthesis.
It disentangles the whole scene into object-centric generative fields by learning on only 2D images with the global-local discrimination.
We demonstrate state-of-the-art performance on many scene datasets, including the challenging outdoor dataset.
arXiv Detail & Related papers (2022-12-22T18:59:59Z) - Unsupervised Discovery and Composition of Object Light Fields [57.198174741004095]
We propose to represent objects in an object-centric, compositional scene representation as light fields.
We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields.
arXiv Detail & Related papers (2022-05-08T17:50:35Z) - Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.