Learning Unified Decompositional and Compositional NeRF for Editable
Novel View Synthesis
- URL: http://arxiv.org/abs/2308.02840v1
- Date: Sat, 5 Aug 2023 10:42:05 GMT
- Title: Learning Unified Decompositional and Compositional NeRF for Editable
Novel View Synthesis
- Authors: Yuxin Wang, Wayne Wu, Dan Xu
- Abstract summary: Implicit neural representations have shown powerful capacity in modeling real-world 3D scenes, offering superior performance in novel view synthesis.
We propose a unified Neural Radiance Field (NeRF) framework to effectively perform joint scene decomposition and composition.
- Score: 37.98068169673019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representations have shown powerful capacity in modeling
real-world 3D scenes, offering superior performance in novel view synthesis. In
this paper, we target a more challenging scenario, i.e., joint scene novel view
synthesis and editing based on implicit neural scene representations.
State-of-the-art methods in this direction typically build separate
networks for the two tasks (i.e., view synthesis and editing), so the
interactions and correlations between them are only weakly modeled, even
though such modeling is critical for learning high-quality scene
representations. To tackle this problem, we propose a unified
Neural Radiance Field (NeRF) framework to effectively perform joint scene
decomposition and composition for modeling real-world scenes. The decomposition
aims at learning disentangled 3D representations of different objects and the
background, allowing for scene editing, while scene composition models an
entire scene representation for novel view synthesis. Specifically, with a
two-stage NeRF framework, we learn a coarse stage for predicting a global
radiance field as guidance for point sampling, and in the second fine-grained
stage, we perform scene decomposition by a novel one-hot object radiance field
regularization module and a pseudo supervision via inpainting to handle
ambiguous background regions occluded by objects. The decomposed object-level
radiance fields are further composed by using activations from the
decomposition module. Extensive quantitative and qualitative results show the
effectiveness of our method for scene decomposition and composition,
outperforming state-of-the-art methods for both novel-view synthesis and
editing tasks.
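To make the composition step concrete, below is a minimal, hedged sketch of how per-object radiance fields might be combined into a scene-level field and nudged toward a one-hot assignment, in the spirit of the abstract's "one-hot object radiance field regularization" and composition "using activations from the decomposition module." The abstract does not give the exact formulation, so the additive-density composition and entropy-based regularizer here are common compositional-NeRF choices, not the authors' verified implementation; all function names are illustrative.

```python
# Illustrative sketch only: composition of K object/background radiance
# fields at shared 3D sample points, plus an assumed one-hot regularizer.
# These formulas are standard compositional-NeRF conventions, not the
# paper's exact method.
import torch
import torch.nn.functional as F

def compose_object_fields(sigmas, colors):
    """sigmas: (K, N) densities from K object/background fields at N points.
    colors: (K, N, 3) RGB predictions from the same K fields.
    Returns scene-level density (N,) and density-weighted color (N, 3)."""
    sigma_scene = sigmas.sum(dim=0)               # densities add across fields
    weights = sigmas / (sigma_scene + 1e-8)       # per-field contribution at each point
    color_scene = (weights.unsqueeze(-1) * colors).sum(dim=0)
    return sigma_scene, color_scene

def one_hot_regularization(sigmas):
    """Encourage each 3D point to be explained by a single field: a plausible
    (assumed) reading of the 'one-hot object radiance field regularization'."""
    probs = F.softmax(sigmas, dim=0)              # soft assignment over the K fields
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=0)
    return entropy.mean()                         # low entropy ~ near one-hot assignment

# Toy usage: 3 fields, 1024 sampled points.
sigmas = torch.rand(3, 1024)
colors = torch.rand(3, 1024, 3)
sigma_s, color_s = compose_object_fields(sigmas, colors)
reg = one_hot_regularization(sigmas)
```

The composed density and color would then be volume-rendered as in a standard NeRF pipeline, with the regularizer added to the photometric loss during the fine-grained stage.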
Related papers
- Unifying Correspondence, Pose and NeRF for Pose-Free Novel View Synthesis from Stereo Pairs [57.492124844326206]
This work delves into the task of pose-free novel view synthesis from stereo pairs, a challenging and pioneering task in 3D vision.
Our innovative framework, unlike any before, seamlessly integrates 2D correspondence matching, camera pose estimation, and NeRF rendering, fostering a synergistic enhancement of these tasks.
arXiv Detail & Related papers (2023-12-12T13:22:44Z) - LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis [65.20672798704128]
We present Lighting-Aware Neural Field (LANe) for compositional synthesis of driving scenes.
We learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs.
We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator.
arXiv Detail & Related papers (2023-04-06T17:59:25Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Control-NeRF: Editable Feature Volumes for Scene Rendering and
Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
arXiv Detail & Related papers (2022-04-22T17:57:00Z) - Learning Object-Compositional Neural Radiance Field for Editable Scene
Rendering [42.37007176376849]
We present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering for a cluttered, real-world scene.
To survive the training in heavily cluttered scenes, we propose a scene-guided training strategy to solve the 3D space ambiguity in the occluded regions and learn sharp boundaries for each object.
arXiv Detail & Related papers (2021-09-04T11:37:18Z) - GIRAFFE: Representing Scenes as Compositional Generative Neural Feature
Fields [45.21191307444531]
Deep generative models allow for photorealistic image synthesis at high resolutions.
But for many applications, this is not enough: content creation also needs to be controllable.
Our key hypothesis is that incorporating a compositional 3D scene representation into the generative model leads to more controllable image synthesis.
arXiv Detail & Related papers (2020-11-24T14:14:15Z) - Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)