Neural Scene Graphs for Dynamic Scenes
- URL: http://arxiv.org/abs/2011.10379v3
- Date: Fri, 5 Mar 2021 16:21:16 GMT
- Title: Neural Scene Graphs for Dynamic Scenes
- Authors: Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, Felix Heide
- Abstract summary: We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
- Score: 57.65413768984925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent implicit neural rendering methods have demonstrated that it is
possible to learn accurate view synthesis for complex scenes by predicting
their volumetric density and color supervised solely by a set of RGB images.
However, existing methods are restricted to learning efficient representations
of static scenes that encode all scene objects into a single neural network,
and lack the ability to represent dynamic scenes and decompositions into
individual scene objects. In this work, we present the first neural rendering
method that decomposes dynamic scenes into scene graphs. We propose a learned
scene graph representation, which encodes object transformation and radiance,
to efficiently render novel arrangements and views of the scene. To this end,
we learn implicitly encoded scenes, combined with a jointly learned latent
representation to describe objects with a single implicit function. We assess
the proposed method on synthetic and real automotive data, validating that our
approach learns dynamic scenes, supervised only by a video of the scene, and
allows for rendering photo-realistic novel views of novel scene compositions
with unseen sets of objects at unseen poses.
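
The abstract describes a graph whose dynamic-object nodes each carry a rigid transformation and a latent code, while a single implicit function, shared across all objects and conditioned on that code, predicts radiance and density in each object's canonical frame. Below is a minimal PyTorch sketch of that idea; the class names, layer sizes, and the plain MLP are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a scene-graph representation as described in the
# abstract: each object node stores a rigid pose and a latent code, and one
# shared implicit function predicts color and density for all objects.
# Names, layer sizes, and the plain MLP are illustrative assumptions only.
import torch
import torch.nn as nn


class SharedObjectField(nn.Module):
    """One implicit function shared by every dynamic object; the per-object
    latent code selects which object's radiance and density it produces."""

    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB (3) + volumetric density (1)
        )

    def forward(self, x_local: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x_local: sample points in the object's canonical frame, shape (N, 3)
        # z: latent code of one object, shape (latent_dim,), broadcast per point
        z = z.expand(x_local.shape[0], -1)
        return self.mlp(torch.cat([x_local, z], dim=-1))


class SceneGraphNode:
    """Leaf node of the scene graph: rigid pose (rotation R, translation t)
    plus the latent code identifying the object."""

    def __init__(self, R: torch.Tensor, t: torch.Tensor, z: torch.Tensor):
        self.R, self.t, self.z = R, t, z

    def world_to_local(self, x_world: torch.Tensor) -> torch.Tensor:
        # Inverse rigid transform: bring world-space samples into the object's
        # canonical frame before querying the shared field (R is orthonormal,
        # so right-multiplying row vectors by R applies R^T).
        return (x_world - self.t) @ self.R


def query_dynamic_objects(x_world, nodes, field):
    """Query every object node at the same world-space samples and return the
    per-object (RGB, density) predictions; a full renderer would composite
    these with a static background model along each camera ray."""
    return [field(node.world_to_local(x_world), node.z) for node in nodes]


# Usage: two objects share one field and differ only in pose and latent code.
field = SharedObjectField()
nodes = [
    SceneGraphNode(torch.eye(3), torch.zeros(3), torch.randn(64)),
    SceneGraphNode(torch.eye(3), torch.tensor([2.0, 0.0, 0.0]), torch.randn(64)),
]
samples = torch.rand(1024, 3)  # world-space sample points along camera rays
per_object_rgba = query_dynamic_objects(samples, nodes, field)
```

Under this reading, editing the graph, e.g. swapping a node's transformation or latent code, corresponds to rendering a novel arrangement of the same objects, which is the capability the abstract highlights.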
Related papers
- Factored Neural Representation for Scene Understanding [39.66967677639173]
We introduce a factored neural scene representation that can be learned directly from a monocular RGB-D video to produce object-level neural representations.
We evaluate our representation against a set of neural approaches on both synthetic and real data, demonstrating that it is efficient, interpretable, and editable.
arXiv Detail & Related papers (2023-04-21T13:40:30Z) - DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Neural Groundplans: Persistent Neural Scene Representations from a
Single Image [90.04272671464238]
We present a method to map 2D image observations of a scene to a persistent 3D scene representation.
We propose conditional neural groundplans as persistent and memory-efficient scene representations.
arXiv Detail & Related papers (2022-07-22T17:41:24Z) - Control-NeRF: Editable Feature Volumes for Scene Rendering and
Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
arXiv Detail & Related papers (2022-04-22T17:57:00Z) - Learning Object-Compositional Neural Radiance Field for Editable Scene
Rendering [42.37007176376849]
We present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering for a cluttered, real-world scene.
To handle training in heavily cluttered scenes, we propose a scene-guided training strategy that resolves the 3D space ambiguity in occluded regions and learns sharp boundaries for each object.
arXiv Detail & Related papers (2021-09-04T11:37:18Z) - STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in
Motion with Neural Rendering [9.600908665766465]
We present STaR, a novel method that performs Self-supervised Tracking and Reconstruction of dynamic scenes with rigid motion from multi-view RGB videos without any manual annotation.
We show that our method can render photorealistic novel views, where novelty is measured on both spatial and temporal axes.
arXiv Detail & Related papers (2020-12-22T23:45:28Z) - Non-Rigid Neural Radiance Fields: Reconstruction and Novel View
Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
arXiv Detail & Related papers (2020-12-22T18:46:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.