Non-Rigid Neural Radiance Fields: Reconstruction and Novel View
Synthesis of a Dynamic Scene From Monocular Video
- URL: http://arxiv.org/abs/2012.12247v3
- Date: Fri, 26 Feb 2021 18:08:43 GMT
- Title: Non-Rigid Neural Radiance Fields: Reconstruction and Novel View
Synthesis of a Dynamic Scene From Monocular Video
- Authors: Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer,
Christoph Lassner, Christian Theobalt
- Abstract summary: Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
- Score: 76.19076002661157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction and
novel view synthesis approach for general non-rigid dynamic scenes. Our
approach takes RGB images of a dynamic scene as input, e.g., from a monocular
video recording, and creates a high-quality space-time geometry and appearance
representation. In particular, we show that even a single handheld
consumer-grade camera is sufficient to synthesize sophisticated renderings of a
dynamic scene from novel virtual camera views, for example a 'bullet-time'
video effect. Our method disentangles the dynamic scene into a canonical volume
and its deformation. Scene deformation is implemented as ray bending, where
straight rays are deformed non-rigidly to represent scene motion. We also
propose a novel rigidity regression network that enables us to better constrain
rigid regions of the scene, which leads to more stable results. The ray bending
and rigidity network are trained without any explicit supervision. In addition
to novel view synthesis, our formulation enables dense correspondence
estimation across views and time, as well as compelling video editing
applications such as motion exaggeration. We demonstrate the effectiveness of
our method using extensive evaluations, including ablation studies and
comparisons to the state of the art. We urge the reader to watch the
supplemental video for qualitative results. Our code will be open sourced.
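
The abstract's core mechanism is ray bending: sample points along a straight camera ray are displaced by a learned, time-conditioned deformation and then used to query a shared canonical radiance field. The sketch below is a minimal NumPy illustration of that idea only, not the authors' implementation; `sample_ray`, `bend`, `canonical_field`, the sinusoidal warp, and the per-frame latent code are all illustrative placeholders.

```python
# Minimal sketch of ray bending (illustrative, not the NR-NeRF code):
# points on a straight ray are offset by a time-conditioned deformation
# before querying a canonical (static) radiance field.
import numpy as np

def sample_ray(origin, direction, near, far, n_samples):
    """Sample 3D points along a straight camera ray."""
    t = np.linspace(near, far, n_samples)                        # (n,)
    points = origin[None, :] + t[:, None] * direction[None, :]   # (n, 3)
    return points, t

def bend(points, latent):
    """Placeholder deformation: per-point offsets conditioned on a per-frame
    latent code. A real model predicts these offsets with an MLP; a fixed
    sinusoidal warp stands in here so the sketch runs."""
    return points + 0.01 * np.sin(points + latent)

def canonical_field(points):
    """Placeholder canonical NeRF returning (rgb, sigma) per point."""
    rgb = 0.5 * (np.tanh(points) + 1.0)                # (n, 3), values in [0, 1]
    sigma = np.exp(-np.linalg.norm(points, axis=-1))   # (n,), non-negative density
    return rgb, sigma

# Straight ray in the observed frame, bent into the canonical volume.
origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])
points, t = sample_ray(origin, direction, near=0.5, far=2.0, n_samples=64)
latent = np.full(3, 0.3)            # per-time-step deformation code (assumed)
rgb, sigma = canonical_field(bend(points, latent))
print(rgb.shape, sigma.shape)       # (64, 3) (64,)
```

In the actual method these pieces are trained jointly from the input video, and the rigidity network mentioned in the abstract further constrains rigid regions; the sketch only shows where the bending sits in the sampling-then-querying pipeline.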
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper introduces a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model to a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- Fast View Synthesis of Casual Videos with Soup-of-Planes [24.35962788109883]
Novel view synthesis from an in-the-wild video is difficult due to challenges like scene dynamics and lack of parallax.
This paper revisits explicit video representations to synthesize high-quality novel views from a monocular video efficiently.
Our method can render high-quality novel views from an in-the-wild video with comparable quality to state-of-the-art methods while being 100x faster in training and enabling real-time rendering.
arXiv Detail & Related papers (2023-12-04T18:55:48Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- Editable Free-viewpoint Video Using a Layered Neural Representation [35.44420164057911]
We propose the first approach for editable free-viewpoint video generation for large-scale dynamic scenes using only a sparse set of 16 cameras.
The core of our approach is a new layered neural representation, where each dynamic entity including the environment itself is formulated into a space-time coherent neural layered radiance representation called ST-NeRF.
Experiments demonstrate the effectiveness of our approach to achieve high-quality, photo-realistic, and editable free-viewpoint video generation for dynamic scenes.
arXiv Detail & Related papers (2021-04-30T06:50:45Z)
- Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes [70.76742458931935]
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
arXiv Detail & Related papers (2020-11-26T01:23:44Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
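
The NeRF entry above notes that volume rendering is naturally differentiable, which is what lets these radiance-field methods be optimized directly from posed images. Below is a minimal NumPy sketch of that per-ray compositing step under assumed inputs (random colors, constant density); it illustrates the standard quadrature, not any particular paper's implementation.

```python
# Minimal sketch of differentiable volume rendering along one ray:
# per-sample densities and colors are alpha-composited into a pixel color.
import numpy as np

def composite(rgb, sigma, t_vals):
    """Alpha-composite per-sample colors into a single pixel color.

    rgb:    (n, 3) per-sample colors in [0, 1]
    sigma:  (n,)   non-negative volume densities
    t_vals: (n,)   increasing sample depths along the ray
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * deltas)                # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                              # contribution per sample
    return (weights[:, None] * rgb).sum(axis=0)          # (3,) pixel color

# Assumed toy inputs; a real model predicts rgb and sigma with a network.
t_vals = np.linspace(2.0, 6.0, 64)
rgb = np.random.default_rng(0).uniform(size=(64, 3))
sigma = np.full(64, 0.5)
print(composite(rgb, sigma, t_vals))
```

Because every operation here is differentiable, gradients from a photometric loss on rendered pixels flow back to whatever model produced `rgb` and `sigma`, which is the training signal shared by the papers listed above.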