EditableNeRF: Editing Topologically Varying Neural Radiance Fields by Key Points
- URL: http://arxiv.org/abs/2212.04247v2
- Date: Tue, 28 Mar 2023 05:14:33 GMT
- Title: EditableNeRF: Editing Topologically Varying Neural Radiance Fields by Key Points
- Authors: Chengwei Zheng, Wenbin Lin, Feng Xu
- Abstract summary: We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes.
Our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points.
Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence.
- Score: 7.4100592531979625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRF) achieve highly photo-realistic novel-view
synthesis, but editing the scenes modeled by NeRF-based methods remains
challenging, especially for dynamic scenes. We propose editable neural
radiance fields that enable end-users to easily edit dynamic scenes and even
support topological changes. Given an image sequence from a single camera as input,
our network is trained fully automatically and models topologically varying
dynamics using our picked-out surface key points. Then end-users can edit the
scene by easily dragging the key points to desired new positions. To achieve
this, we propose a scene analysis method to detect and initialize key points by
considering the dynamics in the scene, and a weighted key point strategy that
models topologically varying dynamics by jointly optimizing the key points and
their weights. Our method supports intuitive multi-dimensional (up to 3D)
editing and can generate novel scenes that are unseen in the input sequence.
Experiments demonstrate that our method achieves high-quality editing on
various dynamic scenes and outperforms the state-of-the-art. Our code and
captured data are available at https://chengwei-zheng.github.io/EditableNeRF/.
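
The abstract above describes driving a dynamic NeRF with editable surface key points whose influence is governed by learned weights. Below is a minimal, hypothetical PyTorch sketch of that idea: all class names, layer sizes, and the position-based weight network are assumptions for illustration only and do not reproduce the paper's actual architecture, key point detection, or joint key-point/weight optimization.

```python
# Minimal sketch (PyTorch) of a key-point-conditioned radiance field.
# Hypothetical names and shapes; not the authors' implementation.
import torch
import torch.nn as nn

class KeyPointConditionedNeRF(nn.Module):
    """Radiance field conditioned on K editable surface key points (K x 3)."""

    def __init__(self, num_key_points: int, hidden: int = 256):
        super().__init__()
        # Per-sample blending weights over key points, predicted from position;
        # this stands in for the paper's jointly optimized key-point weights.
        self.weight_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_key_points), nn.Softmax(dim=-1),
        )
        # The radiance MLP sees the query point plus a weighted key-point code.
        self.radiance_net = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, xyz: torch.Tensor, key_points: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) sample positions; key_points: (K, 3) editable handles.
        w = self.weight_net(xyz)        # (N, K) blending weights
        code = w @ key_points           # (N, 3) weighted key-point code
        return self.radiance_net(torch.cat([xyz, code], dim=-1))


# Editing amounts to moving key points and re-rendering; no retraining needed.
model = KeyPointConditionedNeRF(num_key_points=8)
xyz = torch.rand(1024, 3)                    # ray samples
key_points = torch.rand(8, 3)                # key points recovered during training
edited = key_points.clone()
edited[0] += torch.tensor([0.0, 0.1, 0.0])   # "drag" one key point to a new position
rgb_sigma = model(xyz, edited)               # (1024, 4) radiance at the edited state
```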
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields [43.28899303348589]
We propose a novel framework to edit the local appearance of dynamic NeRFs by manipulating pixels in a single frame of the training video.
By employing our method, users without professional expertise can easily add desired content to the appearance of a dynamic scene.
arXiv Detail & Related papers (2023-07-24T16:08:32Z) - DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z) - NeRF-Editing: Geometry Editing of Neural Radiance Fields [43.256317094173795]
Implicit neural rendering has shown great potential in novel view synthesis of a scene.
We propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene.
Our framework can achieve ideal editing results not only on synthetic data, but also on real scenes captured by users.
arXiv Detail & Related papers (2022-05-10T15:35:52Z) - Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
arXiv Detail & Related papers (2022-04-22T17:57:00Z) - Editing Conditional Radiance Fields [40.685602081728554]
A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene.
In this paper, we explore enabling user editing of a category-level NeRF trained on a shape category.
We introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region.
arXiv Detail & Related papers (2021-05-13T17:59:48Z) - Editable Free-viewpoint Video Using a Layered Neural Representation [35.44420164057911]
We propose the first approach for editable free-viewpoint video generation for large-scale dynamic scenes using only 16 sparse cameras.
The core of our approach is a new layered neural representation, where each dynamic entity including the environment itself is formulated into a space-time coherent neural layered radiance representation called ST-NeRF.
Experiments demonstrate the effectiveness of our approach to achieve high-quality, photo-realistic, and editable free-viewpoint video generation for dynamic scenes.
arXiv Detail & Related papers (2021-04-30T06:50:45Z) - Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
arXiv Detail & Related papers (2020-12-22T18:46:12Z) - D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z) - Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)