Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields
- URL: http://arxiv.org/abs/2307.12909v1
- Date: Mon, 24 Jul 2023 16:08:32 GMT
- Title: Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields
- Authors: Shangzhan Zhang, Sida Peng, Yinji ShenTu, Qing Shuai, Tianrun Chen,
Kaicheng Yu, Hujun Bao, Xiaowei Zhou
- Abstract summary: We propose a novel framework to edit the local appearance of dynamic NeRFs by manipulating pixels in a single frame of training video.
By employing our method, users without professional expertise can easily add desired content to the appearance of a dynamic scene.
- Score: 43.28899303348589
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the editing of neural radiance fields (NeRFs) has gained
considerable attention, but most prior works focus on static scenes while
research on the appearance editing of dynamic scenes is relatively lacking. In
this paper, we propose a novel framework to edit the local appearance of
dynamic NeRFs by manipulating pixels in a single frame of training video.
Specifically, to locally edit the appearance of dynamic NeRFs while preserving
unedited regions, we introduce a local surface representation of the edited
region, which can be inserted into and rendered along with the original NeRF
and warped to arbitrary other frames through a learned invertible motion
representation network. By employing our method, users without professional
expertise can easily add desired content to the appearance of a dynamic scene.
We extensively evaluate our approach on various scenes and show that our
approach achieves spatially and temporally consistent editing results. Notably,
our approach is versatile and applicable to different variants of dynamic NeRF
representations.
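The abstract describes the method only at a high level, so below is a minimal, hypothetical PyTorch sketch (not the authors' implementation; the class names, layer sizes, and the affine-coupling design are assumptions) of the key ingredient it mentions: a time-conditioned invertible network that warps points of the edited local surface from the edited frame to any other frame through a shared canonical space.

```python
# Hypothetical sketch, not the authors' code: warping points of an edited
# local surface to another frame with a time-conditioned invertible
# (affine-coupling) network, so a single edit follows the scene motion.
import torch
import torch.nn as nn

class TimeConditionedCoupling(nn.Module):
    """Affine coupling over 3D points; exactly invertible in x for any fixed t."""
    def __init__(self, hidden=64):
        super().__init__()
        # Scale/shift for the last two coordinates, predicted from the first
        # coordinate and the (normalized) frame time.
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),   # 2 log-scales + 2 shifts
        )

    def forward(self, x, t):
        x1, x2 = x[..., :1], x[..., 1:]
        h = self.net(torch.cat([x1, t], dim=-1))
        log_s, b = h[..., :2], h[..., 2:]
        return torch.cat([x1, x2 * torch.exp(log_s) + b], dim=-1)

    def inverse(self, y, t):
        # x1 is left unchanged by forward(), so the same scale/shift can be
        # recomputed from y1 and t, making the map exactly invertible.
        y1, y2 = y[..., :1], y[..., 1:]
        h = self.net(torch.cat([y1, t], dim=-1))
        log_s, b = h[..., :2], h[..., 2:]
        return torch.cat([y1, (y2 - b) * torch.exp(-log_s)], dim=-1)

def warp(points, t_src, t_dst, coupling):
    """Map 3D points observed at t_src to t_dst through a shared canonical
    space: canonical = f(x, t_src); x_dst = f^{-1}(canonical, t_dst)."""
    ts = torch.full_like(points[..., :1], t_src)
    td = torch.full_like(points[..., :1], t_dst)
    return coupling.inverse(coupling(points, ts), td)

# Points sampled on the edited local surface in the edited frame (t = 0.0)
# can then be carried to any other frame and rendered on top of the base NeRF.
coupling = TimeConditionedCoupling()
edited_pts = torch.rand(1024, 3)
pts_at_t = warp(edited_pts, t_src=0.0, t_dst=0.7, coupling=coupling)
```

In practice, several coupling layers with coordinate permutations would be stacked, and the warped surface's color and density would be composited with the base dynamic NeRF during volume rendering so that unedited regions stay untouched.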
Related papers
- DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
arXiv Detail & Related papers (2024-03-15T08:48:37Z)
- SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural Radiance Fields [7.678022563694719]
SealD-NeRF is an extension of Seal-3D for pixel-level editing in dynamic settings.
It allows for consistent edits across sequences by mapping editing actions to a specific timeframe.
arXiv Detail & Related papers (2024-02-21T03:45:18Z)
- OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF to efficiently train and render dynamic NeRFs on the fly, making it capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up over the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z)
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three-orders-of-magnitude compression rate, together with a companion ReRF player that supports online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z)
- EditableNeRF: Editing Topologically Varying Neural Radiance Fields by Key Points [7.4100592531979625]
We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes.
Our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points.
Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence.
arXiv Detail & Related papers (2022-12-07T06:08:03Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- Editable Free-viewpoint Video Using a Layered Neural Representation [35.44420164057911]
We propose the first approach for editable free-viewpoint video generation for large-scale dynamic scenes using only 16 sparse cameras.
The core of our approach is a new layered neural representation, where each dynamic entity, including the environment itself, is formulated as a space-time coherent neural layered radiance representation called ST-NeRF.
Experiments demonstrate the effectiveness of our approach to achieve high-quality, photo-realistic, and editable free-viewpoint video generation for dynamic scenes.
arXiv Detail & Related papers (2021-04-30T06:50:45Z)
- Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
arXiv Detail & Related papers (2020-12-22T18:46:12Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
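For context on the D-NeRF entry above: D-NeRF models a dynamic scene with a time-conditioned deformation field that maps sampled points into a canonical space, where a standard radiance field is queried. The sketch below is a hypothetical simplification of that idea (positional encoding, view directions, and the real layer sizes are omitted), not the published implementation.

```python
# Hypothetical simplification of a D-NeRF-style dynamic NeRF: a deformation
# network sends each (point, time) into a shared canonical space, where a
# standard NeRF-style MLP predicts color and density.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Maps a point x at time t to its location in the shared canonical space."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # Predict a displacement and add it to x.
        return x + self.mlp(torch.cat([x, t], dim=-1))

class CanonicalRadianceField(nn.Module):
    """Radiance field queried only in the canonical space."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),   # RGB + density
        )

    def forward(self, x_canonical):
        out = self.mlp(x_canonical)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

deform, canonical = DeformationField(), CanonicalRadianceField()
x = torch.rand(1024, 3)               # points sampled along camera rays
t = torch.full((1024, 1), 0.3)        # normalized frame time
rgb, sigma = canonical(deform(x, t))  # per-point color and density
```

Volume rendering then proceeds exactly as in a static NeRF, since all queries land in the shared canonical space.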
This list is automatically generated from the titles and abstracts of the papers on this site.