SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural
Radiance Fields
- URL: http://arxiv.org/abs/2402.13510v1
- Date: Wed, 21 Feb 2024 03:45:18 GMT
- Title: SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural
Radiance Fields
- Authors: Zhentao Huang, Yukun Shi, Neil Bruce, Minglun Gong
- Abstract summary: SealD-NeRF is an extension of Seal-3D for pixel-level editing in dynamic settings.
It allows for consistent edits across sequences by mapping editing actions to a specific timeframe.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of implicit neural representations, especially Neural
Radiance Fields (NeRF), highlights a growing need for editing capabilities in
implicit 3D models, essential for tasks like scene post-processing and 3D
content creation. Despite previous efforts in NeRF editing, challenges remain
due to limitations in editing flexibility and quality. The key issue is
developing a neural representation that supports local edits for real-time
updates. Current NeRF editing methods, offering pixel-level adjustments or
detailed geometry and color modifications, are mostly limited to static scenes.
This paper introduces SealD-NeRF, an extension of Seal-3D for pixel-level
editing in dynamic settings, specifically targeting the D-NeRF network. It
allows for consistent edits across sequences by mapping editing actions to a
specific timeframe, freezing the deformation network responsible for dynamic
scene representation, and using a teacher-student approach to integrate
changes.
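The teacher-student scheme outlined in the abstract can be illustrated with a toy sketch. The example below is an assumption-laden stand-in, not SealD-NeRF's actual architecture: `deform`, `radiance`, and `edit_fn` are hypothetical linear toys replacing the real MLPs, but the structure mirrors the described idea, with the deformation weights frozen while a student copy of the canonical network is distilled toward the teacher's edited output at a fixed timeframe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen deformation "network": warps a point at time t into canonical space.
# Here a fixed random linear map; in D-NeRF this is an MLP, frozen during editing.
W_deform = rng.normal(size=(4, 3)) * 0.1

def deform(x, t):
    return x + np.concatenate([x, [t]]) @ W_deform

# Canonical radiance "network": the teacher keeps the original weights,
# the student starts from a copy and is fine-tuned to absorb the edit.
W_teacher = rng.normal(size=(3, 3)) * 0.1
W_student = W_teacher.copy()

def radiance(W, xc):
    return xc @ W  # toy stand-in for the canonical NeRF MLP's color output

def edit_fn(color):
    # Hypothetical user edit: boost the red channel by 50%.
    return color * np.array([1.5, 1.0, 1.0])

# Distillation at the edited timeframe: the student is supervised by the
# teacher's *edited* output, while W_deform stays frozen throughout.
t_edit = 0.3
lr = 0.5
for _ in range(200):
    x = rng.uniform(-1, 1, size=3)       # sample a point in the edit region
    xc = deform(x, t_edit)               # frozen warp to canonical space
    err = radiance(W_student, xc) - edit_fn(radiance(W_teacher, xc))
    W_student -= lr * np.outer(xc, err)  # MSE gradient step on the student only

# After distillation the student reproduces the edited colors in canonical space.
xc_check = deform(np.zeros(3), t_edit)
```

Because the deformation network is untouched, the same canonical-space edit is carried consistently to every frame that warps into the edited region, which is the consistency property the abstract describes.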
Related papers
- IReNe: Instant Recoloring of Neural Radiance Fields [54.94866137102324]
We introduce IReNe, enabling swift, near real-time color editing in NeRF.
We leverage a pre-trained NeRF model and a single training image with user-applied color edits.
These edits allow the model to generate new scene views that accurately reflect the color changes from the training image.
arXiv Detail & Related papers (2024-05-30T09:30:28Z)
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals [97.91172669905578]
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality while maintaining stronger consistency with the original NeRF.
arXiv Detail & Related papers (2024-04-30T02:04:49Z)
- DATENeRF: Depth-Aware Text-based Editing of NeRFs [49.08848777124736]
We introduce an inpainting approach that leverages the depth information of NeRF scenes to distribute 2D edits across different images.
Our results reveal that this methodology achieves more consistent, lifelike, and detailed edits than existing leading methods for text-driven NeRF scene editing.
arXiv Detail & Related papers (2024-04-06T06:48:16Z)
- Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training [61.984277261016146]
We propose a CustomNeRF model that unifies a text description or a reference image as the editing prompt.
To tackle the first challenge, we propose a Local-Global Iterative Editing (LGIE) training scheme that alternates between foreground region editing and full-image editing.
For the second challenge, we also design a class-guided regularization that exploits class priors within the generation model to alleviate the inconsistency problem.
arXiv Detail & Related papers (2023-12-04T06:25:06Z)
- 4D-Editor: Interactive Object-level Editing in Dynamic Neural Radiance Fields via Semantic Distillation [2.027159474140712]
We propose 4D-Editor, an interactive semantic-driven editing framework, for editing dynamic NeRFs.
We propose an extension to the original dynamic NeRF by incorporating a hybrid semantic feature distillation to maintain spatial-temporal consistency after editing.
In addition, we develop Multi-view Reprojection Inpainting to fill holes caused by incomplete scene capture after editing.
arXiv Detail & Related papers (2023-10-25T02:20:03Z)
- ProteusNeRF: Fast Lightweight NeRF Editing using 3D-Aware Image Context [26.07841568311428]
We present a simple yet effective neural network architecture that is fast and efficient while maintaining a low memory footprint.
Our representation allows straightforward object selection via semantic feature distillation at the training stage.
We propose a local 3D-aware image context to facilitate view-consistent image editing that can then be distilled into fine-tuned NeRFs.
arXiv Detail & Related papers (2023-10-15T21:54:45Z) - ED-NeRF: Efficient Text-Guided Editing of 3D Scene with Latent Space NeRF [60.47731445033151]
We present a novel 3D NeRF editing approach dubbed ED-NeRF.
We embed real-world scenes into the latent space of the latent diffusion model (LDM) through a unique refinement layer.
This approach enables us to obtain a NeRF backbone that is not only faster but also more amenable to editing.
arXiv Detail & Related papers (2023-10-04T10:28:38Z) - Editing 3D Scenes via Text Prompts without Retraining [80.57814031701744]
DN2N is a text-driven editing method that allows for the direct acquisition of a NeRF model with universal editing capabilities.
Our method employs off-the-shelf text-based editing models of 2D images to modify the 3D scene images.
Our method achieves multiple editing types, including but not limited to appearance editing, weather transition, material changing, and style transfer.
arXiv Detail & Related papers (2023-09-10T02:31:50Z) - Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields [14.803266838721864]
Seal-3D allows users to edit NeRF models at the pixel level in a free-form manner, supports a wide range of NeRF-like backbones, and previews editing effects instantly.
A NeRF editing system is built to showcase various editing types.
arXiv Detail & Related papers (2023-07-27T18:08:19Z) - RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models [36.236190350126826]
We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object and a pre-trained diffusion model will guide the NeRF model to generate new 3D objects.
Experiment results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
arXiv Detail & Related papers (2023-06-09T04:49:31Z) - SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing
Field [37.8162035179377]
We present a novel semantic-driven NeRF editing approach, which enables users to edit a neural radiance field with a single image.
To achieve this goal, we propose a prior-guided editing field to encode fine-grained geometric and texture editing in 3D space.
Our method achieves photo-realistic 3D editing using only a single edited image, pushing the bound of semantic-driven editing in 3D real-world scenes.
arXiv Detail & Related papers (2023-03-23T13:58:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.