Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields
- URL: http://arxiv.org/abs/2307.15131v2
- Date: Sun, 27 Aug 2023 01:53:45 GMT
- Title: Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields
- Authors: Xiangyu Wang, Jingsen Zhu, Qi Ye, Yuchi Huo, Yunlong Ran, Zhihua Zhong, Jiming Chen
- Abstract summary: Seal-3D allows users to edit NeRF models at the pixel level and in a free manner, supports a wide range of NeRF-like backbones, and previews the editing effects instantly.
A NeRF editing system is built to showcase various editing types.
- Score: 14.803266838721864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the popularity of implicit neural representations, or neural radiance
fields (NeRF), there is a pressing need for editing methods to interact with
the implicit 3D models for tasks like post-processing reconstructed scenes and
3D content creation. While previous works have explored NeRF editing from
various perspectives, they are restricted in editing flexibility, quality, and
speed, failing to offer direct editing response and instant preview. The key
challenge is to conceive a locally editable neural representation that can
directly reflect the editing instructions and update instantly. To bridge the
gap, we propose a new interactive editing method and system for implicit
representations, called Seal-3D, which allows users to edit NeRF models in a
pixel-level and free manner with a wide range of NeRF-like backbone and preview
the editing effects instantly. To achieve the effects, the challenges are
addressed by our proposed proxy function mapping the editing instructions to
the original space of NeRF models in the teacher model and a two-stage training
strategy for the student model with local pretraining and global finetuning. A
NeRF editing system is built to showcase various editing types. Our system can
achieve compelling editing effects with an interactive speed of about 1 second.
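
The abstract only names the mechanism, so the following minimal PyTorch sketch spells out one plausible reading of the teacher-student scheme: a frozen teacher queried through a proxy mapping supplies supervision in the edited space, and the student is distilled in two stages. The `TinyField` backbone, the `proxy` translation, and the stage schedule are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of a Seal-3D-style teacher-student edit (an
# illustrative reading of the abstract, NOT the authors' implementation).
import torch

class TinyField(torch.nn.Module):
    """Stand-in for a NeRF-like backbone: (point, view dir) -> (density, rgb)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))

    def forward(self, x, d):
        out = self.net(torch.cat([x, d], dim=-1))
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])

def proxy(x):
    """Hypothetical proxy function: map edited-space points back to the source
    scene, here undoing a global user translation of the scene content."""
    return x - torch.tensor([0.1, 0.0, 0.0])

teacher, student = TinyField(), TinyField()
student.load_state_dict(teacher.state_dict())  # student starts from the old scene
opt = torch.optim.Adam(student.parameters(), lr=5e-4)

def distill_step(points, dirs):
    with torch.no_grad():                              # teacher stays frozen
        sigma_t, rgb_t = teacher(proxy(points), dirs)  # labels in edited space
    sigma_s, rgb_s = student(points, dirs)
    loss = ((sigma_s - sigma_t) ** 2).mean() + ((rgb_s - rgb_t) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

box_lo, box_hi = torch.tensor([-0.2] * 3), torch.tensor([0.2] * 3)
for step in range(1000):
    if step < 200:  # stage 1: local pretraining, sample only the edited region
        pts = box_lo + (box_hi - box_lo) * torch.rand(1024, 3)
    else:           # stage 2: global finetuning, sample the whole scene
        pts = torch.rand(1024, 3) * 2 - 1
    dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    distill_step(pts, dirs)
```

The stage split mirrors the abstract's claim: local pretraining lets the student match the edit quickly (enabling the roughly 1-second preview), while global finetuning repairs scene-wide effects.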
Related papers
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals [97.91172669905578]
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality and stronger consistency with the original NeRF.
arXiv Detail & Related papers (2024-04-30T02:04:49Z)
- SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural Radiance Fields [7.678022563694719]
SealD-NeRF is an extension of Seal-3D for pixel-level editing in dynamic settings.
It allows for consistent edits across sequences by mapping editing actions to a specific timeframe.
arXiv Detail & Related papers (2024-02-21T03:45:18Z)
- Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training [61.984277261016146]
We propose CustomNeRF, a model that accepts either a text description or a reference image as the editing prompt.
To tackle the first challenge, we propose a Local-Global Iterative Editing (LGIE) training scheme that alternates between foreground region editing and full-image editing.
For the second challenge, we also design a class-guided regularization that exploits class priors within the generation model to alleviate the inconsistency problem.
arXiv Detail & Related papers (2023-12-04T06:25:06Z)
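
CustomNeRF's Local-Global Iterative Editing is summarized above only at a high level; the toy sketch below shows the alternation pattern under stated assumptions: a single optimizable image stands in for NeRF renders, and squared errors stand in for the actual edit and preservation losses.

```python
# Toy alternation loop in the spirit of Local-Global Iterative Editing (LGIE);
# every tensor and loss here is an illustrative stand-in, not the paper's code.
import torch

image = torch.rand(3, 64, 64, requires_grad=True)  # stand-in for a rendered view
original = image.detach().clone()                  # the unedited scene
target = torch.ones(3, 64, 64)                     # stand-in for the edit target
fg_mask = torch.zeros(1, 64, 64)
fg_mask[:, 16:48, 16:48] = 1.0                     # foreground named by the prompt
opt = torch.optim.Adam([image], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    if step % 2 == 0:
        # Local phase: the edit objective touches only the foreground region.
        loss = (fg_mask * (image - target) ** 2).mean()
    else:
        # Global phase: edit the full image while a background term keeps the
        # untouched region consistent with the original scene.
        loss = ((image - target) ** 2).mean() \
             + ((1 - fg_mask) * (image - original) ** 2).mean()
    loss.backward()
    opt.step()
```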
- ED-NeRF: Efficient Text-Guided Editing of 3D Scene with Latent Space NeRF [60.47731445033151]
We present a novel 3D NeRF editing approach dubbed ED-NeRF.
We embed real-world scenes into the latent space of the latent diffusion model (LDM) through a unique refinement layer.
This approach enables us to obtain a NeRF backbone that is not only faster but also more amenable to editing.
arXiv Detail & Related papers (2023-10-04T10:28:38Z)
- Editing 3D Scenes via Text Prompts without Retraining [80.57814031701744]
DN2N is a text-driven editing method that allows for the direct acquisition of a NeRF model with universal editing capabilities.
Our method employs off-the-shelf text-based editing models of 2D images to modify the 3D scene images.
Our method achieves multiple editing types, including but not limited to appearance editing, weather transition, material changing, and style transfer.
arXiv Detail & Related papers (2023-09-10T02:31:50Z)
- RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models [36.236190350126826]
We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object, and a pre-trained diffusion model guides the NeRF model to generate new 3D objects.
Experiment results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
arXiv Detail & Related papers (2023-06-09T04:49:31Z)
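
The RePaint-NeRF summary above suggests a mask-gated guidance objective; the sketch below illustrates only that structure, with random noise as a placeholder for real prompt-conditioned diffusion guidance, so every ingredient here is an assumption rather than the paper's method.

```python
# Structural sketch of mask-gated guidance: update only the semantically
# selected region, hold the rest to the input (illustrative assumptions only).
import torch

render = torch.rand(3, 64, 64, requires_grad=True)  # stand-in for a NeRF render
original = render.detach().clone()
mask = torch.zeros(1, 64, 64)
mask[:, 20:44, 20:44] = 1.0                          # semantic object selection

def guidance(img):
    """Placeholder for prompt-conditioned diffusion guidance (assumption)."""
    return torch.randn_like(img)

opt = torch.optim.Adam([render], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    # Guidance gradient flows only inside the mask; outside it, a
    # reconstruction term pins the render to the original content.
    loss = (mask * render * guidance(render).detach()).mean() \
         + ((1 - mask) * (render - original) ** 2).mean()
    loss.backward()
    opt.step()
```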
- FaceDNeRF: Semantics-Driven Face Reconstruction, Prompt Editing and Relighting with Diffusion Models [67.17713009917095]
We propose Face Diffusion NeRF (FaceDNeRF), a new generative method to reconstruct high-quality Face NeRFs from single images.
With carefully designed illumination and identity preserving loss, FaceDNeRF offers users unparalleled control over the editing process.
arXiv Detail & Related papers (2023-06-01T15:14:39Z)
- SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field [37.8162035179377]
We present a novel semantic-driven NeRF editing approach, which enables users to edit a neural radiance field with a single image.
To achieve this goal, we propose a prior-guided editing field to encode fine-grained geometric and texture editing in 3D space.
Our method achieves photo-realistic 3D editing using only a single edited image, pushing the bound of semantic-driven editing in 3D real-world scenes.
arXiv Detail & Related papers (2023-03-23T13:58:11Z)
- Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions [109.51624993088687]
We propose a method for editing NeRF scenes with text-instructions.
Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene.
We demonstrate that our proposed method is able to edit large-scale, real-world scenes, and is able to accomplish more realistic, targeted edits than prior work.
arXiv Detail & Related papers (2023-03-22T17:57:57Z)
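
The Instruct-NeRF2NeRF loop described above can be sketched as a dataset-update cycle. All helpers below are trivial stubs standing in for InstructPix2Pix, volume rendering, and a photometric NeRF training step, so only the loop structure reflects the summary.

```python
# Sketch of an iterative dataset update in the style described above; the
# "images" are scalars and the "NeRF" a dict, purely to keep the stub runnable.
import random

def edit_with_ip2p(instruction, original, render):  # stub for InstructPix2Pix
    return 0.5 * original + 0.5 * render            # placeholder "edit"

def render_view(nerf, pose):                        # stub for volume rendering
    return nerf["scene"]

def train_step(nerf, images):                       # stub for a NeRF fit step
    nerf["scene"] = sum(images) / len(images)

nerf = {"scene": 0.0}
originals = [0.2, 0.5, 0.8]                         # stand-in training "images"
poses = list(range(len(originals)))
images = list(originals)

for it in range(100):
    if it % 10 == 0:                                # periodically refresh a view
        i = random.randrange(len(images))
        images[i] = edit_with_ip2p("make it look like autumn",
                                   originals[i], render_view(nerf, poses[i]))
    train_step(nerf, images)                        # fit NeRF to edited dataset
```

Conditioning each 2D edit on both the original capture and the current render, as the summary notes, keeps successive edits grounded in the real scene while the dataset gradually becomes multi-view consistent.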