Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields
- URL: http://arxiv.org/abs/2308.11974v2
- Date: Mon, 11 Sep 2023 13:18:55 GMT
- Title: Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields
- Authors: Hyeonseop Song, Seokhun Choi, Hoseok Do, Chul Lee, Taehyeong Kim
- Abstract summary: We propose a novel NeRF-based model, Blending-NeRF, which consists of two NeRF networks: a pretrained NeRF and an editable NeRF.
We introduce new blending operations that allow Blending-NeRF to properly edit target regions that are localized by text.
Our experiments demonstrate that Blending-NeRF produces naturally and locally edited 3D objects from various text prompts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-driven localized editing of 3D objects is particularly difficult, as locally mixing the original 3D object with the intended new object and style effects without distorting the object's form is not straightforward. To address this issue, we propose a novel NeRF-based model, Blending-NeRF, which consists of two NeRF networks: a pretrained NeRF and an editable NeRF. Additionally, we introduce new blending operations that allow Blending-NeRF to properly edit target regions that are localized by text. By using a pretrained vision-language aligned model, CLIP, we guide Blending-NeRF to add new objects with varying colors and densities, modify textures, and remove parts of the original object. Our extensive experiments demonstrate that Blending-NeRF produces naturally and locally edited 3D objects from various text prompts. Our project page is available at https://seokhunchoi.github.io/Blending-NeRF/
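The abstract names the blending operations but not their form. Purely as a hedged sketch of how two radiance fields can be composited during volume rendering (the additive density and density-weighted color mixing below are illustrative assumptions, not the paper's actual operations), a per-ray blend might look like this in PyTorch:

```python
import torch

def render_blended_ray(sigma_orig, rgb_orig, sigma_edit, rgb_edit, deltas):
    """Composite one ray from two radiance fields (hypothetical scheme).

    sigma_*: (N,) per-sample densities; rgb_*: (N, 3) per-sample colors;
    deltas: (N,) distances between consecutive samples along the ray.
    """
    # Blend densities additively and mix colors by each field's share
    # of the combined density (an assumption, not the paper's formula).
    sigma = sigma_orig + sigma_edit
    w_edit = (sigma_edit / (sigma + 1e-10)).unsqueeze(-1)  # (N, 1)
    rgb = (1.0 - w_edit) * rgb_orig + w_edit * rgb_edit

    # Standard NeRF volume rendering over the blended quantities.
    alpha = 1.0 - torch.exp(-sigma * deltas)                   # (N,)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                          # (N,)
    weights = alpha * trans
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)            # (3,) pixel color
```

Per the abstract, CLIP similarity between rendered views and the text prompt guides the editable network, while the pretrained network supplies the original scene.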
Related papers
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals (arXiv, 2024-04-30)
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality and stronger consistency with the original NeRF.
- GenN2N: Generative NeRF2NeRF Translation (arXiv, 2024-04-03)
GenN2N is a unified NeRF-to-NeRF translation framework for various NeRF translation tasks.
It employs a plug-and-play image-to-image translator to perform editing in the 2D domain and lifts those 2D edits into the 3D NeRF space.
- GO-NeRF: Generating Virtual Objects in Neural Radiance Fields (arXiv, 2024-01-11)
GO-NeRF is capable of utilizing scene context for high-quality and harmonious 3D object generation within an existing NeRF.
Our method employs a compositional rendering formulation that allows the generated 3D objects to be seamlessly composited into the scene.
- Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields (arXiv, 2023-07-27)
Seal-3D allows users to edit NeRF models at the pixel level in a free manner with a wide range of NeRF-like backbones, and to preview the editing effects instantly.
A NeRF editing system is built to showcase various editing types.
- Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields (arXiv, 2023-06-22)
We present Blended-NeRF, a framework for editing a specific region of interest in an existing NeRF scene.
We allow local editing by localizing a 3D ROI box in the input scene, and blend the content synthesized inside the ROI with the existing scene.
We show our framework for several 3D editing applications, including adding new objects to a scene, removing/altering existing objects, and texture conversion.
- RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models (arXiv, 2023-06-09)
We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object, and a pre-trained diffusion model guides the NeRF model to generate new 3D objects.
Experiment results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
- SKED: Sketch-guided Text-based 3D Editing (arXiv, 2023-03-19)
We present SKED, a technique for editing 3D shapes represented by NeRFs.
Our technique utilizes as few as two guiding sketches from different views to alter an existing neural field.
We propose novel loss functions to generate the desired edits while preserving the density and radiance of the base instance.
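As a loose illustration only (these are not SKED's actual loss functions), a preservation term of this kind can penalize the edited field for deviating from the base instance outside the region the guiding sketches select:

```python
import torch

def preservation_loss(sigma_base, rgb_base, sigma_edit, rgb_edit, edit_mask):
    """Keep density and radiance unchanged outside the editable region.

    edit_mask: (N,) in [0, 1]; 1 where edits are allowed (a hypothetical
    mask derived from the guiding sketches), 0 elsewhere.
    """
    keep = 1.0 - edit_mask
    loss_sigma = (keep * (sigma_edit - sigma_base) ** 2).mean()
    loss_rgb = (keep.unsqueeze(-1) * (rgb_edit - rgb_base) ** 2).mean()
    return loss_sigma + loss_rgb
```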
- Removing Objects From Neural Radiance Fields (arXiv, 2022-12-22)
We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence.
Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask.
We show that our method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner.
- NeRF-Loc: Transformer-Based Object Localization Within Neural Radiance Fields (arXiv, 2022-09-24)
We propose a transformer-based framework, NeRF-Loc, to extract 3D bounding boxes of objects in NeRF scenes.
NeRF-Loc takes a pre-trained NeRF model and camera view as input and produces labeled, oriented 3D bounding boxes of objects as output.
This list is automatically generated from the titles and abstracts of the papers on this site.