RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models
- URL: http://arxiv.org/abs/2306.05668v2
- Date: Fri, 8 Dec 2023 02:05:04 GMT
- Title: RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models
- Authors: Xingchen Zhou, Ying He, F. Richard Yu, Jianqiang Li, You Li
- Abstract summary: We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object and a pre-trained diffusion model will guide the NeRF model to generate new 3D objects.
Experimental results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
- Score: 36.236190350126826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of Neural Radiance Fields (NeRF) has promoted the development
of synthesized high-fidelity views of the intricate real world. However, it is
still a very demanding task to repaint the content in NeRF. In this paper, we
propose a novel framework that can take RGB images as input and alter the 3D
content in neural scenes. Our work leverages existing diffusion models to guide
changes in the designated 3D content. Specifically, we semantically select the
target object and a pre-trained diffusion model will guide the NeRF model to
generate new 3D objects, which can improve the editability, diversity, and
application range of NeRF. Experimental results show that our algorithm is
effective for editing 3D objects in NeRF under different text prompts,
including editing appearance, shape, and more. We validate our method on both
real-world datasets and synthetic-world datasets for these editing tasks.
Please visit https://starstesla.github.io/repaintnerf for a better view of our
results.
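The editing loop the abstract describes (select a target region with a semantic mask, then let diffusion guidance repaint only that region while the rest of the scene is preserved) can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: the real system renders views from a NeRF and obtains guidance from a pretrained diffusion model (e.g. via score distillation); here a simple gradient toward a prompt-conditioned target color stands in for that guidance, and `guidance_gradient`, `repaint_step`, and all values are illustrative assumptions.

```python
# Toy sketch of masked, guidance-driven repainting (hypothetical, not the
# paper's code). A semantic mask selects target "pixels"; only those are
# pushed toward the prompt target, the background is left untouched.

def guidance_gradient(pixel, target):
    """Stand-in for diffusion-model guidance (e.g. score distillation):
    a direction moving the rendered pixel toward the prompt target."""
    return [t - p for p, t in zip(pixel, target)]

def repaint_step(rendered, mask, target, lr=0.5):
    """One update: apply guidance only where the semantic mask is set."""
    out = []
    for pixel, selected in zip(rendered, mask):
        if selected:  # target object: follow the guidance signal
            grad = guidance_gradient(pixel, target)
            out.append([p + lr * g for p, g in zip(pixel, grad)])
        else:         # background: kept fixed, preserving the scene
            out.append(list(pixel))
    return out

# A 4-"pixel" render; the mask marks pixels 1 and 2 as the target object.
render = [[0.2, 0.2, 0.2] for _ in range(4)]
mask = [False, True, True, False]
target = [1.0, 0.0, 0.0]  # e.g. the prompt "a red object"

for _ in range(10):
    render = repaint_step(render, mask, target)
```

After a few iterations the masked pixels converge to the target color while the unmasked pixels are unchanged, which is the behavior the masked-guidance design is meant to guarantee.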
Related papers
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- GenN2N: Generative NeRF2NeRF Translation [53.20986183316661]
GenN2N is a unified NeRF-to-NeRF translation framework for various NeRF translation tasks.
It employs a plug-and-play image-to-image translator to perform edits in the 2D domain and lifts these 2D edits into the 3D NeRF space.
arXiv Detail & Related papers (2024-04-03T14:56:06Z)
- SIGNeRF: Scene Integrated Generation for Neural Radiance Fields [1.1037667460077816]
We propose a novel approach for fast and controllable NeRF scene editing and scene-integrated object generation.
A new generative update strategy ensures 3D consistency across the edited images, without requiring iterative optimization.
By exploiting the depth conditioning mechanism of the image diffusion model, we gain fine control over the spatial location of the edit.
arXiv Detail & Related papers (2024-01-03T09:46:43Z) - Inpaint4DNeRF: Promptable Spatio-Temporal NeRF Inpainting with
Generative Diffusion Models [59.96172701917538]
Current Neural Radiance Fields (NeRF) can generate photorealistic novel views.
This paper proposes Inpaint4DNeRF to capitalize on state-of-the-art stable diffusion models.
arXiv Detail & Related papers (2023-12-30T11:26:55Z)
- Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields [16.375242125946965]
We propose a novel NeRF-based model, Blending-NeRF, which consists of two NeRF networks: pretrained NeRF and editable NeRF.
We introduce new blending operations that allow Blending-NeRF to properly edit target regions which are localized by text.
Our experiments demonstrate that Blending-NeRF produces naturally and locally edited 3D objects from various text prompts.
arXiv Detail & Related papers (2023-08-23T07:46:44Z)
- Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields [14.803266838721864]
Seal-3D allows users to edit NeRF models freely at the pixel level with a wide range of NeRF-like backbones, and to preview the editing effects instantly.
A NeRF editing system is built to showcase various editing types.
arXiv Detail & Related papers (2023-07-27T18:08:19Z)
- FaceDNeRF: Semantics-Driven Face Reconstruction, Prompt Editing and Relighting with Diffusion Models [67.17713009917095]
We propose Face Diffusion NeRF (FaceDNeRF), a new generative method to reconstruct high-quality Face NeRFs from single images.
With carefully designed illumination and identity preserving loss, FaceDNeRF offers users unparalleled control over the editing process.
arXiv Detail & Related papers (2023-06-01T15:14:39Z)
- Removing Objects From Neural Radiance Fields [60.067117643543824]
We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence.
Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask.
We show that our method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner.
arXiv Detail & Related papers (2022-12-22T18:51:06Z)
- NeRF-Loc: Transformer-Based Object Localization Within Neural Radiance Fields [62.89785701659139]
We propose a transformer-based framework, NeRF-Loc, to extract 3D bounding boxes of objects in NeRF scenes.
NeRF-Loc takes a pre-trained NeRF model and camera view as input and produces labeled, oriented 3D bounding boxes of objects as output.
arXiv Detail & Related papers (2022-09-24T18:34:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.