Removing Objects From Neural Radiance Fields
- URL: http://arxiv.org/abs/2212.11966v1
- Date: Thu, 22 Dec 2022 18:51:06 GMT
- Title: Removing Objects From Neural Radiance Fields
- Authors: Silvan Weder, Guillermo Garcia-Hernando, Aron Monszpart, Marc
Pollefeys, Gabriel Brostow, Michael Firman, Sara Vicente
- Abstract summary: We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence.
Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask.
We show that our method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner.
- Score: 60.067117643543824
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Neural Radiance Fields (NeRFs) are emerging as a ubiquitous scene
representation that allows for novel view synthesis. Increasingly, NeRFs will
be shareable with other people. Before sharing a NeRF, though, it might be
desirable to remove personal information or unsightly objects. Such removal is
not easily achieved with the current NeRF editing frameworks. We propose a
framework to remove objects from a NeRF representation created from an RGB-D
sequence. Our NeRF inpainting method leverages recent work in 2D image
inpainting and is guided by a user-provided mask. Our algorithm is underpinned
by a confidence-based view selection procedure. It chooses which of the
individual 2D inpainted images to use in the creation of the NeRF, so that the
resulting inpainted NeRF is 3D consistent. We show that our method for NeRF
editing is effective for synthesizing plausible inpaintings in a multi-view
coherent manner. We validate our approach using a new and still-challenging
dataset for the task of NeRF inpainting.
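The confidence-based view selection described in the abstract can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's actual procedure: each 2D-inpainted view is given a confidence score, and only the highest-scoring views are kept to supervise the NeRF, so mutually inconsistent inpaintings are excluded. The function name, scores, and keep fraction below are all hypothetical.

```python
# Hedged sketch of confidence-based view selection (assumed mechanism;
# the paper's actual scoring and training details differ).
# Each 2D-inpainted view gets a confidence score; only the top-ranked
# views are retained as supervision for the inpainted NeRF.

def select_views(confidences, keep_fraction=0.5):
    """Return indices of the top fraction of inpainted views by confidence."""
    ranked = sorted(range(len(confidences)),
                    key=lambda i: confidences[i], reverse=True)
    n_keep = max(1, int(len(confidences) * keep_fraction))
    return sorted(ranked[:n_keep])

# Example: six inpainted views with per-view confidence scores.
scores = [0.91, 0.40, 0.75, 0.22, 0.88, 0.60]
selected = select_views(scores, keep_fraction=0.5)
print(selected)  # → [0, 2, 4]
```

In this toy setup, the three most confident views (indices 0, 2, and 4) would be used when fitting the NeRF, while the low-confidence inpaintings are discarded.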
Related papers
- GenN2N: Generative NeRF2NeRF Translation [53.20986183316661]
GenN2N is a unified NeRF-to-NeRF translation framework for various NeRF translation tasks.
It employs a plug-and-play image-to-image translator to perform edits in the 2D domain and lifts those 2D edits into the 3D NeRF space.
arXiv Detail & Related papers (2024-04-03T14:56:06Z)
- Fast Sparse View Guided NeRF Update for Object Reconfigurations [42.947608325321475]
We develop the first method for updating NeRFs in response to physical changes in a scene.
Our method takes only sparse new images as extra input and updates the pre-trained NeRF in around 1 to 2 minutes.
Our core idea is the use of a second helper NeRF to learn the local geometry and appearance changes.
arXiv Detail & Related papers (2024-03-16T22:00:16Z)
- Inpaint4DNeRF: Promptable Spatio-Temporal NeRF Inpainting with Generative Diffusion Models [59.96172701917538]
Current Neural Radiance Fields (NeRF) can generate photorealistic novel views.
This paper proposes Inpaint4DNeRF to capitalize on state-of-the-art stable diffusion models.
arXiv Detail & Related papers (2023-12-30T11:26:55Z)
- Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields [16.375242125946965]
We propose a novel NeRF-based model, Blending-NeRF, which consists of two NeRF networks: pretrained NeRF and editable NeRF.
We introduce new blending operations that allow Blending-NeRF to properly edit target regions that are localized by text.
Our experiments demonstrate that Blending-NeRF produces naturally and locally edited 3D objects from various text prompts.
arXiv Detail & Related papers (2023-08-23T07:46:44Z)
- DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric annotated scenes without human intervention.
Our proposed method beats the SOTA point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z)
- RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models [36.236190350126826]
We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object and a pre-trained diffusion model will guide the NeRF model to generate new 3D objects.
Experiment results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
arXiv Detail & Related papers (2023-06-09T04:49:31Z)
- PaletteNeRF: Palette-based Color Editing for NeRFs [16.49512200561126]
We propose a simple but effective extension of vanilla NeRF, named PaletteNeRF, to enable efficient color editing on NeRF-represented scenes.
Our method achieves efficient, view-consistent, and artifact-free color editing on a wide range of NeRF-represented scenes.
arXiv Detail & Related papers (2022-12-25T08:01:03Z)
- NeRF-Loc: Transformer-Based Object Localization Within Neural Radiance Fields [62.89785701659139]
We propose a transformer-based framework, NeRF-Loc, to extract 3D bounding boxes of objects in NeRF scenes.
NeRF-Loc takes a pre-trained NeRF model and camera view as input and produces labeled, oriented 3D bounding boxes of objects as output.
arXiv Detail & Related papers (2022-09-24T18:34:22Z)
- NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors [14.911856302172996]
We introduce the first framework that enables users to remove unwanted objects or undesired regions in a 3D scene represented by a pre-trained NeRF.
We show it obtains visually plausible and structurally consistent results across multiple views, in less time and with less manual user effort.
arXiv Detail & Related papers (2022-06-10T06:54:22Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.