InNeRF360: Text-Guided 3D-Consistent Object Inpainting on 360-degree Neural Radiance Fields
- URL: http://arxiv.org/abs/2305.15094v2
- Date: Tue, 26 Mar 2024 13:57:26 GMT
- Title: InNeRF360: Text-Guided 3D-Consistent Object Inpainting on 360-degree Neural Radiance Fields
- Authors: Dongqing Wang, Tong Zhang, Alaa Abboud, Sabine Süsstrunk
- Abstract summary: InNeRF360 is an automatic system that removes text-specified objects from 360-degree Neural Radiance Fields (NeRF).
We apply depth-space warping to enforce consistency across multiview text-encoded segmentations.
We refine the inpainted NeRF model using perceptual priors and 3D diffusion-based geometric priors to ensure visual plausibility.
- Score: 26.20280877227749
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose InNeRF360, an automatic system that accurately removes text-specified objects from 360-degree Neural Radiance Fields (NeRF). The challenge is to effectively remove objects while inpainting perceptually consistent content for the missing regions, which is particularly demanding for existing NeRF models due to their implicit volumetric representation. Moreover, unbounded scenes are more prone to floater artifacts in the inpainted region than frontal-facing scenes, as the change of object appearance and background across views is more sensitive to inaccurate segmentations and inconsistent inpainting. With a trained NeRF and a text description, our method efficiently removes specified objects and inpaints visually consistent content without artifacts. We apply depth-space warping to enforce consistency across multiview text-encoded segmentations, and then refine the inpainted NeRF model using perceptual priors and 3D diffusion-based geometric priors to ensure visual plausibility. Through extensive experiments in segmentation and inpainting on 360-degree and frontal-facing NeRFs, we show that our approach is effective and enhances NeRF's editability. Project page: https://ivrl.github.io/InNeRF360.
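The depth-space warping mentioned above reprojects a segmentation mask from one view into another using per-pixel depth, so that masks stay consistent across views. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: the function name `warp_mask`, the pinhole intrinsics `K`, and the relative pose `T_src2tgt` are assumptions made for this example.

```python
import numpy as np

def warp_mask(mask, depth, K, T_src2tgt, out_shape):
    """Illustrative depth-based mask warping (not the paper's code).

    mask      : (H, W) bool, segmentation in the source view
    depth     : (H, W) float, per-pixel depth in the source view
    K         : (3, 3) pinhole intrinsics, shared by both views
    T_src2tgt : (4, 4) rigid transform from source to target camera frame
    out_shape : (H', W') of the target view
    """
    ys, xs = np.nonzero(mask)
    z = depth[ys, xs]
    # Back-project masked pixels to 3D points in the source camera frame.
    pts = np.stack([(xs - K[0, 2]) * z / K[0, 0],
                    (ys - K[1, 2]) * z / K[1, 1],
                    z], axis=0)                      # (3, N)
    # Rigid transform into the target camera frame.
    pts = T_src2tgt[:3, :3] @ pts + T_src2tgt[:3, 3:4]
    # Project into the target image plane.
    u = K[0, 0] * pts[0] / pts[2] + K[0, 2]
    v = K[1, 1] * pts[1] / pts[2] + K[1, 2]
    warped = np.zeros(out_shape, dtype=bool)
    valid = (pts[2] > 0) & (u >= 0) & (u < out_shape[1]) \
            & (v >= 0) & (v < out_shape[0])
    warped[np.round(v[valid]).astype(int),
           np.round(u[valid]).astype(int)] = True
    return warped
```

With an identity transform the warped mask simply reproduces the source mask; with a real relative pose, the reprojected mask can be compared against (or used to correct) the target view's own text-encoded segmentation.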
Related papers
- Sp2360: Sparse-view 360 Scene Reconstruction using Cascaded 2D Diffusion Priors [51.36238367193988]
We tackle sparse-view reconstruction of a 360 3D scene using priors from latent diffusion models (LDM).
We present SparseSplat360, a method that employs a cascade of in-painting and artifact removal models to fill in missing details and clean novel views.
Our method generates entire 360 scenes from as few as 9 input views, with a high degree of foreground and background detail.
arXiv Detail & Related papers (2024-05-26T11:01:39Z) - Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z) - RePaint-NeRF: NeRF Editing via Semantic Masks and Diffusion Models [36.236190350126826]
We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object and a pre-trained diffusion model will guide the NeRF model to generate new 3D objects.
Experiment results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
arXiv Detail & Related papers (2023-06-09T04:49:31Z) - OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields [53.32527220134249]
The emergence of Neural Radiance Fields (NeRF) for novel view synthesis has increased interest in 3D scene editing.
Current methods face challenges such as time-consuming object labeling, limited capability to remove specific targets, and compromised rendering quality after removal.
This paper proposes a novel object-removing pipeline, named OR-NeRF, that can remove objects from 3D scenes with user-given points or text prompts on a single view.
arXiv Detail & Related papers (2023-05-17T18:18:05Z) - Reference-guided Controllable Inpainting of Neural Radiance Fields [26.296017756560467]
We focus on inpainting regions in a view-consistent and controllable manner.
We use monocular depth estimators to back-project the inpainted view to the correct 3D positions.
For non-reference disoccluded regions, we devise a method based on image inpainters to guide both the geometry and appearance.
arXiv Detail & Related papers (2023-04-19T14:11:21Z) - NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects [63.04781030984006]
Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of rendering photo-realistic novel view images from a monocular RGB video of a dynamic scene.
We address the limitation by reformulating the neural radiance field function to be conditioned on surface position and orientation in the observation space.
We evaluate our model based on the novel view synthesis quality with a self-collected dataset of different moving specular objects in realistic environments.
arXiv Detail & Related papers (2023-03-25T11:03:53Z) - Pre-NeRF 360: Enriching Unbounded Appearances for Neural Radiance Fields [8.634008996263649]
We propose a new framework to boost the performance of NeRF-based architectures.
Our solution overcomes several obstacles that plagued earlier versions of NeRF.
We introduce an updated version of the Nutrition5k dataset, known as the N5k360 dataset.
arXiv Detail & Related papers (2023-03-21T23:29:38Z) - Removing Objects From Neural Radiance Fields [60.067117643543824]
We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence.
Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask.
We show that our method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner.
arXiv Detail & Related papers (2022-12-22T18:51:06Z) - SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields [26.296017756560467]
In 3D, solutions must be consistent across multiple views and geometrically valid.
We propose a novel 3D inpainting method that addresses these challenges.
We first demonstrate the superiority of our approach on multiview segmentation, comparing it to NeRF-based methods and 2D segmentation approaches.
arXiv Detail & Related papers (2022-11-22T13:14:50Z) - NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors [14.911856302172996]
We introduce the first framework that enables users to remove unwanted objects or undesired regions in a 3D scene represented by a pre-trained NeRF.
We show that it obtains visually plausible and structurally consistent results across multiple views, in less time and with less manual user effort.
arXiv Detail & Related papers (2022-06-10T06:54:22Z) - NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.