NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors
- URL: http://arxiv.org/abs/2206.04901v1
- Date: Fri, 10 Jun 2022 06:54:22 GMT
- Title: NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors
- Authors: Hao-Kang Liu, I-Chao Shen, Bing-Yu Chen
- Abstract summary: We introduce the first framework that enables users to remove unwanted objects or retouch undesired regions in a 3D scene represented by a pre-trained NeRF.
We show it obtains visually plausible and structurally consistent results across multiple views in less time and with less manual user effort.
- Score: 14.911856302172996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though Neural Radiance Field (NeRF) demonstrates compelling novel view
synthesis results, it is still unintuitive to edit a pre-trained NeRF because
the neural network's parameters and the scene geometry/appearance are often not
explicitly associated. In this paper, we introduce the first framework that
enables users to remove unwanted objects or retouch undesired regions in a 3D
scene represented by a pre-trained NeRF without any category-specific data and
training. The user first draws a free-form mask to specify a region containing
unwanted objects over a rendered view from the pre-trained NeRF. Our framework
first transfers the user-provided mask to other rendered views and estimates
guiding color and depth images within these transferred masked regions. Next,
we formulate an optimization problem that jointly inpaints the image content in
all masked regions across multiple views by updating the NeRF model's
parameters. We demonstrate our framework on diverse scenes and show that it
obtains visually plausible and structurally consistent results across multiple
views in less time and with less manual user effort.
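The pipeline above reduces to a joint optimization: render each view from the pre-trained NeRF, compare the masked pixels against the guiding color and depth images, and back-propagate into the NeRF parameters. A minimal PyTorch sketch of that loop; the renderer `render_rgbd`, the view layout, and all hyperparameters are illustrative assumptions, not the authors' released code:

```python
# Minimal sketch of the joint multi-view inpainting optimization described
# above. All names (render_rgbd, view keys, hyperparameters) are assumptions.
import torch

def inpaint_nerf(nerf_params, views, render_rgbd,
                 num_iters=2000, lambda_depth=0.1, lr=5e-4):
    """Update a pre-trained NeRF so its renders match the guiding RGB-D
    images inside each view's transferred mask.

    views: list of dicts with keys
      'pose'        - camera pose of the rendered view
      'mask'        - (H, W) bool tensor, True inside the user-specified region
      'guide_rgb'   - (H, W, 3) guiding (2D-inpainted) color image
      'guide_depth' - (H, W) guiding depth estimated for the masked region
    """
    optimizer = torch.optim.Adam(nerf_params, lr=lr)
    for _ in range(num_iters):
        optimizer.zero_grad()
        loss = 0.0
        for v in views:
            # Differentiable render of color and depth from the NeRF.
            rgb, depth = render_rgbd(nerf_params, v['pose'])
            m = v['mask']
            # Color guidance: match the inpainted image inside the mask.
            loss = loss + ((rgb - v['guide_rgb'])[m] ** 2).mean()
            # Depth guidance: keep geometry consistent with the depth prior.
            loss = loss + lambda_depth * ((depth - v['guide_depth'])[m] ** 2).mean()
        loss.backward()
        optimizer.step()
    return nerf_params
```

A full implementation would presumably also include a reconstruction term on the unmasked pixels so the rest of the scene stays fixed while only the masked regions are rewritten.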
Related papers
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals [97.91172669905578]
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an inpainting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality and also maintain stronger consistency with the original NeRF.
arXiv Detail & Related papers (2024-04-30T02:04:49Z)
- Inpaint4DNeRF: Promptable Spatio-Temporal NeRF Inpainting with Generative Diffusion Models [59.96172701917538]
Current Neural Radiance Fields (NeRF) can generate photorealistic novel views.
This paper proposes Inpaint4DNeRF to capitalize on state-of-the-art stable diffusion models.
arXiv Detail & Related papers (2023-12-30T11:26:55Z)
- InNeRF360: Text-Guided 3D-Consistent Object Inpainting on 360-degree Neural Radiance Fields [26.20280877227749]
InNeRF360 is an automatic system that removes text-specified objects from 360-degree Neural Radiance Fields (NeRF).
We apply depth-space warping to enforce consistency across multiview text-encoded segmentations.
We refine the inpainted NeRF model using perceptual priors and 3D diffusion-based geometric priors to ensure visual plausibility.
arXiv Detail & Related papers (2023-05-24T12:22:23Z)
- NeRFuser: Large-Scale Scene Representation by NeRF Fusion [35.749208740102546]
A practical benefit of implicit visual representations like Neural Radiance Fields (NeRFs) is their memory efficiency.
We propose NeRFuser, a novel architecture for NeRF registration and blending that assumes only access to pre-generated NeRFs.
arXiv Detail & Related papers (2023-05-22T17:59:05Z)
- Reference-guided Controllable Inpainting of Neural Radiance Fields [26.296017756560467]
We focus on inpainting regions in a view-consistent and controllable manner.
We use monocular depth estimators to back-project the inpainted view to the correct 3D positions (a minimal back-projection sketch appears after this list).
For non-reference disoccluded regions, we devise a method based on image inpainters to guide both the geometry and appearance.
arXiv Detail & Related papers (2023-04-19T14:11:21Z)
- Removing Objects From Neural Radiance Fields [60.067117643543824]
We propose a framework to remove objects from a NeRF representation created from an RGB-D sequence.
Our NeRF inpainting method leverages recent work in 2D image inpainting and is guided by a user-provided mask.
We show that our method for NeRF editing is effective for synthesizing plausible inpaintings in a multi-view coherent manner.
arXiv Detail & Related papers (2022-12-22T18:51:06Z)
- SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
arXiv Detail & Related papers (2022-04-02T19:32:42Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
We show that NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its quality degrades significantly when only a few are given.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
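On the depth-guided back-projection mentioned in the reference-guided inpainting entry above: lifting an inpainted view into 3D with a monocular depth map is a standard unprojection through the pinhole camera model. A minimal NumPy sketch, where the intrinsics `K` (3x3) and the camera-to-world pose `cam2world` (4x4) are assumed inputs rather than quantities from the paper:

```python
# Minimal sketch of back-projecting a view into 3D using a depth map.
# K (intrinsics) and cam2world (pose) are assumed, illustrative inputs.
import numpy as np

def backproject(depth, K, cam2world):
    """Lift every pixel of an (H, W) depth map to world-space 3D points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))        # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)      # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                       # camera-space directions
    pts_cam = rays * depth[..., None]                     # scale by depth
    pts_hom = np.concatenate([pts_cam, np.ones((H, W, 1))], axis=-1)
    return (pts_hom @ cam2world.T)[..., :3]               # world-space points
```

Restricting the same unprojection to the masked pixels gives the 3D positions at which the inpainted colors can be placed, which is the consistency mechanism that entry describes.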