Reference-guided Controllable Inpainting of Neural Radiance Fields
- URL: http://arxiv.org/abs/2304.09677v2
- Date: Thu, 20 Apr 2023 15:19:12 GMT
- Title: Reference-guided Controllable Inpainting of Neural Radiance Fields
- Authors: Ashkan Mirzaei, Tristan Aumentado-Armstrong, Marcus A. Brubaker,
Jonathan Kelly, Alex Levinshtein, Konstantinos G. Derpanis, Igor
Gilitschenski
- Abstract summary: We focus on inpainting regions in a view-consistent and controllable manner.
We use monocular depth estimators to back-project the inpainted view to the correct 3D positions.
For non-reference disoccluded regions, we devise a method based on image inpainters to guide both the geometry and appearance.
- Score: 26.296017756560467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The popularity of Neural Radiance Fields (NeRFs) for view synthesis has led
to a desire for NeRF editing tools. Here, we focus on inpainting regions in a
view-consistent and controllable manner. In addition to the typical NeRF inputs
and masks delineating the unwanted region in each view, we require only a
single inpainted view of the scene, i.e., a reference view. We use monocular
depth estimators to back-project the inpainted view to the correct 3D
positions. Then, via a novel rendering technique, a bilateral solver can
construct view-dependent effects in non-reference views, making the inpainted
region appear consistent from any view. For non-reference disoccluded regions,
which cannot be supervised by the single reference view, we devise a method
based on image inpainters to guide both the geometry and appearance. Our
approach shows superior performance to NeRF inpainting baselines, with the
additional advantage that a user can control the generated scene via a single
inpainted image. Project page: https://ashmrz.github.io/reference-guided-3d
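The back-projection step described in the abstract (lifting an inpainted reference view to 3D via monocular depth) can be sketched with a standard pinhole unprojection. This is an illustrative sketch, not the paper's actual pipeline; the intrinsics `K` and camera-to-world pose `c2w` are assumed inputs:

```python
import numpy as np

def backproject_to_world(depth, K, c2w):
    """Back-project a depth map into world-space 3D points (pinhole model).

    depth: (H, W) per-pixel depth, e.g. from a monocular depth estimator
    K:     (3, 3) camera intrinsics of the reference view
    c2w:   (4, 4) camera-to-world pose of the reference view
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixel coords
    cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)        # camera-space points
    cam_h = np.concatenate([cam, np.ones((cam.shape[0], 1))], axis=1)
    world = (c2w @ cam_h.T).T[:, :3]                                 # transform to world space
    return world.reshape(H, W, 3)
```

The resulting point cloud gives 3D positions for the inpainted pixels, which can then supervise the NeRF in the masked region.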
Related papers
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals [97.91172669905578]
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality and also maintain stronger consistency with the original NeRF.
arXiv Detail & Related papers (2024-04-30T02:04:49Z)
- RefFusion: Reference Adapted Diffusion Models for 3D Scene Inpainting [63.567363455092234]
RefFusion is a novel 3D inpainting method based on a multi-scale personalization of an image inpainting diffusion model to the given reference view.
Our framework achieves state-of-the-art results for object removal while maintaining high controllability.
arXiv Detail & Related papers (2024-04-16T17:50:02Z)
- PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization applications.
arXiv Detail & Related papers (2023-10-25T17:59:01Z)
- InNeRF360: Text-Guided 3D-Consistent Object Inpainting on 360-degree Neural Radiance Fields [26.20280877227749]
InNeRF360 is an automatic system that removes text-specified objects from 360-degree Neural Radiance Fields (NeRF).
We apply depth-space warping to enforce consistency across multiview text-encoded segmentations.
We refine the inpainted NeRF model using perceptual priors and 3D diffusion-based geometric priors to ensure visual plausibility.
arXiv Detail & Related papers (2023-05-24T12:22:23Z)
- PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
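The palette decomposition described above (each 3D point's color as a linear combination of palette bases) can be illustrated with a minimal sketch. The function name and shapes are illustrative assumptions, not PaletteNeRF's actual API:

```python
import numpy as np

def palette_color(weights, palette):
    """Compose per-point colors as convex combinations of palette bases.

    weights: (N, P) per-point blending weights over P palette entries
    palette: (P, 3) RGB palette colors
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # normalize to a convex combination
    return weights @ palette  # (N, 3) composed colors
```

Because the weights are fixed per point, editing a single palette entry recolors every point that blends it, which is what makes palette-based appearance editing controllable.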
arXiv Detail & Related papers (2022-12-21T00:20:01Z)
- SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields [26.296017756560467]
In 3D, solutions must be consistent across multiple views and geometrically valid.
We propose a novel 3D inpainting method that addresses these challenges.
We first demonstrate the superiority of our approach on multiview segmentation, comparing to NeRF-based methods and 2D segmentation approaches.
arXiv Detail & Related papers (2022-11-22T13:14:50Z)
- S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint [22.42916940712357]
Our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.
Our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images.
It supports applications like novel-view synthesis and relighting.
arXiv Detail & Related papers (2022-10-17T11:01:52Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
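The volume rendering step mentioned above follows the standard NeRF quadrature: densities and colors sampled along a ray are alpha-composited into a pixel color. A minimal sketch of that compositing (generic NeRF rendering, not this paper's specific architecture):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (standard NeRF quadrature).

    sigmas: (S,)   volume densities at the sampled points
    colors: (S, 3) RGB colors predicted at the sampled points
    deltas: (S,)   distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance T_i
    weights = trans * alphas                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                   # expected ray color
```

An MLP conditioned on the learned 3D representation would supply `sigmas` and `colors` at the sampled points; the compositing itself is differentiable, which is what allows training from images alone.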
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors [14.911856302172996]
We introduce the first framework that enables users to remove unwanted objects or undesired regions in a 3D scene represented by a pre-trained NeRF.
We show it obtains visually plausible and structurally consistent results across multiple views, in less time and with less manual user effort.
arXiv Detail & Related papers (2022-06-10T06:54:22Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.