ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields
- URL: http://arxiv.org/abs/2401.17895v1
- Date: Wed, 31 Jan 2024 15:02:26 GMT
- Title: ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields
- Authors: Edward Bartrum and Thu Nguyen-Phuoc and Chris Xie and Zhengqin Li and
Numair Khan and Armen Avetisyan and Douglas Lanman and Lei Xiao
- Abstract summary: We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene editing method.
Given multi-view images of a scene, a text prompt describing the object to replace, and a text prompt describing the new object, our Erase-and-Replace approach can effectively swap objects in the scene with newly generated content.
- Score: 13.425973473159406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene
editing method that enables the replacement of specific objects within a scene.
Given multi-view images of a scene, a text prompt describing the object to
replace, and a text prompt describing the new object, our Erase-and-Replace
approach can effectively swap objects in the scene with newly generated content
while maintaining 3D consistency across multiple viewpoints. We demonstrate the
versatility of ReplaceAnything3D by applying it to various realistic 3D scenes,
showcasing results of modified foreground objects that are well-integrated with
the rest of the scene without affecting its overall integrity.
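To make the interface concrete, here is a minimal structural sketch, in Python, of an Erase-and-Replace pipeline with the stages the abstract describes. Everything in it is hypothetical: the class name, the stage callables (segment, inpaint3d, generate3d, composite), and the toy stand-ins are illustrations, not RAM3D's released code.

```python
# Hypothetical structural sketch of an Erase-and-Replace pipeline
# in the spirit of RAM3D; not the authors' API.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class EraseAndReplace:
    segment: Callable[[List[Any], str], List[Any]]    # text-grounded masks per view
    inpaint3d: Callable[[List[Any], List[Any]], Any]  # multi-view-consistent background
    generate3d: Callable[[str, List[Any]], Any]       # text-to-3D object for the hole
    composite: Callable[[Any, Any], Any]              # blend object into background

    def __call__(self, views: List[Any], erase_prompt: str, replace_prompt: str) -> Any:
        masks = self.segment(views, erase_prompt)             # 1. find the object to erase
        background = self.inpaint3d(views, masks)             # 2. erase: fill the scene behind it
        new_object = self.generate3d(replace_prompt, masks)   # 3. replace: synthesize new content
        return self.composite(background, new_object)         # 4. compose a consistent scene


# Toy usage with stand-in stages; real stages would be NeRF/diffusion models.
pipeline = EraseAndReplace(
    segment=lambda views, prompt: [f"mask({prompt})" for _ in views],
    inpaint3d=lambda views, masks: "background_field",
    generate3d=lambda prompt, masks: f"object_field({prompt})",
    composite=lambda bg, obj: (bg, obj),
)
print(pipeline(["view0", "view1"], "old vase", "a bonsai tree"))
```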
Related papers
- SIn-NeRF2NeRF: Editing 3D Scenes with Instructions through Segmentation and Inpainting [0.3119157043062931]
Instruct-NeRF2NeRF (in2n) is a promising method that enables editing of 3D scenes represented as Neural Radiance Fields (NeRFs) using text prompts.
In this project, we enable geometrical changes of objects within the 3D scene by selectively editing the object after separating it from the scene.
arXiv Detail & Related papers (2024-08-23T02:20:42Z) - Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts [76.73043724587679]
We propose a dialogue-based 3D scene editing approach, termed CE3D.
A Hash-Atlas represents the 3D scene views, transferring the editing of 3D scenes onto 2D atlas images.
Results demonstrate that CE3D effectively integrates multiple visual models to achieve diverse editing visual effects.
arXiv Detail & Related papers (2024-07-09T13:24:42Z) - Disentangled 3D Scene Generation with Layout Learning [109.03233745767062]
- Disentangled 3D Scene Generation with Layout Learning [109.03233745767062]
We introduce a method to generate 3D scenes that are disentangled into their component objects.
Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene.
We show that despite its simplicity, our approach successfully generates 3D scenes that decompose into individual objects.
arXiv Detail & Related papers (2024-02-26T18:54:15Z) - InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes [86.26588382747184]
- InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes [86.26588382747184]
We introduce InseRF, a novel method for generative object insertion in the NeRF reconstructions of 3D scenes.
Based on a user-provided textual description and a 2D bounding box in a reference viewpoint, InseRF generates new objects in 3D scenes.
arXiv Detail & Related papers (2024-01-10T18:59:53Z) - SceneWiz3D: Towards Text-guided 3D Scene Composition [134.71933134180782]
- SceneWiz3D: Towards Text-guided 3D Scene Composition [134.71933134180782]
Existing approaches either leverage large text-to-image models to optimize a 3D representation or train 3D generators on object-centric datasets.
We introduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes from text.
arXiv Detail & Related papers (2023-12-13T18:59:30Z) - TeMO: Towards Text-Driven 3D Stylization for Multi-Object Meshes [67.5351491691866]
We present a novel framework, dubbed TeMO, to parse multi-object 3D scenes and edit their styles.
Our method synthesizes high-quality stylized content and outperforms existing methods across a wide range of multi-object 3D meshes.
arXiv Detail & Related papers (2023-12-07T12:10:05Z) - Blended-NeRF: Zero-Shot Object Generation and Blending in Existing
Neural Radiance Fields [26.85599376826124]
We present Blended-NeRF, a framework for editing a specific region of interest in an existing NeRF scene.
We enable local editing by localizing a 3D ROI box in the input scene and blending the content synthesized inside the ROI with the existing scene.
We show our framework for several 3D editing applications, including adding new objects to a scene, removing/altering existing objects, and texture conversion.
arXiv Detail & Related papers (2023-06-22T09:34:55Z) - ONeRF: Unsupervised 3D Object Segmentation from Multiple Views [59.445957699136564]
- ONeRF: Unsupervised 3D Object Segmentation from Multiple Views [59.445957699136564]
ONeRF is a method that automatically segments and reconstructs object instances in 3D from multi-view RGB images without any additional manual annotations.
The segmented 3D objects are represented using separate Neural Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view rendering.
arXiv Detail & Related papers (2022-11-22T06:19:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.