InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes
- URL: http://arxiv.org/abs/2401.05335v1
- Date: Wed, 10 Jan 2024 18:59:53 GMT
- Title: InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes
- Authors: Mohamad Shahbazi, Liesbeth Claessens, Michael Niemeyer, Edo Collins,
Alessio Tonioni, Luc Van Gool, Federico Tombari
- Abstract summary: We introduce InseRF, a novel method for generative object insertion in the NeRF reconstructions of 3D scenes.
Based on a user-provided textual description and a 2D bounding box in a reference viewpoint, InseRF generates new objects in 3D scenes.
- Score: 86.26588382747184
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce InseRF, a novel method for generative object insertion in the
NeRF reconstructions of 3D scenes. Based on a user-provided textual description
and a 2D bounding box in a reference viewpoint, InseRF generates new objects in
3D scenes. Recently, methods for 3D scene editing have been profoundly
transformed, owing to the use of strong priors of text-to-image diffusion
models in 3D generative modeling. Existing methods are mostly effective in
editing 3D scenes via style and appearance changes or removing existing
objects. Generating new objects, however, remains a challenge for such methods,
which we address in this study. Specifically, we propose grounding the 3D
object insertion to a 2D object insertion in a reference view of the scene. The
2D edit is then lifted to 3D using a single-view object reconstruction method.
The reconstructed object is then inserted into the scene, guided by the priors
of monocular depth estimation methods. We evaluate our method on various 3D
scenes and provide an in-depth analysis of the proposed components. Our
experiments with generative insertion of objects in several 3D scenes indicate
the effectiveness of our method compared to the existing methods. InseRF is
capable of controllable and 3D-consistent object insertion without requiring
explicit 3D information as input. Please visit our project page at
https://mohamad-shahbazi.github.io/inserf.
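The abstract describes a three-stage pipeline: a text-guided 2D object insertion in a reference view, a single-view 3D reconstruction of the inserted object, and a depth-guided placement of that object in the scene. The following Python sketch illustrates how these stages could be orchestrated. It is a minimal sketch based only on the abstract; the helper callables (render_view, inpaint_2d, reconstruct_object_3d, estimate_depth, fuse_into_scene) are hypothetical placeholders for the models mentioned there, not the authors' released implementation.

```python
# Minimal sketch of the insertion pipeline described in the abstract.
# All helper callables are hypothetical stand-ins, not InseRF code.

from dataclasses import dataclass
from typing import Any, Callable

import numpy as np


@dataclass
class BBox2D:
    """2D bounding box in the reference view, in pixel coordinates."""
    x_min: int
    y_min: int
    x_max: int
    y_max: int


def insert_object(
    scene_nerf: Any,
    reference_camera: Any,
    prompt: str,
    bbox: BBox2D,
    render_view: Callable,            # renders scene_nerf from a given camera
    inpaint_2d: Callable,             # text-guided 2D diffusion inpainting
    reconstruct_object_3d: Callable,  # single-view object reconstruction
    estimate_depth: Callable,         # monocular depth estimator
    fuse_into_scene: Callable,        # composites the object NeRF into the scene NeRF
) -> Any:
    """Insert a text-described object into a NeRF scene (high-level sketch)."""
    # 1) Ground the 3D insertion in 2D: render the reference view and insert
    #    the object there with a text-guided diffusion model, restricted to
    #    the user-provided bounding box.
    reference_image = render_view(scene_nerf, reference_camera)
    edited_image = inpaint_2d(reference_image, prompt, bbox)

    # 2) Lift the 2D edit to 3D: reconstruct the new object from the single
    #    edited view using a single-view object reconstruction method.
    object_nerf = reconstruct_object_3d(edited_image, bbox)

    # 3) Place the object: use a monocular depth prior on the edited view to
    #    estimate how far along the reference camera ray the object sits.
    depth_map = estimate_depth(edited_image)
    crop = depth_map[bbox.y_min:bbox.y_max, bbox.x_min:bbox.x_max]
    object_depth = float(np.median(crop))

    # 4) Composite the reconstructed object NeRF into the scene NeRF at the
    #    pose implied by the bounding box and the estimated depth.
    return fuse_into_scene(scene_nerf, object_nerf, reference_camera,
                           bbox, object_depth)
```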
Related papers
- Lay-A-Scene: Personalized 3D Object Arrangement Using Text-to-Image Priors [43.19801974707858]
Current 3D generation techniques struggle with generating scenes with multiple high-resolution objects.
Here we introduce Lay-A-Scene, which solves the task of Open-set 3D Object Arrangement.
We show how to infer the 3D poses and arrangement of objects from a 2D generated image by finding a consistent projection of objects onto the 2D scene.
arXiv Detail & Related papers (2024-06-02T09:48:19Z) - Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting [75.7154104065613]
We introduce a novel depth completion model, trained via teacher distillation and self-training to learn the 3D fusion process.
We also introduce a new benchmarking scheme for scene generation methods that is based on ground truth geometry.
arXiv Detail & Related papers (2024-04-30T17:59:40Z) - Disentangled 3D Scene Generation with Layout Learning [109.03233745767062]
We introduce a method to generate 3D scenes that are disentangled into their component objects.
Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene.
We show that despite its simplicity, our approach successfully generates 3D scenes decomposed into individual objects.
arXiv Detail & Related papers (2024-02-26T18:54:15Z) - SceneWiz3D: Towards Text-guided 3D Scene Composition [134.71933134180782]
Existing approaches either leverage large text-to-image models to optimize a 3D representation or train 3D generators on object-centric datasets.
We introduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes from text.
arXiv Detail & Related papers (2023-12-13T18:59:30Z) - Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion [18.67196713834323]
This paper presents a novel approach to inpainting 3D regions of a scene, given masked multi-view images, by distilling a 2D diffusion model into a learned 3D scene representation (e.g., a NeRF).
We show that this 2D diffusion model can still serve as a generative prior in a 3D multi-view reconstruction problem where we optimize a NeRF using a combination of score distillation sampling and NeRF reconstruction losses.
Because our method can generate content to fill any 3D masked region, we additionally demonstrate 3D object completion, 3D object replacement, and 3D scene completion.
arXiv Detail & Related papers (2023-12-06T19:30:04Z) - Object2Scene: Putting Objects in Context for Open-Vocabulary 3D Detection [24.871590175483096]
Point cloud-based open-vocabulary 3D object detection aims to detect 3D categories that do not have ground-truth annotations in the training set.
Previous approaches leverage large-scale richly-annotated image datasets as a bridge between 3D and category semantics.
We propose Object2Scene, the first approach that leverages large-scale large-vocabulary 3D object datasets to augment existing 3D scene datasets for open-vocabulary 3D object detection.
arXiv Detail & Related papers (2023-09-18T03:31:53Z) - Learning 3D Scene Priors with 2D Supervision [37.79852635415233]
We propose a new method to learn 3D scene priors of layout and shape without requiring any 3D ground truth.
Our method represents a 3D scene as a latent vector, from which we can progressively decode to a sequence of objects characterized by their class categories.
Experiments on 3D-FRONT and ScanNet show that our method outperforms the state of the art in single-view reconstruction.
arXiv Detail & Related papers (2022-11-25T15:03:32Z) - ONeRF: Unsupervised 3D Object Segmentation from Multiple Views [59.445957699136564]
ONeRF is a method that automatically segments and reconstructs object instances in 3D from multi-view RGB images without any additional manual annotations.
The segmented 3D objects are represented using separate Neural Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view rendering.
arXiv Detail & Related papers (2022-11-22T06:19:37Z) - Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes [50.317223783035075]
We present a new framework to reconstruct holistic 3D indoor scenes from single-view images.
We propose an instance-aligned implicit function (InstPIFu) for detailed object reconstruction.
Our code and model will be made publicly available.
arXiv Detail & Related papers (2022-07-18T14:54:57Z) - Style Agnostic 3D Reconstruction via Adversarial Style Transfer [23.304453155586312]
Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision.
We propose an approach that enables differentiable rendering-based learning of 3D objects from images with backgrounds.
arXiv Detail & Related papers (2021-10-20T21:24:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.