Indoor Scene Generation from a Collection of Semantic-Segmented Depth Images
- URL: http://arxiv.org/abs/2108.09022v1
- Date: Fri, 20 Aug 2021 06:22:49 GMT
- Title: Indoor Scene Generation from a Collection of Semantic-Segmented Depth Images
- Authors: Ming-Jia Yang and Yu-Xiao Guo and Bin Zhou and Xin Tong
- Abstract summary: We present a method for creating 3D indoor scenes with a generative model learned from semantic-segmented depth images.
Given a room with a specified size, our method automatically generates 3D objects in the room from a randomly sampled latent code.
Compared to existing methods, our method not only efficiently reduces the workload of modeling and acquiring 3D scenes for training, but also produces better object shapes.
- Score: 18.24156991697044
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method for creating 3D indoor scenes with a generative model
learned from a collection of semantic-segmented depth images captured from
different unknown scenes. Given a room with a specified size, our method
automatically generates 3D objects in the room from a randomly sampled latent
code. Different from existing methods that represent an indoor scene with the
type, location, and other properties of objects in the room and learn the scene
layout from a collection of complete 3D indoor scenes, our method models each
indoor scene as a 3D semantic scene volume and learns a volumetric generative
adversarial network (GAN) from a collection of 2.5D partial observations of 3D
scenes. To this end, we apply a differentiable projection layer to project the
generated 3D semantic scene volumes into semantic-segmented depth images and
design a new multiple-view discriminator for learning the complete 3D scene
volume from 2.5D semantic-segmented depth images. Compared to existing methods,
our method not only efficiently reduces the workload of modeling and acquiring
3D scenes for training, but also produces better object shapes and their
detailed layouts in the scene. We evaluate our method with different indoor
scene datasets and demonstrate the advantages of our method. We also extend our
method for generating 3D indoor scenes from semantic-segmented depth images
inferred from RGB images of real scenes.
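To make the projection step concrete, below is a minimal sketch (not the authors' code) of a differentiable projection layer that renders a 3D semantic scene volume into a semantic-segmented depth image. It assumes an orthographic view along one volume axis and soft alpha-compositing aggregation, as in standard volume rendering; the function and parameter names are hypothetical.

```python
# Hedged sketch of a differentiable semantic-volume-to-depth projection.
# Assumptions: per-voxel semantic logits, class 0 = "empty", orthographic
# rays along the volume's depth axis, alpha-compositing aggregation.
import torch

def project_semantic_volume(logits: torch.Tensor, empty_class: int = 0):
    """logits: (B, C, D, H, W) per-voxel semantic logits over C classes,
    with D voxels along the viewing ray. Returns:
      depth: (B, 1, H, W) expected depth per pixel (in voxel units)
      sem:   (B, C, H, W) expected per-pixel class distribution
    """
    B, C, D, H, W = logits.shape
    probs = logits.softmax(dim=1)            # per-voxel class distribution
    alpha = 1.0 - probs[:, empty_class]      # occupancy prob: (B, D, H, W)

    # Transmittance T_i = prod_{j<i} (1 - alpha_j): how much light reaches
    # voxel i before being absorbed by earlier voxels on the ray.
    trans = torch.cumprod(1.0 - alpha + 1e-6, dim=1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=1)
    weights = alpha * trans                  # soft ray-termination weights

    # Expected depth and semantics under the termination distribution.
    z = torch.arange(D, device=logits.device, dtype=logits.dtype)
    depth = (weights * z.view(1, D, 1, 1)).sum(dim=1, keepdim=True)
    sem = (weights.unsqueeze(1) * probs).sum(dim=2)
    return depth, sem

# Example: project a random 12-class volume; gradients flow back to it.
vol = torch.randn(2, 12, 64, 64, 64, requires_grad=True)
depth, sem = project_semantic_volume(vol)
```

In a setup like the paper describes, depth/semantic renderings from several sampled viewpoints could be fed to 2D discriminators, so that adversarial gradients flow back through the projection into the generator of the 3D semantic volume.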
Related papers
- SceneCraft: Layout-Guided 3D Scene Generation [29.713491313796084]
SceneCraft is a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences.
Our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.
arXiv Detail & Related papers (2024-10-11T17:59:58Z)
- Sketch2Scene: Automatic Generation of Interactive 3D Game Scenes from User's Casual Sketches [50.51643519253066]
3D Content Generation is at the heart of many computer graphics applications, including video gaming, film-making, virtual and augmented reality, etc.
This paper proposes a novel deep-learning based approach for automatically generating interactive and playable 3D game scenes.
arXiv Detail & Related papers (2024-08-08T16:27:37Z)
- Disentangled 3D Scene Generation with Layout Learning [109.03233745767062]
We introduce a method to generate 3D scenes that are disentangled into their component objects.
Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene.
We show that despite its simplicity, our approach successfully generates 3D scenes disentangled into individual objects.
arXiv Detail & Related papers (2024-02-26T18:54:15Z)
- InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes [86.26588382747184]
We introduce InseRF, a novel method for generative object insertion in the NeRF reconstructions of 3D scenes.
Based on a user-provided textual description and a 2D bounding box in a reference viewpoint, InseRF generates new objects in 3D scenes.
arXiv Detail & Related papers (2024-01-10T18:59:53Z)
- SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation with Fine-Grained Geometry [92.24144643757963]
3D indoor scenes are widely used in computer graphics, with applications ranging from interior design to gaming to virtual and augmented reality.
High-quality 3D indoor scenes are in high demand, yet designing them manually requires expertise and is time-consuming.
We propose SCENEHGN, a hierarchical graph network for 3D indoor scenes that takes into account the full hierarchy from the room level to the object level, then finally to the object part level.
For the first time, our method is able to directly generate plausible 3D room content, including furniture objects with fine-grained geometry.
arXiv Detail & Related papers (2023-02-16T15:31:59Z)
- SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections [49.802462165826554]
We present SceneDreamer, an unconditional generative model for unbounded 3D scenes.
Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations.
arXiv Detail & Related papers (2023-02-02T18:59:16Z)
- Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes [50.317223783035075]
We present a new framework to reconstruct holistic 3D indoor scenes from single-view images.
We propose an instance-aligned implicit function (InstPIFu) for detailed object reconstruction.
Our code and model will be made publicly available.
arXiv Detail & Related papers (2022-07-18T14:54:57Z)
- Indoor Scene Recognition in 3D [26.974703983293093]
Existing approaches attempt to classify the scene based on 2D images or 2.5D range images.
Here, we study scene recognition from 3D point cloud (or voxel) data.
We show that it greatly outperforms methods based on 2D bird's-eye views.
arXiv Detail & Related papers (2020-02-28T15:47:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.