RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent
Geometry and Texture
- URL: http://arxiv.org/abs/2305.11337v1
- Date: Thu, 18 May 2023 22:57:57 GMT
- Title: RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent
Geometry and Texture
- Authors: Liangchen Song, Liangliang Cao, Hongyu Xu, Kai Kang, Feng Tang,
Junsong Yuan, Yang Zhao
- Abstract summary: We propose "RoomDreamer", which leverages natural language prompts to synthesize a new room in a different style.
Our work addresses the challenge of synthesizing both geometry and texture aligned to the input scene structure and prompt simultaneously.
To validate the proposed method, real indoor scenes scanned with smartphones are used for extensive experiments.
- Score: 80.0643976406225
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Techniques for capturing 3D indoor scenes are widely used, but the
meshes they produce leave much to be desired. In this paper, we propose
"RoomDreamer", which leverages natural language prompts to synthesize a new
room in a different style. Unlike existing image synthesis methods, our work addresses
the challenge of synthesizing both geometry and texture aligned to the input
scene structure and prompt simultaneously. The key insight is that a scene
should be treated as a whole, taking into account both scene texture and
geometry. The proposed framework consists of two main components: Geometry
Guided Diffusion and Mesh Optimization. Geometry Guided Diffusion enforces a
consistent scene style by applying the 2D diffusion prior to the entire scene
at once. Mesh Optimization refines geometry and texture jointly and removes
artifacts from the scanned scene. To validate
the proposed method, real indoor scenes scanned with smartphones are used for
extensive experiments, through which the effectiveness of our method is
demonstrated.
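
The abstract describes the framework only at a high level. As a rough illustration, here is a minimal structural sketch in PyTorch of how such a two-stage loop might be organized; every name here (TexturedMesh, render_views, geometry_guided_prior_loss) is a hypothetical placeholder, and the dummy MSE target merely stands in for the actual 2D diffusion prior.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TexturedMesh(nn.Module):
        """Toy stand-in for a scanned room mesh with learnable geometry and texture."""
        def __init__(self, n_verts=1024, tex_res=64):
            super().__init__()
            self.verts = nn.Parameter(torch.randn(n_verts, 3) * 0.01)
            self.texture = nn.Parameter(torch.rand(3, tex_res, tex_res))

    def render_views(mesh, n_views=4):
        # Placeholder differentiable renderer; a real system would rasterize
        # the mesh from camera poses that together cover the whole room.
        views = mesh.texture.unsqueeze(0).expand(n_views, -1, -1, -1)
        return views + mesh.verts.mean() * 0  # keep geometry in the autograd graph

    def geometry_guided_prior_loss(views, prompt):
        # Stand-in for applying the 2D prior to all views of the scene at once
        # (e.g. a score-distillation-style loss) so the style stays consistent
        # across the entire room.
        return F.mse_loss(views, torch.full_like(views, 0.5))  # dummy target

    mesh = TexturedMesh()
    opt = torch.optim.Adam(mesh.parameters(), lr=1e-2)
    for step in range(100):  # Mesh Optimization: geometry and texture refined jointly
        loss = geometry_guided_prior_loss(render_views(mesh), "a cozy wooden cabin interior")
        opt.zero_grad()
        loss.backward()
        opt.step()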
Related papers
- SceneCraft: Layout-Guided 3D Scene Generation [29.713491313796084]
SceneCraft is a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences.
Our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.
arXiv Detail & Related papers (2024-10-11T17:59:58Z)
- RoomTex: Texturing Compositional Indoor Scenes via Iterative Inpainting [34.827355403635536]
We propose a 3D scene texturing framework referred to as RoomTex.
RoomTex generates high-fidelity and style-consistent textures for untextured compositional scene meshes.
We propose to maintain alignment between the inpainted RGB images and edge maps rendered from the scene geometry.
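
As a rough illustration of such an iterative, edge-conditioned texturing loop (not RoomTex's actual code), the sketch below uses placeholder render, edge_map, and inpaint_with_edges functions; the last stands in for an edge-conditioned inpainting model.

    import numpy as np

    def render(scene, pose, res=64):
        """Placeholder: returns an RGB view and a mask of still-untextured pixels."""
        return np.zeros((res, res, 3)), np.ones((res, res), dtype=bool)

    def edge_map(scene, pose, res=64):
        """Placeholder: edges rendered from the mesh, used to keep RGB aligned."""
        return np.zeros((res, res))

    def inpaint_with_edges(rgb, mask, edges, prompt):
        """Placeholder: fill masked pixels, conditioned on edges and the prompt."""
        out = rgb.copy()
        out[mask] = 0.5
        return out

    def texture_scene(scene, poses, prompt):
        views = []
        for pose in poses:  # visit viewpoints one by one, inpainting unseen regions
            rgb, mask = render(scene, pose)
            views.append(inpaint_with_edges(rgb, mask, edge_map(scene, pose), prompt))
            # A real system would back-project each view into the texture atlas
            # here, so that later viewpoints see the already-painted regions.
        return views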
arXiv Detail & Related papers (2024-06-04T16:27:09Z)
- Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting [75.7154104065613]
We introduce a novel depth completion model, trained via teacher distillation and self-training to learn the 3D fusion process.
We also introduce a new benchmarking scheme for scene generation methods that is based on ground truth geometry.
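
The summary names the training recipe but not its formulation. The snippet below is a generic sketch of distillation from a frozen depth teacher (the self-training stage is omitted); the networks, shapes, and loss terms are assumptions rather than the paper's design.

    import torch
    import torch.nn as nn

    # Frozen monocular "teacher" and a depth-completion "student"; both are
    # toy networks standing in for the real models.
    teacher = nn.Conv2d(3, 1, 3, padding=1)
    for p in teacher.parameters():
        p.requires_grad_(False)
    student = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 1, 3, padding=1))

    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    rgb = torch.rand(2, 3, 64, 64)           # dummy image batch
    sparse_depth = torch.rand(2, 1, 64, 64)  # partial depth with holes
    holes = torch.rand(2, 1, 64, 64) > 0.5   # where depth is missing

    for step in range(10):
        pred = student(torch.cat([rgb, sparse_depth], dim=1))
        with torch.no_grad():
            pseudo = teacher(rgb)            # teacher supervises the holes
        # Distill the teacher in unobserved regions; stay faithful to the
        # measured depth elsewhere.
        loss = (pred - pseudo)[holes].pow(2).mean() \
             + (pred - sparse_depth)[~holes].pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()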
arXiv Detail & Related papers (2024-04-30T17:59:40Z)
- SceneWiz3D: Towards Text-guided 3D Scene Composition [134.71933134180782]
Existing approaches either leverage large text-to-image models to optimize a 3D representation or train 3D generators on object-centric datasets.
We introduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes from text.
arXiv Detail & Related papers (2023-12-13T18:59:30Z)
- DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation [31.353409149640605]
In this paper, we propose a novel framework to generate 3D textures for immersive VR experiences.
To this end, we separate texture cues in confident regions and learn to propagate textures into the remaining parts of real-world environments.
arXiv Detail & Related papers (2023-10-19T19:29:23Z)
- SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation with Fine-Grained Geometry [92.24144643757963]
3D indoor scenes are widely used in computer graphics, with applications ranging from interior design to gaming to virtual and augmented reality.
High-quality 3D indoor scenes are in great demand, yet designing them manually requires expertise and is time-consuming.
We propose SCENEHGN, a hierarchical graph network for 3D indoor scenes that takes into account the full hierarchy from the room level to the object level, then finally to the object part level.
For the first time, our method is able to directly generate plausible 3D room content, including furniture objects with fine-grained geometry, and their layout.
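
To make the room-to-object-to-part hierarchy concrete, here is a hypothetical sketch of such a data structure; the class and field names are assumptions, not SceneHGN's actual representation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PartNode:
        name: str                              # e.g. "chair_leg"
        # Fine-grained part geometry would live here (mesh, latent code, ...).

    @dataclass
    class ObjectNode:
        category: str                          # e.g. "chair"
        parts: List[PartNode] = field(default_factory=list)

    @dataclass
    class RoomNode:
        room_type: str                         # e.g. "bedroom"
        objects: List[ObjectNode] = field(default_factory=list)

    # A hierarchical generator would decode top-down: sample a RoomNode, then
    # its ObjectNodes together with their layout, then each object's PartNodes.
    room = RoomNode("bedroom", [ObjectNode("bed", [PartNode("headboard")])])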
arXiv Detail & Related papers (2023-02-16T15:31:59Z)
- SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections [49.802462165826554]
We present SceneDreamer, an unconditional generative model for unbounded 3D scenes.
Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations.
arXiv Detail & Related papers (2023-02-02T18:59:16Z)
- CompNVS: Novel View Synthesis with Scene Completion [83.19663671794596]
We propose a generative pipeline operating on a sparse grid-based neural scene representation to complete unobserved scene parts.
We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing area.
Photorealistic image sequences can finally be obtained via consistency-relevant differentiable rendering.
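
The data flow of that three-stage pipeline, with dummy stand-ins for each stage (module names and shapes are assumptions, not CompNVS's API), might look like:

    import torch
    import torch.nn as nn

    grid = torch.randn(1, 8, 16, 16, 16)           # sparse grid of encoded image features
    observed = torch.rand(1, 1, 16, 16, 16) > 0.5  # cells actually seen by the input views

    geometry_net = nn.Conv3d(8, 8, 3, padding=1)   # stand-in geometry completion network
    texture_net = nn.Conv3d(8, 8, 3, padding=1)    # stand-in texture inpainting network

    # Complete unobserved geometry first, then inpaint appearance features there.
    completed = torch.where(observed, grid, geometry_net(grid))
    textured = torch.where(observed, completed, texture_net(completed))

    def render(features, pose):
        """Placeholder for the consistency-relevant differentiable renderer."""
        return features.mean(dim=2)                # collapse one axis into a 2D "image"

    image = render(textured, pose=None)            # (1, 8, 16, 16) feature image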
arXiv Detail & Related papers (2022-07-23T09:03:13Z)