SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation with Fine-Grained Geometry
- URL: http://arxiv.org/abs/2302.10237v1
- Date: Thu, 16 Feb 2023 15:31:59 GMT
- Title: SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation with Fine-Grained Geometry
- Authors: Lin Gao, Jia-Mu Sun, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Jie Yang
- Abstract summary: 3D indoor scenes are widely used in computer graphics, with applications ranging from interior design to gaming to virtual and augmented reality.
High-quality 3D indoor scenes are in high demand, yet designing them manually requires expertise and is time-consuming.
We propose SCENEHGN, a hierarchical graph network for 3D indoor scenes that takes into account the full hierarchy from the room level to the object level, then finally to the object part level.
For the first time, our method is able to directly generate plausible 3D room content, including furniture objects with fine-grained geometry, and their layout.
- Score: 92.24144643757963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D indoor scenes are widely used in computer graphics, with applications
ranging from interior design to gaming to virtual and augmented reality. They
also contain rich information, including room layout, as well as furniture
type, geometry, and placement. High-quality 3D indoor scenes are in high
demand, yet designing them manually requires expertise and is
time-consuming. Existing research only addresses
partial problems: some works learn to generate room layout, and other works
focus on generating detailed structure and geometry of individual furniture
objects. However, these partial steps are related and should be addressed
together for optimal synthesis. We propose SCENEHGN, a hierarchical graph
network for 3D indoor scenes that takes into account the full hierarchy from
the room level to the object level, then finally to the object part level.
Therefore, for the first time, our method is able to directly generate plausible
3D room content, including furniture objects with fine-grained geometry, and
their layout. To address the challenge, we introduce functional regions as
intermediate proxies between the room and object levels to make learning more
manageable. To ensure plausibility, our graph-based representation incorporates
both vertical edges connecting child nodes with parent nodes from different
levels, and horizontal edges encoding relationships between nodes at the same
level. Extensive experiments demonstrate that our method produces superior
generation results, even when comparing the outputs of individual steps with
alternative methods that can only address those partial problems. We also
demonstrate that our method is effective for various applications such as
part-level room editing, room interpolation, and room generation from
arbitrary room boundaries.
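To make the hierarchical representation concrete, the sketch below shows one way such a room-to-part scene graph could be organized, with vertical parent-child edges and horizontal same-level relationship edges. It is a minimal illustration only; the class, level, and relation names (SceneNode, add_child, "adjacent") are hypothetical and not taken from the authors' code.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Levels of the hierarchy described in the abstract: room -> functional
# region -> object -> object part. Names here are illustrative.
LEVELS = ("room", "functional_region", "object", "part")

@dataclass
class SceneNode:
    level: str                                   # one of LEVELS
    label: str                                   # e.g. "bedroom", "bed", "bed_leg"
    parent: Optional["SceneNode"] = None         # vertical edge to the level above
    children: List["SceneNode"] = field(default_factory=list)
    # Horizontal edges: relationships to nodes at the same level,
    # stored as (relation_name, other_node) pairs.
    horizontal: List[Tuple[str, "SceneNode"]] = field(default_factory=list)

    def add_child(self, child: "SceneNode") -> "SceneNode":
        """Attach a child one level down; the parent link is the vertical edge."""
        assert LEVELS.index(child.level) == LEVELS.index(self.level) + 1
        child.parent = self
        self.children.append(child)
        return child

# A toy hierarchy: functional regions act as intermediate proxies between
# the room and its objects, as the abstract describes.
room = SceneNode("room", "bedroom")
sleeping = room.add_child(SceneNode("functional_region", "sleeping_area"))
bed = sleeping.add_child(SceneNode("object", "bed"))
nightstand = sleeping.add_child(SceneNode("object", "nightstand"))
bed.add_child(SceneNode("part", "bed_frame"))
# A horizontal edge encoding a same-level spatial relationship.
bed.horizontal.append(("adjacent", nightstand))
```

In this reading, the functional-region level bounds each node's fan-out, which is one way to interpret why the abstract calls learning over the full hierarchy "more manageable."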
Related papers
- SceneCraft: Layout-Guided 3D Scene Generation [29.713491313796084]
SceneCraft is a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences.
Our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.
arXiv Detail & Related papers (2024-10-11T17:59:58Z) - Disentangled 3D Scene Generation with Layout Learning [109.03233745767062]
We introduce a method to generate 3D scenes that are disentangled into their component objects.
Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene.
We show that despite its simplicity, our approach successfully generates 3D scenes decomposed into individual objects.
arXiv Detail & Related papers (2024-02-26T18:54:15Z) - ControlRoom3D: Room Generation using Semantic Proxy Rooms [48.93419701713694]
We present ControlRoom3D, a novel method to generate high-quality room meshes.
Central to our approach is a user-defined 3D semantic proxy room that outlines a rough room layout.
When rendered to 2D, this 3D representation provides valuable geometric and semantic information to control powerful 2D models.
arXiv Detail & Related papers (2023-12-08T17:55:44Z) - Ctrl-Room: Controllable Text-to-3D Room Meshes Generation with Layout Constraints [35.073500525250346]
We present Ctrl-Room, which can generate convincing 3D rooms with designer-style layouts and high-fidelity textures from just a text prompt.
Ctrl-Room enables versatile interactive editing operations such as resizing or moving individual furniture items.
arXiv Detail & Related papers (2023-10-05T15:29:52Z) - Generating Visual Spatial Description via Holistic 3D Scene
Understanding [88.99773815159345]
Visual spatial description (VSD) aims to generate texts that describe the spatial relations of the given objects within images.
With an external 3D scene extractor, we obtain the 3D objects and scene features for input images.
We construct a target object-centered 3D spatial scene graph (Go3D-S2G), such that we model the spatial semantics of target objects within the holistic 3D scenes.
arXiv Detail & Related papers (2023-05-19T15:53:56Z) - RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent
- RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent Geometry and Texture [80.0643976406225]
We propose "RoomDreamer", which leverages powerful natural language to synthesize a new room with a different style.
Our work addresses the challenge of synthesizing both geometry and texture aligned to the input scene structure and prompt simultaneously.
To validate the proposed method, real indoor scenes scanned with smartphones are used for extensive experiments.
arXiv Detail & Related papers (2023-05-18T22:57:57Z) - Structured Graph Variational Autoencoders for Indoor Furniture layout
Generation [7.035614458419328]
We present a structured graph variational autoencoder for generating the layout of indoor 3D scenes.
The architecture consists of a graph encoder that maps the input graph to a structured latent space, and a graph decoder that generates a furniture graph.
Experiments on the 3D-FRONT dataset show that our method produces scenes that are diverse and are adapted to the room layout.
arXiv Detail & Related papers (2022-04-11T04:58:26Z) - 3D-Aware Indoor Scene Synthesis with Depth Priors [62.82867334012399]
- 3D-Aware Indoor Scene Synthesis with Depth Priors [62.82867334012399]
Existing methods fail to model indoor scenes due to the large diversity of room layouts and the objects inside.
We argue that indoor scenes do not have a shared intrinsic structure, and hence only using 2D images cannot adequately guide the model with the 3D geometry.
arXiv Detail & Related papers (2022-02-17T09:54:29Z) - Indoor Scene Generation from a Collection of Semantic-Segmented Depth
Images [18.24156991697044]
We present a method for creating 3D indoor scenes with a generative model learned from semantic-segmented depth images.
Given a room with a specified size, our method automatically generates 3D objects in a room from a randomly sampled latent code.
Compared to existing methods, our method not only efficiently reduces the workload of modeling and acquiring 3D scenes for training, but also produces better object shapes.
arXiv Detail & Related papers (2021-08-20T06:22:49Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.