Graph2Plan: Learning Floorplan Generation from Layout Graphs
- URL: http://arxiv.org/abs/2004.13204v1
- Date: Mon, 27 Apr 2020 23:17:36 GMT
- Title: Graph2Plan: Learning Floorplan Generation from Layout Graphs
- Authors: Ruizhen Hu, Zeyu Huang, Yuhan Tang, Oliver van Kaick, Hao Zhang, Hui
Huang
- Abstract summary: We introduce a learning framework for automated floorplan generation using deep neural networks and user-in-the-loop designs.
The core component of our learning framework is a deep neural network, Graph2Plan, which converts a layout graph, along with a building boundary, into a floorplan.
- Score: 22.96011587272246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a learning framework for automated floorplan generation which
combines generative modeling using deep neural networks and user-in-the-loop
designs to enable human users to provide sparse design constraints. Such
constraints are represented by a layout graph. The core component of our
learning framework is a deep neural network, Graph2Plan, which converts a
layout graph, along with a building boundary, into a floorplan that fulfills
both the layout and boundary constraints. Given an input building boundary, we
allow a user to specify room counts and other layout constraints, which are
used to retrieve a set of floorplans, with their associated layout graphs, from
a database. For each retrieved layout graph, along with the input boundary,
Graph2Plan first generates a corresponding raster floorplan image, and then a
refined set of boxes representing the rooms. Graph2Plan is trained on RPLAN, a
large-scale dataset consisting of 80K annotated floorplans. The network is
mainly based on convolutional processing over both the layout graph, via a
graph neural network (GNN), and the input building boundary, as well as the
raster floorplan images, via conventional image convolution.
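To make the pipeline concrete, here is a minimal sketch of the two-branch idea, assuming PyTorch: a small message-passing network processes the layout graph, a convolutional encoder processes the rasterized building boundary, and the fused features regress one box per room node. The layer sizes, fusion scheme, and single box head are illustrative placeholders, not the paper's actual Graph2Plan architecture.

```python
# Illustrative Graph2Plan-style sketch (assumed PyTorch); not the paper's exact network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNNLayer(nn.Module):
    """One round of mean-aggregation message passing over the layout graph."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: [N, dim] node features, adj: [N, N] 0/1 adjacency of the layout graph
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                      # mean over graph neighbors
        return F.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class BoundaryCNN(nn.Module):
    """Encodes the rasterized building boundary into a global feature vector."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, boundary):                   # boundary: [1, 1, H, W]
        return self.proj(self.net(boundary).flatten(1))

class Graph2PlanSketch(nn.Module):
    def __init__(self, node_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden)
        self.gnn = nn.ModuleList([SimpleGNNLayer(hidden) for _ in range(3)])
        self.cnn = BoundaryCNN(hidden)
        self.box_head = nn.Linear(2 * hidden, 4)   # (x, y, w, h) per room

    def forward(self, node_feats, adj, boundary):
        h = F.relu(self.embed(node_feats))         # [N, hidden] room-node features
        for layer in self.gnn:
            h = layer(h, adj)
        g = self.cnn(boundary)                     # [1, hidden] global boundary code
        fused = torch.cat([h, g.expand(h.size(0), -1)], dim=-1)
        return self.box_head(fused)                # one box per room node
```

In the paper's pipeline, a raster floorplan image is generated first and the room boxes are then refined from it; this sketch collapses those steps into a single box regression.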
Related papers
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
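As a rough illustration of combining graph convolutions and self-attention in a single Transformer-style encoder block (assuming PyTorch; this is not the GTGAN formulation, and its masked pre-training is not shown):

```python
# Sketch of an encoder block mixing local graph convolution with global self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConvAttnBlock(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.gc = nn.Linear(dim, dim)                    # graph-conv branch (local)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj):
        # x: [B, N, dim] node features, adj: [B, N, N] row-normalized adjacency
        local = F.relu(adj @ self.gc(x))                 # local neighborhood mixing
        global_, _ = self.attn(x, x, x)                  # global all-pairs interactions
        return self.norm(x + local + global_)            # residual + normalization
```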
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- DiagrammerGPT: Generating Open-Domain, Open-Platform Diagrams via LLM Planning [62.51232333352754]
Text-to-image (T2I) generation has seen significant growth over the past few years.
Despite this, there has been little work on generating diagrams with T2I models.
We present DiagrammerGPT, a novel two-stage text-to-diagram generation framework.
We show that our framework produces more accurate diagrams, outperforming existing T2I models.
arXiv Detail & Related papers (2023-10-18T17:37:10Z)
- Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The GTGAN learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z)
- Graph-based Global Robot Localization Informing Situational Graphs with Architectural Graphs [8.514420632209811]
We develop a method for converting the plan of a building into what we denote as an architectural graph (A-Graph).
When the robot starts moving in an environment, we assume it has no prior knowledge of that environment, and it estimates an online situational graph representation (S-Graph) of its surroundings.
We develop a novel graph-to-graph matching method, in order to relate the S-Graph estimated online from the robot sensors and the A-Graph extracted from the building plans.
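A toy rendition of graph-to-graph matching, assuming SciPy and per-node descriptors for both graphs: compute pairwise descriptor costs and solve the assignment with the Hungarian algorithm. The paper's actual matching is more involved; this descriptor-only assignment is only a placeholder.

```python
# Toy node matching between an online S-Graph and a building's A-Graph.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_graphs(s_feats: np.ndarray, a_feats: np.ndarray):
    """s_feats: [Ns, d] node descriptors of the S-Graph,
       a_feats: [Na, d] node descriptors of the A-Graph.
       Returns (s_idx, a_idx) pairs minimizing total descriptor distance."""
    cost = np.linalg.norm(s_feats[:, None, :] - a_feats[None, :, :], axis=-1)
    s_idx, a_idx = linear_sum_assignment(cost)       # Hungarian assignment
    return list(zip(s_idx.tolist(), a_idx.tolist()))
```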
arXiv Detail & Related papers (2023-03-03T16:48:38Z)
- End-to-end Graph-constrained Vectorized Floorplan Generation with Panoptic Refinement [16.103152098205566]
We aim to synthesize floorplans as sequences of 1-D vectors, which eases user interaction and design customization.
In the first stage, we encode the room connectivity graph input by users with a graph convolutional network (GCN), then apply an autoregressive transformer network to generate an initial floorplan sequence.
To polish the initial design and generate more visually appealing floorplans, we further propose a novel panoptic refinement network (PRN) composed of a GCN and a transformer network.
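A hedged sketch of the first stage, assuming PyTorch: a graph convolutional encoder over the user's room-connectivity graph provides the memory that an autoregressive Transformer decoder attends to while emitting a 1-D floorplan token sequence. Vocabulary, dimensions, and tokenization are placeholders, and the panoptic refinement network is omitted.

```python
# Graph-conditioned autoregressive sequence decoding (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphToSequence(nn.Module):
    def __init__(self, vocab=256, dim=128, layers=2):
        super().__init__()
        self.gcn1 = nn.Linear(dim, dim)
        self.gcn2 = nn.Linear(dim, dim)
        self.tok = nn.Embedding(vocab, dim)
        dec_layer = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=layers)
        self.out = nn.Linear(dim, vocab)

    def forward(self, node_feats, adj, tokens):
        # node_feats: [B, N, dim], adj: [B, N, N] row-normalized, tokens: [B, T]
        h = F.relu(adj @ self.gcn1(node_feats))
        memory = F.relu(adj @ self.gcn2(h))              # graph encoding as decoder memory
        tgt = self.tok(tokens)
        T = tokens.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                       device=tokens.device), diagonal=1)
        dec = self.decoder(tgt, memory, tgt_mask=causal) # causal decoding over the graph memory
        return self.out(dec)                             # next-token logits
```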
arXiv Detail & Related papers (2022-07-27T03:19:20Z)
- Room Classification on Floor Plan Graphs using Graph Neural Networks [0.0]
We present our approach to improving the room classification task on floor plan maps of buildings by representing floor plans as undirected graphs.
Rooms in the floor plans are represented as nodes in the graph with edges representing their adjacency in the map.
Our results show that graph neural networks, specifically GraphSAGE and Topology Adaptive GCN, achieve accuracies of 80% and 81%, respectively.
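A minimal room-classification model in this spirit, assuming PyTorch Geometric, with GraphSAGE layers standing in for the models named above and placeholder feature and class counts:

```python
# Room classification over a floorplan adjacency graph (illustrative sketch).
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class RoomClassifier(torch.nn.Module):
    def __init__(self, in_feats=16, hidden=64, num_classes=8):
        super().__init__()
        self.conv1 = SAGEConv(in_feats, hidden)
        self.conv2 = SAGEConv(hidden, num_classes)

    def forward(self, x, edge_index):
        # x: [num_rooms, in_feats] per-room features (e.g. area, aspect ratio)
        # edge_index: [2, num_edges] room-adjacency edges
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)        # per-room class logits
```

Swapping SAGEConv for TAGConv (also available in PyTorch Geometric) gives the Topology Adaptive GCN variant mentioned in the abstract.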
arXiv Detail & Related papers (2021-08-12T19:59:22Z)
- Graph-Based Generative Representation Learning of Semantically and Behaviorally Augmented Floorplans [12.488287536032747]
We present a floorplan embedding technique that uses an attributed graph to represent the geometric information as well as design semantics and behavioral features of the inhabitants as node and edge attributes.
A Long Short-Term Memory (LSTM) Variational Autoencoder (VAE) architecture is proposed and trained to embed attributed graphs as vectors in a continuous space.
A user study is conducted to evaluate the coupling of similar floorplans retrieved from the embedding space with respect to a given input.
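A minimal sketch of the LSTM VAE idea, assuming PyTorch: the attributed graph is serialized as a sequence of node-attribute vectors, encoded to a Gaussian latent, and decoded back. The traversal order, attribute dimensionality, and handling of edge attributes are assumptions, not the paper's design.

```python
# LSTM variational autoencoder over a serialized attributed floorplan graph.
import torch
import torch.nn as nn

class GraphSeqVAE(nn.Module):
    def __init__(self, attr_dim=10, hidden=128, latent=32):
        super().__init__()
        self.enc = nn.LSTM(attr_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.from_z = nn.Linear(latent, hidden)
        self.dec = nn.LSTM(attr_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, attr_dim)

    def forward(self, seq):
        # seq: [B, N_nodes, attr_dim] node attributes in a fixed traversal order
        _, (h, _) = self.enc(seq)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        h0 = self.from_z(z).unsqueeze(0)                          # latent initializes decoder
        dec_out, _ = self.dec(seq, (h0, torch.zeros_like(h0)))    # reconstruct input sequence
        return self.out(dec_out), mu, logvar                      # use mu as the floorplan embedding
```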
arXiv Detail & Related papers (2020-12-08T20:51:56Z)
- Plan2Vec: Unsupervised Representation Learning by Latent Plans [106.37274654231659]
We introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.
Plan2vec constructs a weighted graph on an image dataset using near-neighbor distances, and then extrapolates this local metric to a global embedding by distilling a path integral over planned paths.
We demonstrate the effectiveness of plan2vec on one simulated and two challenging real-world image datasets.
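The construction can be illustrated in a few lines, assuming NumPy, SciPy, and PyTorch: build a k-nearest-neighbor graph from local feature distances, take shortest-path lengths as the global metric, and fit an embedding whose Euclidean distances distill those planned path lengths. The feature extractor, neighborhood size, and optimizer settings are placeholders.

```python
# Toy plan2vec-style construction: local k-NN metric -> global planned distances -> embedding.
import numpy as np
import torch
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def global_distances(feats: np.ndarray, k: int = 5) -> np.ndarray:
    d = cdist(feats, feats)                        # local (near-neighbor) metric
    knn = np.zeros_like(d)                         # zeros mark non-edges in the dense graph
    for i in range(len(d)):
        idx = np.argsort(d[i])[1:k + 1]            # k nearest neighbors of node i
        knn[i, idx] = d[i, idx]
    D = shortest_path(knn, method="D", directed=False)   # planned ("path-integral") metric
    finite = D[np.isfinite(D)]
    return np.where(np.isinf(D), finite.max(), D)  # cap distances of unreachable pairs

def fit_embedding(D: np.ndarray, dim: int = 2, steps: int = 500) -> torch.Tensor:
    Z = torch.randn(len(D), dim, requires_grad=True)
    target = torch.tensor(D, dtype=torch.float32)
    opt = torch.optim.Adam([Z], lr=0.05)
    for _ in range(steps):
        loss = ((torch.cdist(Z, Z) - target) ** 2).mean()   # match planned distances
        opt.zero_grad(); loss.backward(); opt.step()
    return Z.detach()
```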
arXiv Detail & Related papers (2020-05-07T17:52:23Z)
- House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation [59.86153321871127]
The main idea is to encode the constraint into the graph structure of its relational networks.
We have demonstrated the proposed architecture for a new house layout generation problem.
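One way to read "encoding the constraint into the graph structure" is a generator that refines per-room latent codes by message passing over the input layout graph; the sketch below, assuming PyTorch, is illustrative and not House-GAN's actual relational architecture or its discriminator.

```python
# Relational generator sketch: per-room noise refined along the layout graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalGenerator(nn.Module):
    def __init__(self, noise_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Linear(noise_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.box = nn.Linear(hidden, 4)            # per-room (x, y, w, h)

    def forward(self, z, adj):
        # z: [N_rooms, noise_dim] per-room noise, adj: [N, N] layout-graph adjacency
        h = F.relu(self.embed(z))
        for _ in range(3):                         # relational refinement rounds
            deg = adj.sum(1, keepdim=True).clamp(min=1)
            h = F.relu(h + (adj @ self.msg(h)) / deg)
        return torch.sigmoid(self.box(h))          # normalized room boxes
```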
arXiv Detail & Related papers (2020-03-16T03:16:12Z)
- Hallucinative Topological Memory for Zero-Shot Visual Planning [86.20780756832502]
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline.
Most previous works on VP approached the problem by planning in a learned latent space, resulting in low-quality visual plans.
Here, we propose a simple VP method that plans directly in image space and displays competitive performance.
arXiv Detail & Related papers (2020-02-27T18:54:42Z)