End-to-end Graph-constrained Vectorized Floorplan Generation with
Panoptic Refinement
- URL: http://arxiv.org/abs/2207.13268v1
- Date: Wed, 27 Jul 2022 03:19:20 GMT
- Title: End-to-end Graph-constrained Vectorized Floorplan Generation with
Panoptic Refinement
- Authors: Jiachen Liu, Yuan Xue, Jose Duarte, Krishnendra Shekhawat, Zihan Zhou,
Xiaolei Huang
- Abstract summary: We aim to synthesize floorplans as sequences of 1-D vectors, which eases user interaction and design customization.
In the first stage, we encode the room connectivity graph input by users with a graph convolutional network (GCN), then apply an autoregressive transformer network to generate an initial floorplan sequence.
To polish the initial design and generate more visually appealing floorplans, we further propose a novel panoptic refinement network (PRN) composed of a GCN and a transformer network.
- Score: 16.103152098205566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automatic generation of floorplans given user inputs has great potential
in architectural design and has recently been explored in the computer vision
community. However, the majority of existing methods synthesize floorplans in
the format of rasterized images, which are difficult to edit or customize. In
this paper, we aim to synthesize floorplans as sequences of 1-D vectors, which
eases user interaction and design customization. To generate high fidelity
vectorized floorplans, we propose a novel two-stage framework, including a
draft stage and a multi-round refining stage. In the first stage, we encode the
room connectivity graph input by users with a graph convolutional network
(GCN), then apply an autoregressive transformer network to generate an initial
floorplan sequence. To polish the initial design and generate more visually
appealing floorplans, we further propose a novel panoptic refinement network
(PRN) composed of a GCN and a transformer network. The PRN takes the
initial generated sequence as input and refines the floorplan design while
encouraging the correct room connectivity with our proposed geometric loss. We
have conducted extensive experiments on a real-world floorplan dataset, and the
results show that our method achieves state-of-the-art performance under
different settings and evaluation metrics.
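As a rough illustration of the two-stage idea in the abstract, the pure-Python sketch below mimics the first stage: one GCN-style propagation step over the user's room-connectivity graph, followed by a stand-in for the autoregressive decoder that emits a flat 1-D vector sequence. All names (`gcn_step`, `decode_sequence`) and the toy decoder are our own assumptions; the actual model uses learned graph convolutions and a transformer.

```python
# Hypothetical sketch, not the authors' code: GCN-style neighbor averaging,
# then a toy "autoregressive" emission of a 1-D floorplan sequence.

def gcn_step(features, adjacency):
    """One graph-convolution step: average each node with its neighbors."""
    out = []
    for i, feat in enumerate(features):
        neigh = [features[j] for j in adjacency[i]] + [feat]
        dim = len(feat)
        out.append([sum(v[d] for v in neigh) / len(neigh) for d in range(dim)])
    return out

def decode_sequence(node_embeddings):
    """Stand-in for the autoregressive transformer: emit one (offset, e0, e1,
    flag) token group per room, each conditioned on what was emitted so far."""
    seq = []
    for emb in node_embeddings:
        offset = len(seq)  # 'autoregressive' dependence on the running sequence
        seq.extend([offset, round(emb[0], 3), round(emb[1], 3), 1])
    return seq

# Toy input: living room (0) connected to kitchen (1) and bedroom (2).
features = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
adjacency = {0: [1, 2], 1: [0], 2: [0]}

embeddings = gcn_step(features, adjacency)
floorplan_seq = decode_sequence(embeddings)  # flat 1-D vector sequence
```

The second-stage PRN would then take such a sequence back in, together with the graph, and refine it under the geometric connectivity loss.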
Related papers
- Split-and-Fit: Learning B-Reps via Structure-Aware Voronoi Partitioning [50.684254969269546]
We introduce a novel method for acquiring boundary representations (B-Reps) of 3D CAD models.
We apply a spatial partitioning to derive a single primitive within each partition.
We show that our network, coined NVD-Net for neural Voronoi diagrams, can effectively learn Voronoi partitions for CAD models from training data.
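To make the partitioning idea above concrete, here is a brute-force 2-D Voronoi cell assignment; the site positions and the function name `nearest_site` are our own toy assumptions, and NVD-Net learns such partitions rather than computing them directly.

```python
# Illustrative only: assign a query point to its Voronoi cell by finding
# the nearest site under squared Euclidean distance.

def nearest_site(point, sites):
    """Return the index of the Voronoi cell (nearest site) containing point."""
    px, py = point
    return min(range(len(sites)),
               key=lambda i: (sites[i][0] - px) ** 2 + (sites[i][1] - py) ** 2)

sites = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]  # three Voronoi sites
cell = nearest_site((3.5, 0.5), sites)        # falls in the cell of site 1
```

Each partition would then hold a single surface primitive fit to the geometry inside it, as the summary describes.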
arXiv Detail & Related papers (2024-06-07T21:07:49Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Skip-Connected Neural Networks with Layout Graphs for Floor Plan Auto-Generation [0.0]
This paper presents a novel approach using skip-connected neural networks integrated with layout graphs.
The skip-connected layers capture multi-scale floor plan information, and the encoder-decoder networks with GNN facilitate pixel-level probability-based generation.
arXiv Detail & Related papers (2023-09-25T05:20:57Z)
- HouseDiffusion: Vector Floorplan Generation via a Diffusion Model with Discrete and Continuous Denoising [26.2029195029127]
The paper presents a novel approach for vector-floorplan generation via a diffusion model.
We represent a floorplan as 1D polygonal loops, each of which corresponds to a room or a door.
The proposed approach improves upon the state-of-the-art in all the metrics by significant margins.
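The 1-D polygonal-loop representation mentioned above can be illustrated with a toy round-trip encoding (our own simplified version, not HouseDiffusion's actual format): each room or door is a closed loop of 2-D corners, flattened into a single 1-D coordinate vector.

```python
# Hedged illustration: flatten a polygonal room loop to 1-D and back.

def loop_to_1d(corners):
    """Flatten a polygonal loop [(x, y), ...] into a 1-D coordinate list."""
    flat = []
    for x, y in corners:
        flat.extend([x, y])
    return flat

def loop_from_1d(flat):
    """Inverse: rebuild the list of (x, y) corners from the 1-D vector."""
    return [(flat[i], flat[i + 1]) for i in range(0, len(flat), 2)]

room = [(0, 0), (4, 0), (4, 3), (0, 3)]  # a rectangular room, 4 corners
vec = loop_to_1d(room)                   # the 1-D form a model can denoise
```

A diffusion model can then denoise such vectors directly, mixing discrete corner counts with continuous coordinates as the summary notes.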
arXiv Detail & Related papers (2022-11-23T20:25:11Z)
- FloorGenT: Generative Vector Graphic Model of Floor Plans for Robotics [5.71097144710995]
We show that by modelling floor plans as sequences of line segments seen from a particular point of view, recent advances in autoregressive sequence modelling can be leveraged to model and predict floor plans.
arXiv Detail & Related papers (2022-03-07T13:42:48Z)
- Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model [58.17021225930069]
We explain the rationality of Vision Transformer by analogy with the proven practical Evolutionary Algorithm (EA).
We propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly.
Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works.
arXiv Detail & Related papers (2021-05-31T16:20:03Z)
- House-GAN++: Generative Adversarial Layout Refinement Networks [37.60108582423617]
Our architecture is an integration of a graph-constrained GAN and a conditional GAN, where a previously generated layout becomes the next input constraint.
A surprising discovery of our research is that a simple non-iterative training process, dubbed component-wise GT-conditioning, is effective in learning such a generator.
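The feedback loop described above, where a previously generated layout becomes the next input constraint, can be sketched as a simple iteration (an assumed toy loop, not the House-GAN++ implementation; the `refine` step here merely snaps boxes to integer corners as a stand-in for the generator).

```python
# Toy sketch of iterative layout refinement: feed each round's output
# back in as the next round's input constraint.

def refine(layout):
    """Stand-in refinement step: nudge each room box toward integer corners."""
    return [tuple(round(v) for v in box) for box in layout]

def iterative_generation(initial_layout, rounds=3):
    layout = initial_layout
    for _ in range(rounds):
        layout = refine(layout)  # previous output becomes next input
    return layout

noisy = [(0.2, -0.1, 3.9, 3.1), (4.1, 0.0, 7.8, 2.9)]  # (x0, y0, x1, y1) boxes
snapped = iterative_generation(noisy)
```

In the real system each round is a learned conditional GAN pass rather than a deterministic snap, but the control flow is the same.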
arXiv Detail & Related papers (2021-03-03T18:15:52Z)
- Graph2Plan: Learning Floorplan Generation from Layout Graphs [22.96011587272246]
We introduce a learning framework for automated floorplan generation using deep neural networks and user-in-the-loop designs.
The core component of our learning framework is a deep neural network, Graph2Plan, which converts a layout graph, along with a building boundary, into a floorplan.
arXiv Detail & Related papers (2020-04-27T23:17:36Z)
- Latent Space Roadmap for Visual Action Planning of Deformable and Rigid Object Manipulation [74.88956115580388]
Planning is performed in a low-dimensional latent state space that embeds images.
Our framework consists of two main components: a Visual Foresight Module (VFM) that generates a visual plan as a sequence of images, and an Action Proposal Network (APN) that predicts the actions between them.
arXiv Detail & Related papers (2020-03-19T18:43:26Z)
- House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation [59.86153321871127]
The main idea is to encode the constraint into the graph structure of its relational networks.
We have demonstrated the proposed architecture for a new house layout generation problem.
arXiv Detail & Related papers (2020-03-16T03:16:12Z)
- Hallucinative Topological Memory for Zero-Shot Visual Planning [86.20780756832502]
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline.
Most previous works on VP approached the problem by planning in a learned latent space, resulting in low-quality visual plans.
Here, we propose a simple VP method that plans directly in image space and displays competitive performance.
arXiv Detail & Related papers (2020-02-27T18:54:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.