House-GAN++: Generative Adversarial Layout Refinement Networks
- URL: http://arxiv.org/abs/2103.02574v1
- Date: Wed, 3 Mar 2021 18:15:52 GMT
- Title: House-GAN++: Generative Adversarial Layout Refinement Networks
- Authors: Nelson Nauata, Sepidehsadat Hosseini, Kai-Hung Chang, Hang Chu,
Chin-Yi Cheng, Yasutaka Furukawa
- Abstract summary: Our architecture is an integration of a graph-constrained GAN and a conditional GAN, where a previously generated layout becomes the next input constraint.
A surprising discovery of our research is that a simple non-iterative training process, dubbed component-wise GT-conditioning, is effective in learning such a generator.
- Score: 37.60108582423617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel generative adversarial layout refinement network
for automated floorplan generation. Our architecture is an integration of a
graph-constrained relational GAN and a conditional GAN, where a previously
generated layout becomes the next input constraint, enabling iterative
refinement. A surprising discovery of our research is that a simple
non-iterative training process, dubbed component-wise GT-conditioning, is
effective in learning such a generator. The iterative generator also creates a
new opportunity for further improving a metric of choice via meta-optimization
techniques, by controlling when to pass which input constraints during iterative
layout refinement. Our qualitative and quantitative evaluation based on three
standard metrics demonstrates that the proposed system makes significant
improvements over the current state-of-the-art and is even competitive against
ground-truth floorplans designed by professional architects.
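The loop below is a minimal sketch of the iterative refinement idea described in the abstract: a generator produces per-room masks, and the previously generated layout is fed back as the next input constraint. The ToyRelationalGenerator, its input shapes, and the fixed refinement schedule are illustrative assumptions, not the authors' released implementation (the real model uses graph-constrained relational message passing, and a meta-optimizer chooses which constraints to pass at each step).

```python
# Hypothetical sketch of iterative layout refinement; not the official House-GAN++ code.
import torch
import torch.nn as nn


class ToyRelationalGenerator(nn.Module):
    """Stand-in for the graph-constrained relational generator.

    Maps per-room noise, a one-hot room type, and the previous room mask to a
    refined room mask. The actual model uses relational/graph message passing."""

    def __init__(self, num_room_types: int = 10, mask_size: int = 32, z_dim: int = 128):
        super().__init__()
        self.mask_size = mask_size
        in_dim = z_dim + num_room_types + mask_size * mask_size
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, mask_size * mask_size), nn.Tanh(),
        )

    def forward(self, z, room_types, prev_masks):
        x = torch.cat([z, room_types, prev_masks.flatten(1)], dim=-1)
        return self.net(x).view(-1, self.mask_size, self.mask_size)


@torch.no_grad()
def iterative_refinement(generator, room_types, num_iters: int = 4, z_dim: int = 128):
    """Feed the previously generated layout back as the next input constraint."""
    num_rooms = room_types.shape[0]
    # Start from "unconditioned" masks (all -1), i.e. no layout exists yet.
    masks = -torch.ones(num_rooms, generator.mask_size, generator.mask_size)
    for _ in range(num_iters):
        z = torch.randn(num_rooms, z_dim)
        # A meta-optimizer could decide here which rooms keep their previous
        # mask as a hard constraint and which are regenerated; we refine all.
        masks = generator(z, room_types, masks)
    return masks


if __name__ == "__main__":
    gen = ToyRelationalGenerator()
    types = torch.eye(10)[torch.randint(0, 10, (6,))]  # 6 rooms with one-hot types
    layout = iterative_refinement(gen, types)
    print(layout.shape)  # torch.Size([6, 32, 32])
```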
Related papers
- Towards Automated Machine Learning Research [4.169915659794567]
This paper explores a top-down approach to automating incremental advances in machine learning research through component-level innovation.
Our framework systematically generates novel components, validates their feasibility, and evaluates their performance against existing baselines.
By incorporating a reward model to prioritize promising hypotheses, we aim to improve the efficiency of the hypothesis generation and evaluation process.
arXiv Detail & Related papers (2024-09-09T00:47:30Z) - DecompOpt: Controllable and Decomposed Diffusion Models for Structure-based Molecular Optimization [49.85944390503957]
DecompOpt is a structure-based molecular optimization method built on a controllable and decomposed diffusion model.
We show that DecompOpt can efficiently generate molecules with better properties than strong de novo baselines.
arXiv Detail & Related papers (2024-03-07T02:53:40Z) - Generative Structural Design Integrating BIM and Diffusion Model [4.619347136761891]
This study introduces building information modeling (BIM) into intelligent structural design and establishes a structural design pipeline integrating BIM and generative AI.
In terms of generation framework, inspired by the process of human drawing, a novel 2-stage generation framework is proposed to reduce the generation difficulty for AI models.
In terms of generative AI tools adopted, diffusion models (DMs) are introduced to replace widely used generative adversarial network (GAN)-based models, and a novel physics-based conditional diffusion model (PCDM) is proposed to consider different design prerequisites.
arXiv Detail & Related papers (2023-11-07T15:05:19Z) - Rethinking Decision Transformer via Hierarchical Reinforcement Learning [54.3596066989024]
Decision Transformer (DT) is an innovative algorithm leveraging recent advances in the transformer architecture for reinforcement learning (RL).
We introduce a general sequence modeling framework for studying sequential decision making through the lens of Hierarchical RL.
We show DT emerges as a special case of this framework with certain choices of high-level and low-level policies, and discuss the potential failure of these choices.
arXiv Detail & Related papers (2023-11-01T03:32:13Z) - Nonlinear MPC design for incrementally ISS systems with application to
GRU networks [0.0]
This brief addresses the design of a Nonlinear Model Predictive Control (NMPC) strategy for exponentially incremental Input-to-State Stable (ISS) systems.
The proposed methodology is particularly suited for the control of systems learned by Recurrent Neural Networks (RNNs).
The approach is applied to Gated Recurrent Unit (GRU) networks, also providing a method for the design of a tailored state observer with convergence guarantees.
arXiv Detail & Related papers (2023-09-28T13:26:20Z) - End-to-end Graph-constrained Vectorized Floorplan Generation with
Panoptic Refinement [16.103152098205566]
We aim to synthesize floorplans as sequences of 1-D vectors, which eases user interaction and design customization.
In the first stage, we encode the room connectivity graph input by users with a graph convolutional network (GCN), then apply an autoregressive transformer network to generate an initial floorplan sequence.
To polish the initial design and generate more visually appealing floorplans, we further propose a novel panoptic refinement network (PRN) composed of a GCN and a transformer network.
arXiv Detail & Related papers (2022-07-27T03:19:20Z) - Topic-Controllable Summarization: Topic-Aware Evaluation and Transformer Methods [4.211128681972148]
Topic-controllable summarization is an emerging research area with a wide range of potential applications.
This work proposes a new topic-oriented evaluation measure to automatically evaluate the generated summaries.
In addition, we adapt topic embeddings to work with powerful Transformer architectures and propose a novel and efficient approach for guiding the summary generation through control tokens.
arXiv Detail & Related papers (2022-06-09T07:28:16Z) - Revisiting GANs by Best-Response Constraint: Perspective, Methodology,
and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z) - Dynamically Grown Generative Adversarial Networks [111.43128389995341]
We propose a method to dynamically grow a GAN during training, automatically optimizing the network architecture and its parameters together.
The method embeds architecture search techniques as an interleaving step with gradient-based training to periodically seek the optimal architecture-growing strategy for the generator and discriminator.
arXiv Detail & Related papers (2021-06-16T01:25:51Z) - House-GAN: Relational Generative Adversarial Networks for
Graph-constrained House Layout Generation [59.86153321871127]
The main idea is to encode the constraint into the graph structure of its relational networks.
We have demonstrated the proposed architecture for a new house layout generation problem.
arXiv Detail & Related papers (2020-03-16T03:16:12Z)