Generative Design through Quality-Diversity Data Synthesis and Language Models
- URL: http://arxiv.org/abs/2405.09997v1
- Date: Thu, 16 May 2024 11:30:08 GMT
- Title: Generative Design through Quality-Diversity Data Synthesis and Language Models
- Authors: Adam Gaier, James Stoddart, Lorenzo Villaggi, Shyam Sudhakaran
- Abstract summary: Two fundamental challenges face generative models in engineering applications: the acquisition of high-performing, diverse datasets, and the adherence to precise constraints in generated designs.
We propose a novel approach combining optimization, constraint satisfaction, and language models to tackle these challenges in architectural design.
- Score: 5.196236145367301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two fundamental challenges face generative models in engineering applications: the acquisition of high-performing, diverse datasets, and the adherence to precise constraints in generated designs. We propose a novel approach combining optimization, constraint satisfaction, and language models to tackle these challenges in architectural design. Our method uses Quality-Diversity (QD) to generate a diverse, high-performing dataset. We then fine-tune a language model with this dataset to generate high-level designs. These designs are then refined into detailed, constraint-compliant layouts using the Wave Function Collapse algorithm. Our system demonstrates reliable adherence to textual guidance, enabling the generation of layouts with targeted architectural and performance features. Crucially, our results indicate that data synthesized through the evolutionary search of QD not only improves overall model performance but is essential for the model's ability to closely adhere to textual guidance. This improvement underscores the pivotal role evolutionary computation can play in creating the datasets key to training generative models for design. Web article at https://tilegpt.github.io
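As a concrete illustration of the Quality-Diversity step, the sketch below runs a minimal MAP-Elites loop on a toy problem. The abstract does not specify the paper's archive layout, behavior descriptors, or genome encoding, so the `fitness`, `descriptor`, and mutation choices here are stand-in assumptions, not the authors' setup.

```python
# Minimal MAP-Elites sketch of the Quality-Diversity step. The paper's actual
# domain (architectural layouts) and descriptors are not given in the abstract;
# `fitness` and `descriptor` below are toy stand-ins.
import random

GRID = 10                      # cells per behavior dimension
archive = {}                   # (i, j) -> (fitness, genome)

def fitness(x):                # stand-in performance measure
    return -sum(v * v for v in x)

def descriptor(x):             # stand-in 2-D behavior descriptor in [0, 1)^2
    return (abs(x[0]) % 1.0, abs(x[1]) % 1.0)

def cell(d):                   # map descriptor to an archive cell
    return (min(int(d[0] * GRID), GRID - 1), min(int(d[1] * GRID), GRID - 1))

def mutate(x, sigma=0.1):
    return [v + random.gauss(0, sigma) for v in x]

for _ in range(20000):
    if archive and random.random() < 0.9:
        parent = random.choice(list(archive.values()))[1]  # select an elite
        child = mutate(parent)
    else:
        child = [random.uniform(-2, 2) for _ in range(4)]  # random bootstrap
    f, c = fitness(child), cell(descriptor(child))
    if c not in archive or f > archive[c][0]:              # keep per-cell best
        archive[c] = (f, child)

# The filled archive plays the role of the diverse, high-performing dataset:
# one elite per behavior niche, later used to fine-tune the language model.
print(f"{len(archive)} elites discovered")
```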
Related papers
- Exploring the Landscape for Generative Sequence Models for Specialized Data Synthesis [0.0]
This paper introduces a novel approach that leverages three generative models of varying complexity to synthesize malicious network traffic.
Our approach transforms numerical data into text, re-framing data generation as a language modeling task.
Our method surpasses state-of-the-art generative models in producing high-fidelity synthetic data.
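A minimal sketch of that re-framing follows: each numeric record is serialized to a text line an LM can be trained on, and generated lines are parsed back. The field names and delimiter are hypothetical; the paper's actual encoding is not given in this summary.

```python
# Re-framing tabular/numeric generation as language modeling: serialize each
# record to text for LM training and parse generated lines back. The field
# scheme here is hypothetical.
FIELDS = ["duration_ms", "src_bytes", "dst_bytes"]

def to_text(row):
    return " | ".join(f"{k}={v}" for k, v in zip(FIELDS, row))

def from_text(line):
    parts = dict(kv.split("=") for kv in line.split(" | "))
    return [float(parts[k]) for k in FIELDS]

row = [12.0, 4096.0, 512.0]
line = to_text(row)            # "duration_ms=12.0 | src_bytes=4096.0 | ..."
assert from_text(line) == row  # round-trip check before LM fine-tuning
```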
arXiv Detail & Related papers (2024-11-04T09:51:10Z)
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present ReverseGen, a novel approach designed to automatically generate effective training samples by eliciting and learning from a target model's failures.
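A schematic of such a failure-inducing loop is sketched below; `proposer_generate`, `target_answer`, and `is_failure` are hypothetical stubs standing in for actual LLM calls and evaluators.

```python
# Failure-inducing data synthesis in the spirit of ReverseGen: propose queries,
# probe the target model, and keep the cases where it fails as training data.
def failure_inducing_synthesis(proposer_generate, target_answer, is_failure,
                               rounds=5, batch=64):
    training_set = []
    feedback = []                      # past failures guide the next proposals
    for _ in range(rounds):
        queries = proposer_generate(feedback, n=batch)   # explore weak spots
        for q in queries:
            answer = target_answer(q)
            if is_failure(q, answer):                    # target model failed
                training_set.append((q, answer))
                feedback.append(q)
    return training_set                # effective samples for fine-tuning
```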
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Exploring the design space of deep-learning-based weather forecasting systems [56.129148006412855]
This paper systematically analyzes the impact of different design choices on deep-learning-based weather forecasting systems.
We study fixed-grid architectures such as UNet, fully convolutional architectures, and transformer-based models.
We propose a hybrid system that combines the strong performance of fixed-grid models with the flexibility of grid-invariant architectures.
arXiv Detail & Related papers (2024-10-09T22:25:50Z) - Data-Juicer Sandbox: A Comprehensive Suite for Multimodal Data-Model Co-development [67.55944651679864]
We present a novel sandbox suite tailored for integrated data-model co-development.
This sandbox provides a comprehensive experimental platform, enabling rapid iteration and insight-driven refinement of both data and models.
We also report insights from exhaustive benchmarks that shed light on the critical interplay between data quality, diversity, and model behavior.
arXiv Detail & Related papers (2024-07-16T14:40:07Z) - Bridging Design Gaps: A Parametric Data Completion Approach With Graph Guided Diffusion Models [9.900586490845694]
This study introduces a generative imputation model leveraging graph attention networks and tabular diffusion models for completing missing parametric data in engineering designs.
We demonstrate that our model significantly outperforms existing methods such as MissForest, hotDeck, PPCA, and TabCSDI in both the accuracy and diversity of imputation options.
The graph model helps accurately capture and impute complex parametric interdependencies from an assembly graph, which is key for design problems.
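One way to picture diffusion-based imputation is the skeleton below: known parameters are clamped at every reverse step while missing ones are denoised. This is a generic inpainting-style loop, not the paper's graph-conditioned model; `denoise_step` stands in for a trained denoiser.

```python
# Generic diffusion-imputation skeleton for a tabular design vector.
import numpy as np

def impute(x_obs, mask, denoise_step, T=50, rng=np.random.default_rng(0)):
    """x_obs: observed values (entries where mask == 0 are ignored);
    mask: 1 where a parameter is known, 0 where it is missing."""
    x = rng.standard_normal(x_obs.shape)          # start from pure noise
    for t in reversed(range(T)):
        x = denoise_step(x, t)                    # one reverse-diffusion step
        noise_level = t / T                       # crude schedule for clamping
        x_known = x_obs + noise_level * rng.standard_normal(x_obs.shape)
        x = mask * x_known + (1 - mask) * x       # clamp known entries
    return x
```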
arXiv Detail & Related papers (2024-06-17T16:03:17Z) - Text2VP: Generative AI for Visual Programming and Parametric Modeling [6.531561475204309]
This study develops and evaluates an innovative application of generative AI in parametric modeling, leveraging a customized Text-to-Visual Programming (Text2VP) GPT derived from GPT-4.
The primary focus is on automating the generation of graph-based visual programming, including parameters and the links among the parameters, through AI-generated scripts.
Our testing demonstrates Text2VP's capability to generate working parametric models.
arXiv Detail & Related papers (2024-06-09T02:22:20Z) - PosterLLaVa: Constructing a Unified Multi-modal Layout Generator with LLM [58.67882997399021]
Our research introduces a unified framework for automated graphic layout generation.
Our data-driven method employs structured text (JSON format) and visual instruction tuning to generate layouts.
We conduct extensive experiments and achieve state-of-the-art (SOTA) performance on public multi-modal layout generation benchmarks.
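The summary mentions structured JSON plus visual instruction tuning; the snippet below shows one plausible shape of such a training sample. The schema, field names, and canvas size are hypothetical, not taken from the paper.

```python
# Hypothetical instruction-tuning sample for JSON-based layout generation.
import json

sample = {
    "instruction": "Place a title, a product image, and a CTA button "
                   "on a 600x800 poster.",
    "layout": [  # each element: type plus a normalized bounding box
        {"type": "title",  "box": [0.10, 0.05, 0.90, 0.15]},
        {"type": "image",  "box": [0.15, 0.20, 0.85, 0.65]},
        {"type": "button", "box": [0.35, 0.75, 0.65, 0.85]},
    ],
}
# The model is trained to emit the `layout` JSON given the instruction
# (and, in the multi-modal case, the visual content to be arranged).
print(json.dumps(sample["layout"], indent=2))
```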
arXiv Detail & Related papers (2024-06-05T03:05:52Z) - Diffusion Model for Data-Driven Black-Box Optimization [54.25693582870226]
We focus on diffusion models, a powerful generative AI technology, and investigate their potential for black-box optimization.
We study two practical types of labels: 1) noisy measurements of a real-valued reward function and 2) human preference based on pairwise comparisons.
Our proposed method reformulates the design optimization problem into a conditional sampling problem, which allows us to leverage the power of diffusion models.
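To make "design optimization as conditional sampling" concrete, here is a toy Langevin sampler that draws designs from a prior tilted by a reward, p(x) ∝ p_prior(x)·exp(λ·r(x)). The explicit Gaussian prior and quadratic reward are stand-ins; the paper works with trained diffusion models and noisy or preference-based labels.

```python
# Toy conditional sampling for black-box optimization: Langevin steps on
# grad[log p(x) + lambda * r(x)], balancing reward against staying on-prior.
import numpy as np

rng = np.random.default_rng(0)

def grad_log_prior(x):           # N(0, I) stand-in prior: grad log p(x) = -x
    return -x

def grad_reward(x):              # stand-in reward peaked at x = [2, 2]
    return -(x - 2.0)

def sample_design(lam=4.0, steps=500, eps=1e-2, dim=2):
    x = rng.standard_normal(dim)
    for _ in range(steps):
        g = grad_log_prior(x) + lam * grad_reward(x)
        x = x + 0.5 * eps * g + np.sqrt(eps) * rng.standard_normal(dim)
    return x

# Samples concentrate between the prior mode (0) and the reward peak (2),
# at roughly lam * 2 / (1 + lam) = 1.6 per coordinate.
print(sample_design())
```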
arXiv Detail & Related papers (2024-03-20T00:41:12Z) - Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek input variables that optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid the adversarial examples that arise when optimizing directly over a learned forward model.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
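The idea of composing learned models at test time can be shown with two toy energy functions whose sum is minimized; the quadratic energies below are stand-ins for the diffusion-model energies in the paper.

```python
# Test-time composition: minimize the sum of energies from independently
# learned models so the design satisfies both.
import numpy as np

def energy_a(x):                  # e.g., physics-model energy (stand-in)
    return np.sum((x - 1.0) ** 2)

def energy_b(x):                  # e.g., boundary-shape energy (stand-in)
    return np.sum((x + 1.0) ** 2)

def grad(f, x, h=1e-5):           # simple finite-difference gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = h
        g[i] = (f(x + d) - f(x - d)) / (2 * h)
    return g

x = np.array([3.0, -2.0, 0.5])
for _ in range(200):              # gradient descent on the composed energy
    x -= 0.05 * (grad(energy_a, x) + grad(energy_b, x))
print(x)                          # settles near 0, the compromise of both models
```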
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Towards Goal, Feasibility, and Diversity-Oriented Deep Generative Models in Design [4.091593765662773]
We present the first Deep Generative Model that simultaneously optimizes for performance, feasibility, diversity, and target achievement.
Methods are tested on a challenging multi-objective bicycle frame design problem with skewed, multimodal data of different data types.
arXiv Detail & Related papers (2022-06-14T20:57:23Z)
- Design Target Achievement Index: A Differentiable Metric to Enhance Deep Generative Models in Multi-Objective Inverse Design [4.091593765662773]
Design Target Achievement Index (DTAI) is a differentiable, tunable metric that scores a design's ability to achieve designer-specified minimum performance targets.
We apply DTAI to a Performance-Augmented Diverse GAN (PaDGAN) and demonstrate superior generative performance compared to a set of baseline Deep Generative Models.
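The exact DTAI formula is defined in the paper; the function below is only an illustrative stand-in showing how a differentiable, tunable score for meeting minimum performance targets can look.

```python
# Illustrative target-achievement score in the spirit of DTAI (not the
# paper's exact formula): each objective's ratio to its minimum target gets
# a saturating bonus above target and a steep penalty below it.
import numpy as np

def target_achievement(perf, targets, alpha=2.0, beta=4.0):
    """perf, targets: arrays of objective values and minimum targets
    (higher is better). alpha tunes the bonus for exceeding a target;
    beta tunes the shortfall penalty. Piecewise-differentiable in `perf`,
    with the two branches meeting continuously at the target."""
    r = perf / targets                                       # achievement ratios
    reward = np.where(r >= 1.0,
                      1.0 + alpha * (1.0 - np.exp(1.0 - r)), # saturating bonus
                      1.0 - beta * (1.0 - r))                # steep shortfall
    return reward.mean()

print(target_achievement(np.array([1.2, 0.9]), np.array([1.0, 1.0])))
```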
arXiv Detail & Related papers (2022-05-06T04:14:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.