Generative Pre-Trained Transformer for Design Concept Generation: An Exploration
- URL: http://arxiv.org/abs/2111.08489v1
- Date: Tue, 16 Nov 2021 14:12:08 GMT
- Title: Generative Pre-Trained Transformer for Design Concept Generation: An Exploration
- Authors: Qihao Zhu, Jianxi Luo
- Abstract summary: This paper explores the use of generative pre-trained transformers (GPT) for natural language design concept generation.
Our experiments involve the use of GPT-2 and GPT-3 for different types of creative reasoning in design tasks.
- Score: 6.233117407988574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Novel concepts are essential for design innovation and can be generated with
the aid of data stimuli and computers. However, current generative design
algorithms focus on diagrammatic or spatial concepts that are either too
abstract to understand or too detailed for early-phase design exploration. This
paper explores the use of generative pre-trained transformers (GPT) for
natural language design concept generation. Our experiments involve the use of
GPT-2 and GPT-3 for different types of creative reasoning in design tasks. Both show
reasonably good performance for verbal design concept generation.
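As a rough illustration of the kind of experiment described in the abstract, the following sketch samples verbal design concepts from an off-the-shelf GPT-2 model via the Hugging Face transformers library; the prompt, decoding settings, and stock model are illustrative assumptions, not the fine-tuned setup or prompts used in the paper.
```python
# A minimal sketch: prompting a stock GPT-2 model for verbal design concepts.
# The prompt and sampling settings are illustrative; the paper fine-tunes
# GPT-2 / uses GPT-3 with its own task-specific formats.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

prompt = "Design problem: reduce single-use plastic in food delivery. Design concept:"
concepts = generator(
    prompt,
    max_new_tokens=40,        # keep each generated concept short
    num_return_sequences=3,   # sample several candidate concepts
    do_sample=True,
    top_p=0.95,
)
for c in concepts:
    print(c["generated_text"])
```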
Related papers
- Inspired by AI? A Novel Generative AI System To Assist Conceptual Automotive Design [6.001793288867721]
Design inspiration is crucial for establishing the direction of a design as well as evoking feelings and conveying meanings during the conceptual design process.
Many practicing designers use text-based searches on platforms like Pinterest to gather image ideas, followed by sketching on paper or using digital tools to develop concepts.
Emerging generative AI techniques, such as diffusion models, offer a promising avenue to streamline these processes by swiftly generating design concepts based on text and image inspiration inputs.
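As a loose illustration of the diffusion-based workflow mentioned above (not the system proposed in that paper), a text prompt and a rough inspiration image can be combined with an off-the-shelf img2img pipeline; the model ID, prompt, file names, and strength value below are assumptions.
```python
# A minimal sketch of text+image-conditioned concept generation with a stock
# Stable Diffusion img2img pipeline (illustrative only; not the paper's system).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("inspiration_sketch.png").convert("RGB")  # rough sketch or mood image
prompt = "futuristic electric roadster, concept car exterior, studio render"

result = pipe(prompt=prompt, image=init_image, strength=0.6, guidance_scale=7.5)
result.images[0].save("concept_variant.png")
```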
arXiv Detail & Related papers (2024-06-06T17:04:14Z)
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
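As a very rough sketch of the multi-agent pattern described above (not I-Design's actual implementation), two role-prompted LLM agents could exchange proposals and critiques for a few rounds before the result is grounded in 3D; ask_llm is a placeholder for whatever chat-completion backend is available.
```python
# A toy sketch of two role-prompted LLM agents refining an interior layout
# through dialogue. Not the I-Design implementation; ask_llm is a stub for
# any chat-completion backend (local model, hosted API, etc.).
def ask_llm(system_prompt: str, conversation: list[str]) -> str:
    """Placeholder: send the role prompt plus the conversation to an LLM and return its reply."""
    return "[LLM reply goes here]"  # replace with a real chat-completion call

designer_role = "You propose furniture layouts for the user's room and brief."
critic_role = "You check proposals against the user's constraints and suggest fixes."

conversation = ["User brief: a 20 m^2 study that doubles as a guest room."]
for _ in range(3):  # a few rounds of propose-and-critique
    proposal = ask_llm(designer_role, conversation)
    conversation.append("Designer: " + proposal)
    critique = ask_llm(critic_role, conversation)
    conversation.append("Critic: " + critique)

# In I-Design, the agreed-upon description is then grounded by retrieving and
# placing 3D assets from an object database.
print(conversation[-2])  # final layout proposal
```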
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
- Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z)
- DreamCreature: Crafting Photorealistic Virtual Creatures from Imagination [140.1641573781066]
We introduce a novel task, Virtual Creatures Generation: given a set of unlabeled images of the target concepts, we aim to train a text-to-image (T2I) model capable of creating new, hybrid concepts.
We propose a new method called DreamCreature, which identifies and extracts the underlying sub-concepts.
The T2I model thus adapts to generate novel concepts with faithful structures and photorealistic appearance.
arXiv Detail & Related papers (2023-11-27T01:24:31Z)
- Human Machine Co-Creation. A Complementary Cognitive Approach to Creative Character Design Process Using GANs [0.0]
Two neural networks compete to generate new visual content indistinguishable from the original dataset.
The proposed approach aims to inform the process of perceiving, knowing, and making.
The machine-generated concepts are used as a launching platform for character designers to conceptualize new characters.
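As a generic reminder of the adversarial setup referenced above (two competing networks, not the paper's specific character-design models), here is a minimal GAN training step in PyTorch; the toy architectures and random "real" data are placeholders.
```python
# A minimal GAN training step: a generator and a discriminator compete so that
# generated samples become hard to distinguish from the dataset. Toy-sized
# networks and random data stand in for the paper's character imagery.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(batch, data_dim)   # placeholder for real samples
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator: push real samples toward 1 and fakes toward 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into predicting 1 for fakes.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```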
arXiv Detail & Related papers (2023-11-23T12:18:39Z)
- Using Text-to-Image Generation for Architectural Design Ideation [10.938191897918474]
This study is the first to investigate the potential of text-to-image generators in supporting creativity during the early stages of the architectural design process.
We conducted a laboratory study with 17 architecture students, who developed a concept for a culture center using three popular text-to-image generators.
arXiv Detail & Related papers (2023-04-20T09:46:27Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely attention mechanisms, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
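As a toy illustration of the general idea of updating token representations by descending an energy function (this is not the Energy Transformer's actual energy or update rule), one can run plain gradient descent on a simple Hopfield-style energy over a small set of token vectors.
```python
# Toy illustration only: gradient descent on a Hopfield-style energy over token
# vectors. The real Energy Transformer minimizes a specifically engineered
# energy through its purpose-built attention layers.
import torch

def energy(tokens: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    sim = beta * tokens @ tokens.t()                                      # pairwise similarities
    mask = torch.full_like(sim, 0.0).fill_diagonal_(float("-inf"))       # ignore self-matches
    return -torch.logsumexp(sim + mask, dim=-1).sum() / beta

tokens = torch.randn(8, 16, requires_grad=True)   # 8 tokens, 16-dim features
opt = torch.optim.SGD([tokens], lr=0.05)
for step in range(20):
    opt.zero_grad()
    e = energy(tokens)
    e.backward()
    opt.step()
    if step % 5 == 0:
        print(f"step {step}: energy = {e.item():.3f}")
```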
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
- Biologically Inspired Design Concept Generation Using Generative Pre-Trained Transformers [13.852758740799452]
This paper proposes a generative design approach based on a generative pre-trained language model (PLM).
Three types of design concept generators are identified and fine-tuned from the PLM according to the looseness of the problem space representation.
The approach is evaluated and then employed in a real-world project of designing lightweight flying cars.
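As a compressed sketch of what fine-tuning a pre-trained language model on design-oriented text might look like (the paper itself defines three generator types with their own data formats, which are not reproduced here), the following shows the bare mechanics with Hugging Face transformers; the training strings are placeholders.
```python
# A minimal causal-LM fine-tuning loop for GPT-2 on placeholder design-oriented
# strings. This only shows the mechanics, not the paper's three generator
# variants or their problem/solution data formats.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

examples = [
    "Problem: lightweight wing structure. Concept: honeycomb lattice inspired by bone.",
    "Problem: passive cooling for housings. Concept: termite-mound airflow channels.",
]

model.train()
for epoch in range(2):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])  # causal LM loss
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```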
arXiv Detail & Related papers (2022-12-26T16:06:04Z)
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18,000 synthetically generated instances of a pedestrian bridge in Switzerland.
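As a generic reminder of how a conditional VAE couples design variables with a conditioning vector (a minimal sketch, not the bridge model from that paper; dimensions and random data are placeholders):
```python
# A minimal conditional VAE in PyTorch: encoder and decoder are both conditioned
# on a context vector c (e.g. performance targets), so sampling z ~ N(0, I) with
# a chosen c yields design candidates for that condition. Toy dimensions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, c_dim, z_dim = 12, 3, 4   # design parameters, conditions, latent size

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

model = CVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, c = torch.randn(128, x_dim), torch.randn(128, c_dim)           # placeholder data

for step in range(200):
    recon, mu, logvar = model(x, c)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = F.mse_loss(recon, x) + 0.1 * kl
    opt.zero_grad(); loss.backward(); opt.step()

# Exploration: sample new design candidates for one target condition vector.
with torch.no_grad():
    new_designs = model.dec(torch.cat([torch.randn(5, z_dim),
                                       c[:1].repeat(5, 1)], dim=-1))
```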
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
- Generative Transformers for Design Concept Generation [7.807713821263175]
This study explores recent advances in natural language generation (NLG) techniques in the artificial intelligence (AI) field.
A novel approach utilizing the generative pre-trained transformer (GPT) is proposed to leverage the knowledge and reasoning from textual data.
Three concept generation tasks are defined to leverage different knowledge and reasoning: domain knowledge synthesis, problem-driven synthesis, and analogy-driven synthesis.
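The three tasks lend themselves to different prompt formats; the templates below are illustrative guesses at what such formats could look like, not the prompts actually used in the study.
```python
# Illustrative prompt templates for the three concept generation tasks named
# above; the wording is an assumption, not the study's actual prompt design.
templates = {
    "domain_knowledge_synthesis":
        "Domain: {domain}. Summarize key principles and propose a design concept:",
    "problem_driven_synthesis":
        "Problem: {problem}. Propose a design concept that solves it:",
    "analogy_driven_synthesis":
        "Source domain: {source}. Target problem: {problem}. "
        "Transfer the source principle into a design concept:",
}

prompt = templates["analogy_driven_synthesis"].format(
    source="burr-covered seed pods that cling to animal fur",
    problem="reusable fastening for sports apparel",
)
print(prompt)  # feed this to a GPT-style generator, as in the earlier sketch
```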
arXiv Detail & Related papers (2022-11-07T11:29:10Z)
- Knowledge-Enhanced Personalized Review Generation with Capsule Graph Neural Network [81.81662828017517]
We propose a knowledge-enhanced personalized review generation (PRG) model based on a capsule graph neural network (Caps-GNN).
Our generation process contains two major steps, namely aspect sequence generation and sentence generation.
The incorporated knowledge graph is able to enhance user preference at both aspect and word levels.
arXiv Detail & Related papers (2020-10-04T03:54:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.