Generative Design Ideation: A Natural Language Generation Approach
- URL: http://arxiv.org/abs/2204.09658v1
- Date: Mon, 28 Mar 2022 08:11:29 GMT
- Title: Generative Design Ideation: A Natural Language Generation Approach
- Authors: Qihao Zhu and Jianxi Luo
- Abstract summary: This paper explores a generative approach to knowledge-based design ideation by applying the latest pre-trained language models in artificial intelligence (AI).
The AI-generated ideas are expressed in concise, understandable language and synthesize the target design with external knowledge sources at a controllable knowledge distance.
- Score: 7.807713821263175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper aims to explore a generative approach for knowledge-based design ideation by applying the latest pre-trained language models in artificial intelligence (AI). Specifically, a method of fine-tuning the generative pre-trained transformer using the USPTO patent database is proposed. The AI-generated ideas are expressed in concise, understandable language and can synthesize the target design with external knowledge sources at a controllable knowledge distance. The method is tested in a case study of rolling toy design, and the results show good performance in generating ideas of varied novelty with near-field and far-field source knowledge.
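The abstract describes the core setup as fine-tuning a generative pre-trained transformer on patent text. A minimal sketch of such a setup, using the Hugging Face transformers and datasets libraries, is shown below; the GPT-2 checkpoint, the file name, and all hyperparameters are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: fine-tune a GPT-style model on patent abstracts for
# design ideation. "patent_abstracts.txt" (one abstract per line) and
# all hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "patent_abstracts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-patents",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False yields standard left-to-right language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After fine-tuning, ideas would be generated by prompting the model with a target design description and sampling continuations; the "controllable knowledge distance" mentioned in the abstract comes from conditioning on near-field or far-field source knowledge, which this sketch does not reproduce.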
Related papers
- A Novel Idea Generation Tool using a Structured Conversational AI (CAI) System [0.0]
This paper presents a novel conversational AI-enabled active ideation interface as a creative idea-generation tool to assist novice designers.
It is a dynamic, interactive, and contextually responsive approach that actively involves a large language model (LLM) from the domain of natural language processing (NLP) in artificial intelligence (AI).
Integrating such AI models with ideation creates what we refer to as an Active Ideation scenario, which helps foster continuous dialogue-based interaction, context-sensitive conversation, and prolific idea generation.
arXiv Detail & Related papers (2024-09-09T16:02:27Z)
- Who Writes the Review, Human or AI? [0.36498648388765503]
This study proposes a methodology to accurately distinguish AI-generated and human-written book reviews.
Our approach utilizes transfer learning, enabling the model to identify generated text across different topics.
The experimental results demonstrate that it is feasible to detect the original source of text, achieving an accuracy rate of 96.86%.
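The summary does not give the study's exact architecture; the following is a minimal sketch of the general transfer-learning setup it describes, i.e., repurposing a pre-trained encoder as a binary human-vs-AI classifier. The BERT checkpoint and label convention are assumptions.

```python
# Minimal sketch: a pre-trained encoder fine-tuned (transfer learning)
# as a binary classifier over book reviews. Checkpoint and labels are
# assumptions; the classification head must be trained on labeled
# reviews before predictions are meaningful.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = human, 1 = AI-generated

def predict(texts):
    inputs = tokenizer(texts, padding=True, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).tolist()
```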
arXiv Detail & Related papers (2024-05-30T17:38:44Z)
- LB-KBQA: Large-language-model and BERT based Knowledge-Based Question and Answering System [7.626368876843794]
We propose a novel KBQA system based on a Large Language Model (LLM) and BERT (LB-KBQA).
With the help of generative AI, our proposed method can detect newly appearing intents and acquire new knowledge.
In experiments on financial domain question answering, our model has demonstrated superior effectiveness.
arXiv Detail & Related papers (2024-02-05T16:47:17Z)
- Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation [79.22678026708134]
In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-12T06:36:41Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, which provides a unified perspective for exploiting both structured and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Generative Transformers for Design Concept Generation [7.807713821263175]
This study explores recent advances in natural language generation (NLG) techniques in the artificial intelligence (AI) field.
A novel approach utilizing the generative pre-trained transformer (GPT) is proposed to leverage the knowledge and reasoning from textual data.
Three concept generation tasks are defined to leverage different knowledge and reasoning: domain knowledge synthesis, problem-driven synthesis, and analogy-driven synthesis.
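The summary names the three tasks but not their input formats; one rough way to picture them is as three prompt templates fed to the fine-tuned GPT model. The template wording below is an assumption for illustration, not the paper's actual format.

```python
# Illustrative prompt templates for the three concept generation tasks.
# The wording and field names are assumptions, not the paper's formats.
PROMPTS = {
    "domain_knowledge_synthesis":
        "Domain: {domain}\nDesign concept:",
    "problem_driven_synthesis":
        "Problem: {problem}\nDesign concept that solves it:",
    "analogy_driven_synthesis":
        "Source: {source}\nTarget domain: {target}\n"
        "Analogous design concept:",
}

prompt = PROMPTS["problem_driven_synthesis"].format(
    problem="a rolling toy that self-rights after tipping over")
```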
arXiv Detail & Related papers (2022-11-07T11:29:10Z)
- Generative Pre-Trained Transformers for Biologically Inspired Design [13.852758740799452]
This paper proposes a generative design approach based on the pre-trained language model (PLM).
Three types of design concept generators are identified and fine-tuned from the PLM according to the looseness of the problem space representation.
The approach is then tested via a case study in which the fine-tuned models are applied to generate and evaluate lightweight flying car concepts inspired by nature.
arXiv Detail & Related papers (2022-03-31T11:13:22Z)
- Kformer: Knowledge Injection in Transformer Feed-Forward Layers [107.71576133833148]
We propose a novel knowledge fusion model, namely Kformer, which incorporates external knowledge through the feed-forward layer in Transformer.
We empirically find that simply injecting knowledge into the FFN can improve the pre-trained language model's ability and facilitate current knowledge fusion methods.
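The mechanism described here, injecting knowledge through the feed-forward layer, can be pictured by treating the FFN as a key-value memory and letting projected knowledge embeddings participate in it. The module below is a sketch in that spirit, not Kformer's actual implementation; the dimensions and projections are assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeFFN(nn.Module):
    """Sketch of FFN-level knowledge injection in the spirit of Kformer.

    Knowledge embeddings are projected into the FFN's key and value
    spaces, so retrieved knowledge joins the same key-value computation
    as the learned FFN weights. Dimensions are assumptions.
    """

    def __init__(self, d_model: int, d_ff: int, d_know: int):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)        # learned FFN "keys"
        self.w2 = nn.Linear(d_ff, d_model)        # learned FFN "values"
        self.proj_k = nn.Linear(d_know, d_model)  # knowledge -> key space
        self.proj_v = nn.Linear(d_know, d_model)  # knowledge -> value space
        self.act = nn.GELU()

    def forward(self, x, knowledge):
        # x: (batch, seq, d_model); knowledge: (batch, n_know, d_know)
        k = self.proj_k(knowledge)                     # (batch, n_know, d_model)
        v = self.proj_v(knowledge)                     # (batch, n_know, d_model)
        inner = self.act(self.w1(x))                   # standard FFN activation
        know_scores = self.act(x @ k.transpose(1, 2))  # (batch, seq, n_know)
        return self.w2(inner) + know_scores @ v        # fuse both memories
```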
arXiv Detail & Related papers (2022-01-15T03:00:27Z)
- KAT: A Knowledge Augmented Transformer for Vision-and-Language [56.716531169609915]
We propose a novel model - Knowledge Augmented Transformer (KAT) - which achieves a strong state-of-the-art result on the open-domain multimodal task of OK-VQA.
Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation.
An additional benefit of explicit knowledge integration is seen in improved interpretability of model predictions in our analysis.
arXiv Detail & Related papers (2021-12-16T04:37:10Z)
- Knowledge-Grounded Dialogue Generation with Pre-trained Language Models [74.09352261943911]
We study knowledge-grounded dialogue generation with pre-trained language models.
We propose equipping response generation defined by a pre-trained language model with a knowledge selection module.
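The knowledge selection module is described only at a high level; one simple reading is a retriever that scores candidate knowledge sentences against the dialogue context and hands the best one to the generator. The sketch below uses cosine similarity over [CLS] embeddings, which is an assumption rather than the paper's method.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def select_knowledge(context: str, candidates: list[str]) -> str:
    """Return the candidate knowledge sentence closest to the context."""
    def embed(text):
        inputs = tok(text, truncation=True, return_tensors="pt")
        with torch.no_grad():
            return enc(**inputs).last_hidden_state[:, 0]  # [CLS] vector
    ctx = embed(context)
    scores = [torch.cosine_similarity(ctx, embed(c)).item()
              for c in candidates]
    return candidates[scores.index(max(scores))]
```

The selected sentence would then be concatenated with the dialogue history as input to the pre-trained generator.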
arXiv Detail & Related papers (2020-10-17T16:49:43Z)
- Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
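A concrete, minimal instance of such a code analysis task is masked-token prediction over source code with a pre-trained model. The checkpoint below (a masked-language-model variant of CodeBERT) is an assumed stand-in, not necessarily the model used in the paper.

```python
from transformers import pipeline

# Sketch: probe the Software Naturalness hypothesis by letting a
# pre-trained masked language model fill in a masked token in code.
# The checkpoint choice is an assumption.
fill = pipeline("fill-mask", model="microsoft/codebert-base-mlm")
for pred in fill("if x is <mask>:\n    return 0"):
    print(pred["token_str"], pred["score"])
```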
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.