Exploring the Capabilities of Large Language Models for Generating Diverse Design Solutions
- URL: http://arxiv.org/abs/2405.02345v1
- Date: Thu, 2 May 2024 14:20:04 GMT
- Title: Exploring the Capabilities of Large Language Models for Generating Diverse Design Solutions
- Authors: Kevin Ma, Daniele Grandi, Christopher McComb, Kosa Goucher-Lambert
- Abstract summary: This paper explores the efficacy of large language models (LLMs) in producing diverse design solutions.
LLM-generated solutions are compared against 100 human-crowdsourced solutions in each design topic.
Some design topics show a clear divide between human- and LLM-generated solutions, while others show no clear divide.
- Score: 0.8514062145382637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Access to large amounts of diverse design solutions can support designers during the early stage of the design process. In this paper, we explore the efficacy of large language models (LLMs) in producing diverse design solutions, investigating the level of impact that parameter tuning and various prompt engineering techniques can have on the diversity of LLM-generated design solutions. Specifically, LLMs are used to generate a total of 4,000 design solutions across five distinct design topics, eight combinations of parameters, and eight different types of prompt engineering techniques, comparing each combination of parameter and prompt engineering method across four different diversity metrics. LLM-generated solutions are compared against 100 human-crowdsourced solutions in each design topic using the same set of diversity metrics. Results indicate that human-generated solutions consistently have greater diversity scores across all design topics. Using a post hoc logistic regression analysis, we investigate whether these differences primarily exist at the semantic level. Results show that some design topics exhibit a divide between human- and LLM-generated solutions, while others have no clear divide. Taken together, these results contribute to the understanding of LLMs' capabilities in generating a large volume of diverse design solutions and offer insights for future research that leverages LLMs to generate diverse design solutions for a broad range of design tasks (e.g., inspirational stimuli).
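The abstract does not name the four diversity metrics used. As an illustrative sketch only (not the paper's actual method), one common way to score the semantic diversity of a solution set is the mean pairwise cosine distance between embeddings of the solutions:

```python
import numpy as np

def mean_pairwise_cosine_distance(embeddings):
    """Mean pairwise cosine distance over a set of solution embeddings.

    `embeddings` is an (n_solutions, dim) array-like, e.g. sentence
    embeddings of each generated design solution. Higher values
    indicate a more semantically diverse solution set.
    """
    X = np.asarray(embeddings, dtype=float)
    # Normalize rows to unit length so dot products equal cosine similarity.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    # Average cosine distance (1 - similarity) over all distinct pairs.
    iu = np.triu_indices(len(X), k=1)
    return float(np.mean(1.0 - sims[iu]))
```

Under such a metric, a set of near-duplicate solutions scores close to 0, while mutually dissimilar solutions score higher, which is the kind of comparison the abstract describes between LLM-generated and crowdsourced solution pools.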
Related papers
- Meta Reasoning for Large Language Models [58.87183757029041]
We introduce Meta-Reasoning Prompting (MRP), a novel and efficient system prompting method for large language models (LLMs).
MRP guides LLMs to dynamically select and apply different reasoning methods based on the specific requirements of each task.
We evaluate the effectiveness of MRP through comprehensive benchmarks.
arXiv Detail & Related papers (2024-06-17T16:14:11Z)
- Sample Design Engineering: An Empirical Study of What Makes Good Downstream Fine-Tuning Samples for LLMs [23.766782325052418]
This paper introduces Sample Design Engineering (SDE), a methodical approach to enhancing Large Language Models' post-tuning performance.
We conduct a series of in-domain (ID) and out-of-domain (OOD) experiments to assess the impact of various design options on LLMs' downstream performance.
We propose an integrated SDE strategy, combining the most effective options, and validate its consistent superiority over other sample designs in complex downstream tasks.
arXiv Detail & Related papers (2024-04-19T17:47:02Z)
- Creative and Correct: Requesting Diverse Code Solutions from AI Foundation Models [8.40868688916685]
In software engineering tasks, diversity is key to exploring design spaces and fostering creativity.
Our study systematically investigates this trade-off using experiments with HumanEval tasks.
We identify combinations of parameters and strategies that strike an optimal balance between diversity and correctness.
arXiv Detail & Related papers (2024-03-20T02:51:46Z)
- Towards Multi-Objective High-Dimensional Feature Selection via Evolutionary Multitasking [63.91518180604101]
This paper develops a novel evolutionary multitasking (EMT) framework for high-dimensional feature selection problems, termed MO-FSEMT.
A task-specific knowledge transfer mechanism is designed to leverage the advantage information of each task, enabling the discovery and effective transmission of high-quality solutions.
arXiv Detail & Related papers (2024-01-03T06:34:39Z)
- Mixed-Variable Global Sensitivity Analysis For Knowledge Discovery And Efficient Combinatorial Materials Design [15.840852223499303]
We develop the first metamodel-based mixed-variable GSA method.
Through numerical case studies, we validate and demonstrate the effectiveness of our proposed method for mixed-variable problems.
Our method uses sensitivity analysis to navigate optimization in large, many-level design spaces.
arXiv Detail & Related papers (2023-10-23T17:29:53Z)
- On the Performance of Multimodal Language Models [4.677125897916577]
This study conducts a comparative analysis of different multimodal instruction tuning approaches.
We reveal key insights for guiding architectural choices when incorporating multimodal capabilities into large language models.
arXiv Detail & Related papers (2023-10-04T23:33:36Z)
- Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration [83.4031923134958]
Corex is a suite of novel general-purpose strategies that transform Large Language Models into autonomous agents.
Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes.
We demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods.
arXiv Detail & Related papers (2023-09-30T07:11:39Z)
- Conceptual Design Generation Using Large Language Models [0.34998703934432673]
Large Language Models (LLMs) can generate seemingly creative outputs from textual prompts.
In this paper, we leverage LLMs to generate solutions for a set of 12 design problems and compare them to a baseline of crowdsourced solutions.
Expert evaluations indicate that the LLM-generated solutions have higher average feasibility and usefulness.
We experiment with prompt engineering and find that leveraging few-shot learning can lead to the generation of solutions that are more similar to the crowdsourced solutions.
arXiv Detail & Related papers (2023-05-30T19:32:39Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high performance and energy efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- Discovering Diverse Solutions in Deep Reinforcement Learning [84.45686627019408]
Reinforcement learning algorithms are typically limited to learning a single solution of a specified task.
We propose an RL method that can learn infinitely many solutions by training a policy conditioned on a continuous or discrete low-dimensional latent variable.
arXiv Detail & Related papers (2021-03-12T04:54:31Z)
- Pareto Multi-Task Learning [53.90732663046125]
Multi-task learning is a powerful method for solving multiple correlated tasks simultaneously.
It is often impossible to find one single solution to optimize all the tasks, since different tasks might conflict with each other.
Recently, a novel method was proposed to find a single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multi-objective optimization.
arXiv Detail & Related papers (2019-12-30T08:58:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.