AI-Augmented Brainwriting: Investigating the use of LLMs in group
ideation
- URL: http://arxiv.org/abs/2402.14978v2
- Date: Thu, 29 Feb 2024 22:47:21 GMT
- Title: AI-Augmented Brainwriting: Investigating the use of LLMs in group
ideation
- Authors: Orit Shaer, Angelora Cooper, Osnat Mokryn, Andrew L. Kun, Hagit Ben
Shoshan
- Abstract summary: Generative AI technologies such as large language models (LLMs) have significant implications for creative work.
This paper explores two aspects of integrating LLMs into the creative process - the divergence stage of idea generation, and the convergence stage of evaluation and selection of ideas.
We devised a collaborative group-AI Brainwriting ideation framework, which incorporated an LLM as an enhancement into the group ideation process.
- Score: 11.503226612030316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing availability of generative AI technologies such as large language
models (LLMs) has significant implications for creative work. This paper
explores two aspects of integrating LLMs into the creative process - the
divergence stage of idea generation, and the convergence stage of evaluating
and selecting ideas. We devised a collaborative group-AI Brainwriting
ideation framework, which incorporated an LLM as an enhancement into the group
ideation process, and evaluated both the idea generation process and the resulting
solution space. To assess the potential of using LLMs in the idea evaluation
process, we designed an evaluation engine and compared its ratings to idea ratings
assigned by three expert and six novice evaluators. Our findings suggest that
integrating an LLM into Brainwriting can enhance both the ideation process and its
outcome. We also provide evidence that LLMs can support idea evaluation. We
conclude by discussing implications for HCI education and practice.
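The paper's evaluation engine is not described in detail here; as a minimal illustrative sketch of the kind of comparison the abstract mentions (LLM-assigned idea ratings versus human evaluator ratings), one could compute their correlation. All names and ratings below are hypothetical, not from the study:

```python
# Minimal sketch (hypothetical data): measuring agreement between
# LLM-assigned and human-assigned idea ratings on a 1-5 scale.

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings for six ideas.
llm_ratings = [4, 2, 5, 3, 4, 1]
expert_ratings = [5, 2, 4, 3, 4, 2]

r = pearson(llm_ratings, expert_ratings)
print(f"LLM-expert rating correlation: r = {r:.2f}")
```

A high positive correlation would suggest the LLM's ratings track human judgment; in practice one would also want agreement measures that account for chance, such as Cohen's kappa on binned ratings.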
Related papers
- Facilitating Holistic Evaluations with LLMs: Insights from Scenario-Based Experiments [0.32634122554914]
Adequate discussion is essential to integrate varied assessments.
Deriving an average score without discussion undermines the purpose of a holistic evaluation.
This paper explores the use of a Large Language Model (LLM) as a facilitator to integrate diverse faculty assessments.
arXiv Detail & Related papers (2024-05-28T01:07:06Z)
- Decompose and Aggregate: A Step-by-Step Interpretable Evaluation Framework [75.81096662788254]
Large Language Models (LLMs) are scalable and economical evaluators.
The question of how reliable these evaluators are has emerged as a crucial research question.
We propose Decompose and Aggregate, which breaks down the evaluation process into different stages based on pedagogical practices.
arXiv Detail & Related papers (2024-05-24T08:12:30Z)
- LLM Discussion: Enhancing the Creativity of Large Language Models via Discussion Framework and Role-Play [43.55248812883912]
Large language models (LLMs) have shown exceptional proficiency in natural language processing but often fall short of generating creative and original responses to open-ended questions.
We propose LLM Discussion, a three-phase discussion framework that facilitates vigorous and diverging idea exchanges and ensures convergence to creative answers.
We evaluate the efficacy of the proposed framework with the Alternative Uses Test, Similarities Test, Instances Test, and Scientific Creativity Test.
arXiv Detail & Related papers (2024-05-10T10:19:14Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvements of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [57.747888532651]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Assessing and Understanding Creativity in Large Language Models [33.37237667182931]
This paper aims to establish an efficient framework for assessing the level of creativity in large language models (LLMs).
By adapting the Torrance Tests of Creative Thinking, the research evaluates the creative performance of various LLMs across 7 tasks.
We found that LLMs fall short primarily in originality, while excelling in elaboration.
arXiv Detail & Related papers (2024-01-23T05:19:47Z)
- Evaluating Large Language Models at Evaluating Instruction Following [54.49567482594617]
We introduce a challenging meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM evaluator in discerning instruction-following outputs.
We discover that different evaluators exhibit distinct performance on LLMBar and even the highest-scoring ones have substantial room for improvement.
arXiv Detail & Related papers (2023-10-11T16:38:11Z)
- Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration [83.4031923134958]
Corex is a suite of novel general-purpose strategies that transform Large Language Models into autonomous agents.
Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes.
We demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods.
arXiv Detail & Related papers (2023-09-30T07:11:39Z)
- "It Felt Like Having a Second Mind": Investigating Human-AI Co-creativity in Prewriting with Large Language Models [20.509651636971864]
This study investigates human-LLM collaboration patterns and dynamics during prewriting.
During collaborative prewriting, there appears to be a three-stage iterative Human-AI Co-creativity process.
arXiv Detail & Related papers (2023-07-20T16:55:25Z)
- Iterative Forward Tuning Boosts In-Context Learning in Language Models [88.25013390669845]
In this study, we introduce a novel two-stage framework to boost in-context learning in large language models (LLMs).
Specifically, our framework delineates the ICL process into two distinct stages: Deep-Thinking and test stages.
The Deep-Thinking stage incorporates a unique attention mechanism, i.e., iterative enhanced attention, which enables multiple rounds of information accumulation.
arXiv Detail & Related papers (2023-05-22T13:18:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.