Enhancing Creativity in Large Language Models through Associative Thinking Strategies
- URL: http://arxiv.org/abs/2405.06715v1
- Date: Thu, 9 May 2024 16:42:29 GMT
- Title: Enhancing Creativity in Large Language Models through Associative Thinking Strategies
- Authors: Pronita Mehrotra, Aishni Parab, Sumit Gulwani
- Abstract summary: Associative thinking strategies have been found to help humans boost creativity.
We investigate whether prompting Large Language Models to connect disparate concepts can augment their creative outputs.
Our findings show that leveraging associative thinking techniques can significantly improve the originality of vGPT-4's responses.
- Score: 9.09055730592338
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper explores the enhancement of creativity in Large Language Models (LLMs) like vGPT-4 through associative thinking, a cognitive process where creative ideas emerge from linking seemingly unrelated concepts. Associative thinking strategies have been found to effectively help humans boost creativity. However, whether the same strategies can help LLMs become more creative remains under-explored. In this work, we investigate whether prompting LLMs to connect disparate concepts can augment their creative outputs. Focusing on three domains -- Product Design, Storytelling, and Marketing -- we introduce creativity tasks designed to assess vGPT-4's ability to generate original and useful content. By challenging the models to form novel associations, we evaluate the potential of associative thinking to enhance the creative capabilities of LLMs. Our findings show that leveraging associative thinking techniques can significantly improve the originality of vGPT-4's responses.
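As a concrete illustration of the prompting approach described in the abstract, the sketch below builds an associative prompt that asks a chat model to link an everyday object with an unrelated concept before producing a product-design idea. This is a minimal sketch assuming access to the OpenAI chat-completions API; the prompt wording, the concept pair, and the helper name `associative_prompt` are illustrative assumptions, not the paper's actual prompts.

```python
# Illustrative sketch of associative-thinking prompting: ask a chat model to
# connect deliberately unrelated concepts before generating a creative output.
# The prompt text and concept pair are assumptions, not the paper's prompts.
from openai import OpenAI  # assumes the `openai` Python package (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def associative_prompt(task: str, base_object: str, unrelated_concept: str) -> str:
    """Build a prompt that forces an association between two disparate ideas."""
    return (
        f"Task: {task}.\n"
        f"First, list three surprising links between '{base_object}' and "
        f"'{unrelated_concept}'. Then use those links to produce one original, "
        f"useful design, and explain how each link made the idea more novel."
    )


prompt = associative_prompt(
    task="Product Design: redesign an everyday umbrella",
    base_object="umbrella",
    unrelated_concept="coral reef",
)

response = client.chat.completions.create(
    model="gpt-4",  # stand-in for the paper's vGPT-4 deployment
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Analogous prompts can be constructed for the Storytelling and Marketing domains by swapping in a different task description and concept pair.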
Related papers
- A Causality-aware Paradigm for Evaluating Creativity of Multimodal Large Language Models [100.16387798660833]
The Oogiri game is a creativity-driven task requiring humor and associative thinking.
LoTbench is an interactive, causality-aware evaluation framework.
Results show that while most LLMs exhibit constrained creativity, the performance gap between LLMs and humans is not insurmountable.
arXiv Detail & Related papers (2025-01-25T09:11:15Z)
- Imagine while Reasoning in Space: Multimodal Visualization-of-Thought [70.74453180101365]
Chain-of-Thought (CoT) prompting has proven highly effective for enhancing complex reasoning in Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs).
We propose a new reasoning paradigm, Multimodal Visualization-of-Thought (MVoT).
It enables visual thinking in MLLMs by generating image visualizations of their reasoning traces.
arXiv Detail & Related papers (2025-01-13T18:23:57Z)
- LLMs can Realize Combinatorial Creativity: Generating Creative Ideas via LLMs for Scientific Research [5.564972490390789]
We present a framework that explicitly implements combinatorial creativity theory using Large Language Models (LLMs).
The framework features a generalization-level retrieval system for cross-domain knowledge discovery and a structured process for idea generation.
Experiments on the OAG-Bench dataset demonstrate our framework's effectiveness, consistently outperforming baseline approaches in generating ideas that align with real research developments.
arXiv Detail & Related papers (2024-12-18T18:41:14Z)
- A Framework for Collaborating a Large Language Model Tool in Brainstorming for Triggering Creative Thoughts [2.709166684084394]
This study proposes a framework called GPS, which employs goals, prompts, and strategies to guide designers to systematically work with an LLM tool for improving the creativity of ideas generated during brainstorming.
Our framework, tested through a design example and a case study, demonstrates its effectiveness in stimulating creativity and the seamless integration of the LLM tool into design practices.
arXiv Detail & Related papers (2024-10-10T13:39:27Z)
- Benchmarking Language Model Creativity: A Case Study on Code Generation [39.546827184857754]
In this work, we introduce a framework for quantifying LLM creativity.
We define NEOGAUGE, a metric that quantifies both convergent and divergent thinking in the generated creative responses.
We test the proposed framework on Codeforces problems, which serve as both a natural dataset for coding tasks and a collection of prior human solutions.
arXiv Detail & Related papers (2024-07-12T05:55:22Z)
- Divergent Creativity in Humans and Large Language Models [37.67363469600804]
The recent surge in the capabilities of Large Language Models has led to claims that they are approaching a level of creativity akin to human capabilities.
We leverage recent advances in creativity science to build a framework for in-depth analysis of divergent creativity in both state-of-the-art LLMs and a substantial dataset of 100,000 humans.
arXiv Detail & Related papers (2024-05-13T22:37:52Z)
- Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models [71.93366651585275]
Large language models (LLMs) have exhibited impressive performance in language comprehension and various reasoning tasks.
We propose Visualization-of-Thought (VoT) to elicit spatial reasoning of LLMs by visualizing their reasoning traces.
VoT significantly enhances the spatial reasoning abilities of LLMs.
arXiv Detail & Related papers (2024-04-04T17:45:08Z)
- Assessing and Understanding Creativity in Large Language Models [33.37237667182931]
This paper aims to establish an efficient framework for assessing the level of creativity in large language models (LLMs).
By adapting the Torrance Tests of Creative Thinking, the research evaluates the creative performance of various LLMs across 7 tasks.
We find that LLMs primarily fall short in originality while excelling in elaboration.
arXiv Detail & Related papers (2024-01-23T05:19:47Z)
- Can AI Be as Creative as Humans? [84.43873277557852]
We prove in theory that AI can be as creative as humans under the condition that it can properly fit the data generated by human creators.
The debate on AI's creativity is thus reduced to the question of its ability to fit a sufficient amount of data.
arXiv Detail & Related papers (2024-01-03T08:49:12Z)
- Telling Creative Stories Using Generative Visual Aids [52.623545341588304]
We asked writers to write creative stories from a starting prompt, and provided them with visuals created by generative AI models from the same prompt.
Compared to a control group, writers who used the visuals as story writing aid wrote significantly more creative, original, complete and visualizable stories.
Findings indicate that cross-modality inputs from AI can benefit divergent aspects of creativity in human-AI co-creation but hinder convergent thinking.
arXiv Detail & Related papers (2021-10-27T23:13:47Z)
- Explaining Creative Artifacts [69.86890599471202]
We develop an inverse problem formulation to deconstruct the products of combinatorial and compositional creativity into associative chains.
In particular, our formulation is structured as solving a traveling salesman problem through a knowledge graph of associative elements (see the illustrative sketch after this list).
arXiv Detail & Related papers (2020-10-14T14:32:38Z)
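For intuition about the traveling-salesman formulation mentioned in the last entry above, the following sketch brute-forces the cheapest chain through a tiny, hand-made graph of association costs. The concepts, costs, and helper names are illustrative assumptions, not the cited paper's knowledge graph or algorithm.

```python
# Illustrative sketch (not the authors' code): cast an associative chain between
# concepts as a traveling-salesman-style shortest path over a small, hypothetical
# knowledge graph with hand-picked association costs (lower = more associated).
from itertools import permutations

cost = {
    ("ocean", "storm"): 1, ("storm", "violin"): 4,
    ("violin", "loneliness"): 2, ("ocean", "violin"): 5,
    ("ocean", "loneliness"): 3, ("storm", "loneliness"): 2,
}


def edge(a: str, b: str) -> int:
    """Symmetric lookup of the association cost between two concepts."""
    return cost.get((a, b), cost.get((b, a), 10))


def best_chain(concepts: list[str]) -> tuple[str, ...]:
    """Return the cheapest ordering that visits every concept once (brute force)."""
    return min(
        permutations(concepts),
        key=lambda p: sum(edge(p[i], p[i + 1]) for i in range(len(p) - 1)),
    )


print(best_chain(["ocean", "storm", "violin", "loneliness"]))
# -> ('ocean', 'storm', 'loneliness', 'violin') with the costs above (total cost 5)
```

A real system would of course replace the brute-force search with an approximate TSP solver over a much larger knowledge graph, but the formulation is the same: an explanation of a creative artifact is a low-cost chain of associations connecting its elements.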
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed details) and is not responsible for any consequences of its use.