Learning interactions to boost human creativity with bandits and GPT-4
- URL: http://arxiv.org/abs/2311.10127v1
- Date: Thu, 16 Nov 2023 16:53:17 GMT
- Title: Learning interactions to boost human creativity with bandits and GPT-4
- Authors: Ara Vartanian, Xiaoxi Sun, Yun-Shiuan Chuang, Siddharth Suresh,
Xiaojin Zhu, Timothy T. Rogers
- Abstract summary: We employ a psychological task that demonstrates limits on human creativity, namely semantic feature generation.
In experiments with humans and with a language AI (GPT-4) we contrast behavior in the standard task versus a variant in which participants can ask for algorithmically-generated hints.
Humans and the AI show similar benefits from hints, and remarkably, bandits learning from AI responses prefer the same prompting strategy as those learning from human behavior.
- Score: 10.817205577415434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers how interactions with AI algorithms can boost human
creative thought. We employ a psychological task that demonstrates limits on
human creativity, namely semantic feature generation: given a concept name,
respondents must list as many of its features as possible. Human participants
typically produce only a fraction of the features they know before getting
"stuck." In experiments with humans and with a language AI (GPT-4) we contrast
behavior in the standard task versus a variant in which participants can ask
for algorithmically-generated hints. Algorithm choice is administered by a
multi-armed bandit whose reward indicates whether the hint helped generate
more features. Humans and the AI show similar benefits from hints, and
remarkably, bandits learning from AI responses prefer the same prompting
strategy as those learning from human behavior. The results suggest that
strategies for boosting human creativity via computer interactions can be
learned by bandits run on groups of simulated participants.
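The bandit mechanism described in the abstract can be sketched roughly as follows. This is a minimal illustration, assuming a Bernoulli reward (1 if the hint helped the participant generate more features, else 0) and Thompson sampling over prompting strategies; the strategy names are hypothetical, and the paper's actual bandit algorithm and arm set may differ.

```python
import random

class BernoulliThompsonBandit:
    """Thompson-sampling bandit over hint-prompting strategies.

    Each arm is one prompting strategy; the reward is 1 if the hint
    helped the participant generate more features, and 0 otherwise.
    """

    def __init__(self, arms):
        self.arms = list(arms)
        # Beta(1, 1) prior per arm, updated with observed rewards.
        self.alpha = {a: 1 for a in self.arms}  # successes + 1
        self.beta = {a: 1 for a in self.arms}   # failures + 1

    def choose(self):
        # Sample a success probability for each arm; play the best sample.
        samples = {a: random.betavariate(self.alpha[a], self.beta[a])
                   for a in self.arms}
        return max(samples, key=samples.get)

    def update(self, arm, helped):
        # helped: True if the hint led to more features being produced.
        if helped:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1

# Hypothetical strategy names, for illustration only.
bandit = BernoulliThompsonBandit(["category-hint", "part-hint", "usage-hint"])
arm = bandit.choose()          # pick a prompting strategy for this trial
bandit.update(arm, helped=True)  # record whether the hint helped
```

Because the reward only needs a binary "did the hint help" signal, the same loop can be run over human participants or over simulated GPT-4 participants, which is what lets strategies learned on one population transfer to the other.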
Related papers
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning algorithm for a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - Fiper: a Visual-based Explanation Combining Rules and Feature Importance [3.2982707161882967]
Explainable Artificial Intelligence aims to design tools and techniques to illustrate the predictions of the so-called black-box algorithms.
This paper proposes a visual-based method to illustrate rules paired with feature importance.
arXiv Detail & Related papers (2024-04-25T09:15:54Z) - Dynamic Explanation Emphasis in Human-XAI Interaction with Communication Robot [2.6396287656676725]
DynEmph is a method for a communication robot to decide where to emphasize XAI-generated explanations with physical expressions.
It predicts the effect of emphasizing certain points on a user and aims to minimize the expected difference between predicted user decisions and AI-suggested ones.
arXiv Detail & Related papers (2024-03-21T16:50:12Z) - Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration [116.09561564489799]
Solo Performance Prompting transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.
A cognitive synergist is an intelligent agent that collaboratively combines multiple minds' strengths and knowledge to enhance problem-solving in complex tasks.
Our in-depth analysis shows that assigning multiple fine-grained personas in LLMs improves problem-solving abilities compared to using a single or fixed number of personas.
arXiv Detail & Related papers (2023-07-11T14:45:19Z) - BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z) - Instructive artificial intelligence (AI) for human training, assistance, and explainability [0.24629531282150877]
We show how a neural network might instruct human trainees as an alternative to traditional approaches to explainable AI (XAI).
An AI examines human actions and calculates variations on the human strategy that lead to better performance.
Results will be presented on AI instruction's ability to improve human decision-making and human-AI teaming in Hanabi.
arXiv Detail & Related papers (2021-11-02T16:46:46Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z) - Learning Human Rewards by Inferring Their Latent Intelligence Levels in Multi-Agent Games: A Theory-of-Mind Approach with Application to Driving Data [18.750834997334664]
We argue that humans are bounded rational and have different intelligence levels when reasoning about others' decision-making process.
We propose a new multi-agent Inverse Reinforcement Learning framework that reasons about humans' latent intelligence levels during learning.
arXiv Detail & Related papers (2021-03-07T07:48:31Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Cognitive Anthropomorphism of AI: How Humans and Computers Classify
Images [0.0]
Humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence.
This mismatch presents an obstacle to appropriate human-AI interaction.
I offer three strategies for system design that can address the mismatch between human and AI classification.
arXiv Detail & Related papers (2020-02-07T21:49:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.