Beyond Following: Mixing Active Initiative into Computational Creativity
- URL: http://arxiv.org/abs/2409.16291v1
- Date: Fri, 6 Sep 2024 18:56:08 GMT
- Title: Beyond Following: Mixing Active Initiative into Computational Creativity
- Authors: Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, Mark Riedl
- Abstract summary: This study investigates the influence of an active and learning AI agent on creators' expectancy of creative responsibilities.
We develop a multi-armed-bandit agent that learns from the human creator, updates its collaborative decision-making beliefs, and switches between its capabilities.
- Score: 7.366868731714772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Artificial Intelligence (AI) encounters limitations in efficiency and fairness within the realm of Procedural Content Generation (PCG) when human creators solely drive and bear responsibility for the generative process. Alternative setups, such as Mixed-Initiative Co-Creative (MI-CC) systems, have shown promise. Still, the potential of an active mixed initiative, where the AI takes a role beyond following, is understudied. This work investigates the influence of the adaptive ability of an active, learning AI agent on creators' expectancy of creative responsibilities in an MI-CC setting. We built and studied a system that employs reinforcement learning (RL) methods to learn a human user's creative-responsibility preferences during online interactions. Situated in story co-creation, we develop a multi-armed-bandit agent that learns from the human creator, updates its collaborative decision-making beliefs, and switches between its capabilities during an MI-CC experience. In a human-subject study with 39 participants, our system's learning capability was well recognized relative to the non-learning ablation, corresponding to a significant increase in overall satisfaction with the MI-CC experience. These findings indicate a robust association between effective MI-CC collaborative interactions, particularly the implementation of proactive AI initiatives, and deepened understanding among all participants.
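The abstract describes the bandit agent only at a high level, so the following is a minimal sketch of how such a capability-switching multi-armed bandit could be structured. Everything here is an illustrative assumption rather than the authors' implementation: the capability names, the Beta-Bernoulli posterior with Thompson sampling, and the binary accept/reject feedback signal.

```python
import random

class CapabilityBandit:
    """Thompson-sampling bandit over collaborative capabilities.

    Each arm is one way the agent can take initiative in the
    co-creative loop; a Beta posterior over each arm's acceptance
    rate is updated from binary human feedback. (Illustrative
    sketch; not the paper's implementation.)
    """

    def __init__(self, capabilities):
        # Beta(1, 1) prior: no belief yet about any capability.
        self.posterior = {c: [1, 1] for c in capabilities}

    def choose(self):
        # Sample an acceptance rate from each arm's posterior and
        # take initiative with the highest-sampled capability.
        samples = {c: random.betavariate(a, b)
                   for c, (a, b) in self.posterior.items()}
        return max(samples, key=samples.get)

    def update(self, capability, accepted):
        # accepted = 1 if the human kept the contribution, else 0.
        a, b = self.posterior[capability]
        self.posterior[capability] = [a + accepted, b + 1 - accepted]

def simulated_feedback(capability):
    # Stand-in for the human creator (hypothetical preferences):
    # accepts plot twists 70% of the time, anything else 30%.
    p = 0.7 if capability == "suggest_twist" else 0.3
    return 1 if random.random() < p else 0

# Hypothetical capability set for story co-creation.
agent = CapabilityBandit(["continue_story", "suggest_twist", "ask_question"])
for _ in range(200):
    capability = agent.choose()
    agent.update(capability, simulated_feedback(capability))
print(agent.posterior)  # mass should concentrate on suggest_twist
```

Thompson sampling is one natural fit for the online-interaction setting the abstract describes, since it balances exploring under-tried capabilities against exploiting those the creator has already welcomed; the paper only states that RL and bandit methods are used, so the specific algorithm choice above is an assumption.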
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM), a crucial capability for understanding others, significantly impacts human collaboration and communication.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Empowering Large Language Model Agents through Action Learning [85.39581419680755]
Large Language Model (LLM) agents have recently garnered increasing interest, yet they are limited in their ability to learn from trial and error.
We argue that the capacity to learn new actions from experience is fundamental to the advancement of learning in LLM agents.
We introduce LearnAct, a framework with an iterative learning strategy to create and improve actions in the form of Python functions.
arXiv Detail & Related papers (2024-02-24T13:13:04Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- Modeling Resilience of Collaborative AI Systems [1.869472599236422]
A Collaborative Artificial Intelligence System (CAIS) performs actions in collaboration with humans to achieve a common goal.
CAISs can use a trained AI model to control human-system interaction, or they can use human interaction to dynamically learn from humans in an online fashion.
In online learning with human feedback, the AI model evolves by monitoring human interaction through the system sensors in the learning state.
Any disruptive event affecting these sensors may affect the AI model's ability to make accurate decisions and degrade the CAIS performance.
arXiv Detail & Related papers (2024-01-23T10:28:33Z)
- Collaborative Learning with Artificial Intelligence Speakers (CLAIS): Pre-Service Elementary Science Teachers' Responses to the Prototype [0.5113447003407372]
The CLAIS system is designed to have three to four human learners join an AI speaker to form a small group, in which humans and AI are treated as peers participating in the Jigsaw learning process.
The CLAIS system was successfully implemented in a Science Education course session with 15 pre-service elementary science teachers.
arXiv Detail & Related papers (2023-12-20T01:19:03Z)
- Progressively Efficient Learning [58.6490456517954]
We develop a novel learning framework named Communication-Efficient Interactive Learning (CEIL).
CEIL leads to the emergence of a human-like pattern in which the learner and the teacher communicate efficiently by exchanging increasingly abstract intentions.
Agents trained with CEIL quickly master new tasks, outperforming non-hierarchical and hierarchical imitation learning by up to 50% and 20% in absolute success rate.
arXiv Detail & Related papers (2023-10-13T07:52:04Z)
- Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models [1.0742675209112622]
Multi-Agent Systems (MAS) are critical for many applications requiring collaboration and coordination with humans.
One major challenge is the simultaneous learning and interaction of independent agents in dynamic environments.
We propose three variants of Multi-Agent Instance-Based Learning (MAIBL) models.
We demonstrate that the MAIBL models learn faster and achieve better coordination than current MADRL models in a dynamic CMOTP task under various reward settings.
arXiv Detail & Related papers (2023-08-18T00:39:06Z)
- Team Learning as a Lens for Designing Human-AI Co-Creative Systems [12.24664973838839]
Generative, ML-driven interactive systems have the potential to change how people interact with computers in creative processes.
It is still unclear how we might achieve effective human-AI collaboration in open-ended task domains.
arXiv Detail & Related papers (2022-07-06T22:11:13Z)
- Towards Effective Human-AI Collaboration in GUI-Based Interactive Task Learning Agents [29.413358312233253]
We argue that a key challenge in enabling usable and useful interactive task learning for intelligent agents is to facilitate effective Human-AI collaboration.
We reflect on our past five years of effort in designing, developing, and studying the SUGILITE system.
arXiv Detail & Related papers (2020-03-05T14:12:19Z)