Investigating Agency of LLMs in Human-AI Collaboration Tasks
- URL: http://arxiv.org/abs/2305.12815v2
- Date: Thu, 8 Feb 2024 02:22:53 GMT
- Title: Investigating Agency of LLMs in Human-AI Collaboration Tasks
- Authors: Ashish Sharma, Sudha Rao, Chris Brockett, Akanksha Malhotra, Nebojsa
Jojic, Bill Dolan
- Abstract summary: We build on social-cognitive theory to develop a framework of features through which Agency is expressed in dialogue.
We collect a new dataset of 83 human-human collaborative interior design conversations.
- Score: 24.562034082480608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Agency, the capacity to proactively shape events, is central to how humans
interact and collaborate. While LLMs are being developed to simulate human
behavior and serve as human-like agents, little attention has been given to the
Agency that these models should possess in order to proactively manage the
direction of interaction and collaboration. In this paper, we investigate
Agency as a desirable function of LLMs, and how it can be measured and managed.
We build on social-cognitive theory to develop a framework of features through
which Agency is expressed in dialogue - indicating what you intend to do
(Intentionality), motivating your intentions (Motivation), having self-belief
in intentions (Self-Efficacy), and being able to self-adjust (Self-Regulation).
We collect a new dataset of 83 human-human collaborative interior design
conversations containing 908 conversational snippets annotated for Agency
features. Using this dataset, we develop methods for measuring Agency of LLMs.
Automatic and human evaluations show that models that manifest features
associated with high Intentionality, Motivation, Self-Efficacy, and
Self-Regulation are more likely to be perceived as strongly agentive.
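To make the measurement idea above concrete, the following is a minimal sketch assuming a prompt-based rater: an LLM is asked to label a conversational snippet as low/medium/high on each of the four Agency features. The prompt wording, the label scale, and the call_llm stub are illustrative assumptions, not the paper's released method or its annotation scheme.

```python
# Minimal, hypothetical sketch of scoring a conversational snippet for the four
# Agency features named in the abstract (Intentionality, Motivation,
# Self-Efficacy, Self-Regulation). Prompt wording, label scale, and the
# `call_llm` stub are assumptions for illustration, not the paper's method.

AGENCY_FEATURES = ["Intentionality", "Motivation", "Self-Efficacy", "Self-Regulation"]

PROMPT_TEMPLATE = (
    "You are annotating a collaborative interior-design conversation.\n"
    "Snippet:\n{snippet}\n\n"
    "For the feature '{feature}', answer with a single label: low, medium, or high."
)


def call_llm(prompt: str) -> str:
    """Stand-in for an actual LLM call (any chat-completion client would do).

    Returns a fixed label here so the sketch runs end to end with no
    external dependencies; swap in a real client to get genuine ratings.
    """
    return "medium"


def score_agency(snippet: str) -> dict[str, str]:
    """Rate one conversational snippet on each Agency feature."""
    return {
        feature: call_llm(PROMPT_TEMPLATE.format(snippet=snippet, feature=feature)).strip().lower()
        for feature in AGENCY_FEATURES
    }


if __name__ == "__main__":
    example = "Designer: I'd like to try a warm palette here, because it suits the morning light."
    print(score_agency(example))
```

Labels produced this way could, in principle, be aggregated across snippets to compare how strongly different models express each feature; the aggregation and any thresholds are left open here.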
Related papers
- Spontaneous Emergence of Agent Individuality through Social Interactions in LLM-Based Communities [0.0]
We study the emergence of agency from scratch by using Large Language Model (LLM)-based agents.
By analyzing this multi-agent simulation, we report valuable new insights into how social norms, cooperation, and personality traits can emerge spontaneously.
arXiv Detail & Related papers (2024-11-05T16:49:33Z) - Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM), the capability to understand others, significantly shapes human collaboration and communication.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z) - Transforming Agency. On the mode of existence of Large Language Models [0.0]
This paper investigates the ontological characterization of Large Language Models (LLMs) like ChatGPT.
We argue that ChatGPT should be characterized as an interlocutor or linguistic automaton, a library-that-talks, devoid of (autonomous) agency.
arXiv Detail & Related papers (2024-07-15T14:01:35Z) - WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks [85.95607119635102]
Large language models (LLMs) can mimic human-like intelligence.
WorkArena++ is designed to evaluate the planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding abilities of web agents.
arXiv Detail & Related papers (2024-07-07T07:15:49Z) - Large Language Model-based Human-Agent Collaboration for Complex Task
Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process; a minimal sketch of this intervention-timing idea appears after this list.
arXiv Detail & Related papers (2024-02-20T11:03:36Z) - MetaAgents: Simulating Interactions of Human Behaviors for LLM-based
Task-oriented Coordination via Collaborative Generative Agents [27.911816995891726]
We introduce collaborative generative agents, endowing LLM-based Agents with consistent behavior patterns and task-solving abilities.
We propose a novel framework that equips collaborative generative agents with human-like reasoning abilities and specialized skills.
Our work provides valuable insights into the role and evolution of Large Language Models in task-oriented social simulations.
arXiv Detail & Related papers (2023-10-10T10:17:58Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - Building Cooperative Embodied Agents Modularly with Large Language
Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z) - CAMEL: Communicative Agents for "Mind" Exploration of Large Language
Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
This framework offers a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z) - ToM2C: Target-oriented Multi-agent Communication and Cooperation with
Theory of Mind [18.85252946546942]
Theory of Mind (ToM) is key to building socially intelligent agents that can communicate and cooperate effectively.
We demonstrate the idea in two typical target-oriented multi-agent tasks: cooperative navigation and multi-sensor target coverage.
arXiv Detail & Related papers (2021-10-15T18:29:55Z) - Balancing Performance and Human Autonomy with Implicit Guidance Agent [8.071506311915396]
We show that implicit guidance is effective for enabling humans to maintain a balance between improving their plans and retaining autonomy.
We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms.
arXiv Detail & Related papers (2021-09-01T14:47:29Z)
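As a companion illustration for the ReHAC entry above, here is a hypothetical sketch of an intervention-timing policy: at each task-solving step a simple logistic rule decides whether to defer to a human, trading task success against human effort. The state features, weights, reward shaping, and absence of any learning loop are assumptions made for illustration and do not describe the ReHAC implementation.

```python
import math
import random

# Hypothetical illustration of intervention timing in the spirit of ReHAC:
# at each task-solving step, decide whether to ask a human to intervene.
# The features, weights, and reward trade-off below are invented for this
# sketch and do not reflect the ReHAC paper's actual design or training.

def intervention_probability(step_difficulty: float, agent_confidence: float,
                             w_difficulty: float = 2.0, w_confidence: float = -3.0,
                             bias: float = -0.5) -> float:
    """Logistic policy: harder steps and lower agent confidence favor human help."""
    logit = bias + w_difficulty * step_difficulty + w_confidence * agent_confidence
    return 1.0 / (1.0 + math.exp(-logit))


def rollout(steps, human_cost: float = 0.2) -> float:
    """Simulate one episode; reward trades task progress against human effort."""
    reward = 0.0
    for difficulty, confidence in steps:
        ask_human = random.random() < intervention_probability(difficulty, confidence)
        success_prob = 0.95 if ask_human else confidence
        reward += 1.0 if random.random() < success_prob else 0.0
        reward -= human_cost if ask_human else 0.0
    return reward


if __name__ == "__main__":
    episode = [(0.8, 0.3), (0.2, 0.9), (0.6, 0.5)]  # (difficulty, agent confidence) per step
    print(f"episode reward: {rollout(episode):.2f}")
```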
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences arising from its use.