One Agent Too Many: User Perspectives on Approaches to Multi-agent Conversational AI
- URL: http://arxiv.org/abs/2401.07123v1
- Date: Sat, 13 Jan 2024 17:30:57 GMT
- Title: One Agent Too Many: User Perspectives on Approaches to Multi-agent Conversational AI
- Authors: Christopher Clarke, Karthik Krishnamurthy, Walter Talamonti, Yiping Kang, Lingjia Tang, Jason Mars
- Abstract summary: We show that users have a significant preference for abstracting agent orchestration in both system usability and system performance.
We demonstrate that this mode of interaction is able to provide quality responses that are rated within 1% of human-selected answers.
- Score: 10.825570464035872
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conversational agents have been gaining increasing popularity in recent
years. Influenced by the widespread adoption of task-oriented agents such as
Apple Siri and Amazon Alexa, these agents are being deployed into various
applications to enhance user experience. Although these agents promote "ask me
anything" functionality, they are typically built around a single area of
expertise or a finite set of them. Because complex tasks often require more
than one area of expertise, users end up needing to learn and adopt multiple
agents. One approach to alleviating this is to abstract the orchestration of
agents in the background. However, this removes choice and flexibility,
potentially harming users' ability to complete tasks. In this paper, we
explore these two interaction experiences for conversational AI: one agent
for all versus user choice of agents. We design prototypes for each and
systematically evaluate their ability to facilitate task completion. Through
a series of user studies, we show that users significantly prefer abstracted
agent orchestration in terms of both system usability and system
performance. Additionally, we demonstrate that this mode of interaction
is able to provide quality responses that are rated within 1% of human-selected
answers.
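The paper's core comparison, abstracted orchestration versus explicit user choice of agents, can be sketched as a toy dispatcher. The agents, their trigger rules, and the fallback message below are illustrative assumptions for this sketch, not the authors' actual system.

```python
# Toy sketch of the two interaction modes compared in the paper.
# The agents and their keyword-trigger rules are illustrative assumptions.

def weather_agent(question):
    return "It will be sunny." if "weather" in question else None

def music_agent(question):
    return "Playing your playlist." if "play" in question else None

AGENTS = {"weather": weather_agent, "music": music_agent}

def orchestrated(question):
    """One-agent-for-all mode: agent selection is hidden from the user."""
    for agent in AGENTS.values():
        answer = agent(question)
        if answer is not None:
            return answer
    return "Sorry, no agent could answer that."

def user_choice(question, chosen_agent):
    """User-choice mode: the user must pick the right agent themselves."""
    answer = AGENTS[chosen_agent](question)
    return answer or "The chosen agent could not answer."

print(orchestrated("what is the weather today?"))  # routed automatically
print(user_choice("play some jazz", "music"))      # user picks the agent
```

The contrast the study evaluates is visible in the signatures: `orchestrated` takes only the question, while `user_choice` also burdens the user with selecting an agent.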
Related papers
- Multi-Agents are Social Groups: Investigating Social Influence of Multiple Agents in Human-Agent Interactions [7.421573539569854]
We investigate whether a group of AI agents can create social pressure on users to agree with them.
We found that conversing with multiple agents increased the social pressure felt by participants.
Our study shows the potential advantages of multi-agent systems over single-agent platforms in causing opinion change.
arXiv Detail & Related papers (2024-11-07T10:00:46Z)
- A Survey on Complex Tasks for Goal-Directed Interactive Agents [60.53915548970061]
This survey compiles relevant tasks and environments for evaluating goal-directed interactive agents.
An up-to-date compilation of relevant resources can be found on our project website.
arXiv Detail & Related papers (2024-09-27T08:17:53Z)
- AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z)
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI)
We start by tracing the concept of the agent from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Conveying Autonomous Robot Capabilities through Contrasting Behaviour Summaries [8.413049356622201]
We present an adaptive search method for efficiently generating contrasting behaviour summaries.
Our results indicate that adaptive search can efficiently identify informative contrasting scenarios that enable humans to accurately select the better performing agent.
arXiv Detail & Related papers (2023-04-01T18:20:59Z)
- Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs the multi-agent options through minimizing the expected cover time of the multiple agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperform prior work using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z)
- One Agent To Rule Them All: Towards Multi-agent Conversational AI [6.285901070328973]
We introduce a new task BBAI: Black-Box Agent Integration, focusing on combining the capabilities of multiple black-box CAs at scale.
We explore two techniques aimed at resolving this task: question-agent pairing and question-response pairing.
We demonstrate that OFA is able to automatically and accurately integrate an ensemble of commercially available CAs spanning disparate domains.
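The two BBAI techniques named above can be contrasted in a minimal sketch: question-agent pairing routes the question to a single agent before answering, while question-response pairing queries every agent and then selects among the responses. The demo agents, keyword profiles, and overlap-based scorer below are assumptions made for illustration, not the paper's actual method.

```python
# Illustrative contrast of the two BBAI techniques. DEMO_AGENTS, the
# keyword profiles, and the word-overlap scorer are assumptions for this
# sketch, not the paper's implementation.

AGENT_PROFILES = {
    "cooking": {"recipe", "bake", "cook"},
    "travel": {"flight", "hotel", "trip"},
}

DEMO_AGENTS = {
    "cooking": lambda q: "Try a simple bake recipe.",
    "travel": lambda q: "Book a flight and a hotel.",
}

def score(question, keywords):
    """Crude relevance score: word overlap between question and keywords."""
    return len(set(question.lower().split()) & keywords)

def question_agent_pairing(question, agents):
    """Pick the single most relevant agent up front, then query only it."""
    best = max(agents, key=lambda name: score(question, AGENT_PROFILES[name]))
    return agents[best](question)

def question_response_pairing(question, agents):
    """Query every agent, then select the most relevant response."""
    responses = [agent(question) for agent in agents.values()]
    return max(responses, key=lambda r: score(question, set(r.lower().split())))

print(question_agent_pairing("how do I bake bread", DEMO_AGENTS))
print(question_response_pairing("find me a flight to Paris", DEMO_AGENTS))
```

The trade-off the sketch exposes: question-agent pairing costs one agent call per query, while question-response pairing costs one call per agent but can compare actual answers rather than predicted relevance.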
arXiv Detail & Related papers (2022-03-15T06:07:17Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.