CAMEL: Communicative Agents for "Mind" Exploration of Large Language
Model Society
- URL: http://arxiv.org/abs/2303.17760v2
- Date: Thu, 2 Nov 2023 17:34:57 GMT
- Title: CAMEL: Communicative Agents for "Mind" Exploration of Large Language
Model Society
- Authors: Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii
Khizbullin, Bernard Ghanem
- Abstract summary: This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework and offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
- Score: 58.04479313658851
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
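
As a rough illustration of the role-playing setup described in the abstract, the sketch below pairs an "AI user" agent with an "AI assistant" agent, each steered by a fixed inception (system) prompt, and lets them chat in turns until the task is declared done. It assumes an OpenAI-style chat-completion client; the model name, prompts, termination token, and helper function are illustrative assumptions and are not the CAMEL library's actual API (see the linked repository for that).

```python
# Minimal sketch of a two-agent role-playing loop with inception prompting.
# Model name, prompts, and the <TASK_DONE> token are placeholders, not CAMEL's API.
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name

task = "Design a simple trading bot for the stock market."

# Inception prompts: fixed system messages that keep each agent in role
# and aligned with the human-specified task without further human input.
user_system = (
    "You are an AI user playing the role of a stock trader. "
    f"Give the assistant one instruction per turn to complete this task: {task} "
    "When the task is complete, reply with exactly <TASK_DONE>."
)
assistant_system = (
    "You are an AI assistant playing the role of a Python programmer. "
    "Follow the user's latest instruction and give a concrete solution for it."
)

def chat(system_prompt: str, history: list[dict]) -> str:
    """One chat-completion call with a role-defining system prompt."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt}, *history],
    )
    return response.choices[0].message.content

user_history: list[dict] = [{"role": "user", "content": f"The task is: {task}. Begin."}]
assistant_history: list[dict] = []

for turn in range(10):  # cap the number of turns
    instruction = chat(user_system, user_history)        # AI user issues an instruction
    if "<TASK_DONE>" in instruction:
        break
    assistant_history.append({"role": "user", "content": instruction})
    solution = chat(assistant_system, assistant_history)  # AI assistant responds
    assistant_history.append({"role": "assistant", "content": solution})
    user_history.append({"role": "assistant", "content": instruction})
    user_history.append({"role": "user", "content": solution})
    print(f"[User] {instruction}\n[Assistant] {solution}\n")
```

In this framing, the inception prompts are what keep both agents on task without further human intervention; the loop simply alternates instruction and solution messages until the user agent emits the termination token.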
Related papers
- COMMA: A Communicative Multimodal Multi-Agent Benchmark [7.831385481814481]
We introduce a novel benchmark designed to evaluate the collaborative performance of multimodal multi-agent systems through language communication.
By testing both agent-agent and agent-human collaborations using open-source and closed-source models, we reveal surprising weaknesses in state-of-the-art models.
arXiv Detail & Related papers (2024-10-10T02:49:47Z)
- A Survey on Complex Tasks for Goal-Directed Interactive Agents [60.53915548970061]
This survey compiles relevant tasks and environments for evaluating goal-directed interactive agents.
An up-to-date compilation of relevant resources can be found on our project website.
arXiv Detail & Related papers (2024-09-27T08:17:53Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Verco: Learning Coordinated Verbal Communication for Multi-agent Reinforcement Learning [42.27106057372819]
We propose a novel multi-agent reinforcement learning algorithm that embeds large language models into agents.
The framework has a message module and an action module.
Experiments conducted on the Overcooked game demonstrate that our method significantly enhances the learning efficiency and performance of existing methods.
arXiv Detail & Related papers (2024-04-27T05:10:33Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.