Cooperation on the Fly: Exploring Language Agents for Ad Hoc Teamwork in
the Avalon Game
- URL: http://arxiv.org/abs/2312.17515v1
- Date: Fri, 29 Dec 2023 08:26:54 GMT
- Title: Cooperation on the Fly: Exploring Language Agents for Ad Hoc Teamwork in
the Avalon Game
- Authors: Zijing Shi, Meng Fang, Shunfeng Zheng, Shilong Deng, Ling Chen, Yali
Du
- Abstract summary: This study focuses on the ad hoc teamwork problem where the agent operates in an environment driven by natural language.
Our findings reveal the potential of LLM agents in team collaboration, highlighting issues related to hallucinations in communication.
To address this issue, we develop CodeAct, a general agent that equips the LLM with enhanced memory and code-driven reasoning.
- Score: 25.823665278297057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-agent collaboration with Large Language Models (LLMs) demonstrates
proficiency in basic tasks, yet its efficiency in more complex scenarios
remains unexplored. In gaming environments, these agents often face situations
without established coordination protocols, requiring them to make intelligent
inferences about teammates from limited data. This problem motivates the area
of ad hoc teamwork, in which an agent may potentially cooperate with a variety
of teammates to achieve a shared goal. Our study focuses on the ad hoc teamwork
problem where the agent operates in an environment driven by natural language.
Our findings reveal the potential of LLM agents in team collaboration,
highlighting issues related to hallucinations in communication. To address this
issue, we develop CodeAct, a general agent that equips the LLM with enhanced memory
and code-driven reasoning, enabling the repurposing of partial information for
rapid adaptation to new teammates.
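The abstract's description of CodeAct (enhanced memory plus code-driven reasoning over partial information about teammates) can be sketched roughly as follows. All names here (`MemoryStore`, `propose_program`) and the toy voting rule are illustrative assumptions, not the paper's actual interface.

```python
# Illustrative sketch of a memory-augmented, code-driven agent loop.
# Every identifier below is an assumption for illustration, not the
# interface from the CodeAct paper.

class MemoryStore:
    """Accumulates partial observations about teammates across rounds."""
    def __init__(self):
        self.events = []

    def add(self, event):
        self.events.append(event)

    def summary(self):
        # A real agent would condense this for the LLM's context window.
        return "; ".join(self.events)

def propose_program(observation, memory_summary):
    """Stand-in for an LLM call that emits executable reasoning code.

    Grounding decisions in generated-then-executed code, rather than
    free-form text, is what curbs hallucinated conclusions."""
    return "action = 'approve' if 'teammate voted approve' in memory else 'reject'"

def act(observation, memory):
    memory.add(observation)
    program = propose_program(observation, memory.summary())
    scope = {"memory": memory.summary()}
    exec(program, scope)  # code-driven reasoning step
    return scope["action"]

memory = MemoryStore()
print(act("teammate voted approve", memory))  # → approve
```

The point of the sketch is the division of labor: the memory persists across interactions with a new teammate, while each decision is delegated to a small executable program rather than a free-form textual judgment.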
Related papers
- Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information [36.11862095329315]
Large language models (LLMs) have shown success in handling simple games with imperfect information.
This study investigates the applicability of knowledge acquired by open-source and API-based LLMs to sophisticated text-based games.
arXiv Detail & Related papers (2024-08-05T15:36:46Z)
- Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation [52.930183136111864]
We propose using scorable negotiation to evaluate Large Language Models (LLMs).
To reach an agreement, agents must have strong arithmetic, inference, exploration, and planning capabilities.
We provide procedures to create new games and increase games' difficulty to have an evolving benchmark.
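The "scorable" aspect of this benchmark can be sketched as follows: each agent holds private per-issue weights, so any candidate deal maps to a number that can be checked against a threshold. The issues, weights, and threshold below are invented for illustration; the benchmark's actual games differ.

```python
# Minimal sketch of a scorable negotiation. The issue names and
# weight tables are hypothetical examples, not from the benchmark.

def score(deal, weights):
    """Sum the agent's weight for the option chosen on each issue."""
    return sum(weights[issue][option] for issue, option in deal.items())

def acceptable(deal, weights, threshold):
    """An agent agrees only if the deal clears its private threshold."""
    return score(deal, weights) >= threshold

# Two agents with opposing preferences on price, mild ones on delivery.
weights_a = {"price": {"high": 3, "low": 0}, "delivery": {"fast": 1, "slow": 2}}
weights_b = {"price": {"high": 0, "low": 3}, "delivery": {"fast": 2, "slow": 1}}

deal = {"price": "high", "delivery": "fast"}
print(score(deal, weights_a), score(deal, weights_b))  # → 4 2
```

Because every proposal has an exact numeric value for each party, the evaluation can measure arithmetic and planning ability directly instead of judging free-form dialogue.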
arXiv Detail & Related papers (2023-09-29T13:33:06Z) - MindAgent: Emergent Gaming Interaction [103.73707345211892]
Large Language Models (LLMs) have the capacity of performing complex scheduling in a multi-agent system.
We propose MindAgent to evaluate planning and coordination emergent capabilities for gaming interaction.
arXiv Detail & Related papers (2023-09-18T17:52:22Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Building Cooperative Embodied Agents Modularly with Large Language
Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z) - CAMEL: Communicative Agents for "Mind" Exploration of Large Language
Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing this framework and offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Learning to Cooperate with Unseen Agent via Meta-Reinforcement Learning [4.060731229044571]
The ad hoc teamwork problem describes situations where an agent has to cooperate with previously unseen agents to achieve a common goal.
One could implement cooperative skills into an agent by using domain knowledge to design the agent's behavior.
We apply meta-reinforcement learning (meta-RL) formulation in the context of the ad hoc teamwork problem.
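The meta-RL framing can be sketched with a toy coordination task: meta-train an initialization across sampled teammate types so that a few inner-loop gradient steps adapt the policy to an unseen teammate. Everything below (the quadratic coordination reward, the Reptile-style outer update) is an illustrative stand-in, not the paper's method.

```python
# Toy sketch of meta-RL for ad hoc teamwork. The scalar "policy" and
# quadratic reward are hypothetical simplifications for illustration.
import random

def interact(theta, teammate_bias):
    """Coordination reward: higher when the policy parameter matches
    the teammate's preferred convention."""
    return -(theta - teammate_bias) ** 2

def adapt(theta, teammate_bias, steps=5, lr=0.25):
    """Inner loop: a few gradient-ascent steps on one teammate's reward."""
    for _ in range(steps):
        grad = -2.0 * (theta - teammate_bias)  # d(reward)/d(theta)
        theta += lr * grad
    return theta

def meta_train(theta=0.0, tasks=200, seed=0):
    """Outer loop: nudge the initialization toward the adapted parameters
    for each sampled teammate type (a Reptile-style update)."""
    rng = random.Random(seed)
    for _ in range(tasks):
        bias = rng.uniform(-1.0, 1.0)     # sample a teammate type
        adapted = adapt(theta, bias)
        theta += 0.1 * (adapted - theta)  # move init toward adapted params
    return theta

theta0 = meta_train()
# After meta-training, a handful of steps suffices for a new teammate.
adapted = adapt(theta0, 0.7, steps=5)
```

The design point is the split between the two loops: the outer loop never commits to any one teammate, it only shapes an initialization from which the cheap inner loop can specialize quickly, which is exactly the ad hoc teamwork requirement of cooperating with agents not seen during training.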
arXiv Detail & Related papers (2021-11-05T12:01:28Z)
- Networked Multi-Agent Reinforcement Learning with Emergent Communication [18.47483427884452]
Multi-Agent Reinforcement Learning (MARL) methods find optimal policies for agents that operate in the presence of other learning agents.
One way to coordinate is by learning to communicate with each other.
Can the agents develop a language while learning to perform a common task?
arXiv Detail & Related papers (2020-04-06T16:13:23Z)
- On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.