Learning to Cooperate with Unseen Agent via Meta-Reinforcement Learning
- URL: http://arxiv.org/abs/2111.03431v1
- Date: Fri, 5 Nov 2021 12:01:28 GMT
- Title: Learning to Cooperate with Unseen Agent via Meta-Reinforcement Learning
- Authors: Rujikorn Charakorn, Poramate Manoonpong, Nat Dilokthanakul
- Abstract summary: The ad hoc teamwork problem describes situations where an agent has to cooperate with previously unseen agents to achieve a common goal.
One could implement cooperative skills into an agent by using domain knowledge to design the agent's behavior.
We apply a meta-reinforcement learning (meta-RL) formulation in the context of the ad hoc teamwork problem.
- Score: 4.060731229044571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ad hoc teamwork problem describes situations where an agent has to
cooperate with previously unseen agents to achieve a common goal. For an agent
to be successful in these scenarios, it must have suitable cooperative skills.
One could implement cooperative skills into an agent by using domain knowledge
to design the agent's behavior. However, in complex domains, domain knowledge
might not be available. Therefore, it is worthwhile to explore how to directly
learn cooperative skills from data. In this work, we apply a
meta-reinforcement learning (meta-RL) formulation in the context of the ad hoc
teamwork problem. Our empirical results show that such a method could produce
robust cooperative agents in two cooperative environments with different
cooperative circumstances: social compliance and language interpretation.
(This is the full-paper version of the extended abstract.)
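The meta-RL formulation trains an agent whose inner loop adapts to a new, unseen partner within a single episode. A minimal sketch of that target adaptation pattern on a toy "social compliance" game (the game, `K`, and the elimination rule are illustrative assumptions, not the paper's method):

```python
# Toy "social compliance" game: the partner follows one of K conventions
# (actions), and the agent earns reward 1 only when it matches it.
# K, the game, and the elimination rule are illustrative assumptions.

K = 4  # number of possible conventions

def adapt_to_partner(partner_action, trials=10):
    """In-episode adaptation: try conventions, discard ones that fail.

    This hand-coded loop mimics the fast adaptation that meta-RL aims
    to *learn*: a few unrewarded trials identify the unseen partner.
    """
    candidates = list(range(K))  # conventions not yet ruled out
    rewards = []
    for _ in range(trials):
        action = candidates[0]             # act on the current belief
        reward = 1 if action == partner_action else 0
        rewards.append(reward)
        if reward == 0:
            candidates.pop(0)              # rule out the failed convention
    return rewards

# After at most K - 1 failed trials the agent locks onto the convention.
print(adapt_to_partner(partner_action=2))  # -> [0, 0, 1, 1, 1, 1, 1, 1, 1, 1]
```

A meta-trained recurrent policy would acquire this trial-and-elimination behaviour from experience across many partners rather than having it hand-coded; the sketch only shows the adaptation pattern that meta-training is meant to produce.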
Related papers
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Cooperation on the Fly: Exploring Language Agents for Ad Hoc Teamwork in the Avalon Game [25.823665278297057]
This study focuses on the ad hoc teamwork problem where the agent operates in an environment driven by natural language.
Our findings reveal the potential of LLM agents in team collaboration, highlighting issues related to hallucinations in communication.
To address this issue, we develop CodeAct, a general agent that equips LLM with enhanced memory and code-driven reasoning.
arXiv Detail & Related papers (2023-12-29T08:26:54Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria [57.74495091445414]
Social deduction games offer an avenue to study how individuals might learn to synthesize potentially unreliable information about others.
In this work, we present Hidden Agenda, a two-team social deduction game that provides a 2D environment for studying learning agents in scenarios of unknown team alignment.
Reinforcement learning agents trained in Hidden Agenda show that agents can learn a variety of behaviors, including partnering and voting, without the need for communication in natural language.
arXiv Detail & Related papers (2022-01-05T20:54:10Z)
- Interactive Inverse Reinforcement Learning for Cooperative Games [7.257751371276486]
We study the problem of designing AI agents that can learn to cooperate effectively with a potentially suboptimal partner.
This problem is modeled as a cooperative episodic two-agent Markov decision process.
We show that when the learning agent's policies have a significant effect on the transition function, the reward function can be learned efficiently.
arXiv Detail & Related papers (2021-11-08T18:24:52Z)
- Behaviour-conditioned policies for cooperative reinforcement learning tasks [41.74498230885008]
In various real-world tasks, an agent needs to cooperate with unknown partner agent types.
Deep reinforcement learning models can be trained to deliver the required functionality but are known to suffer from sample inefficiency and slow learning.
We suggest a method where we synthetically produce populations of agents with different behavioural patterns, together with ground-truth data of their behaviour.
We additionally suggest an agent architecture which can efficiently use the generated data and gain meta-learning capability.
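The two-step recipe above (synthesize a labelled partner population, then let the learner infer partner behaviour from observations) can be sketched in miniature. The biased-coin "partners" and all names below are assumptions for illustration, not the paper's architecture:

```python
import random

# Sketch of the synthetic-population idea: generate partner agents with a
# known behaviour parameter (the ground truth), observe their actions, and
# check that the behaviour can be recovered from observations alone.
# The biased-coin partners and all names here are illustrative assumptions.

random.seed(0)

def make_partner(bias):
    """Partner that plays action 1 with probability `bias`, else action 0."""
    return lambda: 1 if random.random() < bias else 0

def estimate_bias(partner, n_steps=500):
    """Behaviour inference: empirical frequency of action 1."""
    return sum(partner() for _ in range(n_steps)) / n_steps

# Population with ground-truth behaviour labels.
population = [(b, make_partner(b)) for b in (0.1, 0.5, 0.9)]
errors = [abs(estimate_bias(p) - b) for b, p in population]
# The labels let us verify that the inferred behaviour is accurate.
```

A behaviour-conditioned policy would then consume the inferred parameter (or a learned embedding of it) as an extra input; the ground-truth labels are what make that inference both trainable and testable.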
arXiv Detail & Related papers (2021-10-04T09:16:41Z)
- Emergence of Pragmatics from Referential Game between Theory of Mind Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn the ability to "read between the lines" without any explicit hand-designed rules.
We integrate the theory of mind (ToM) in a cooperative multi-agent pedagogical situation and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.