Learning to Infer Belief Embedded Communication
- URL: http://arxiv.org/abs/2203.07832v1
- Date: Tue, 15 Mar 2022 12:42:10 GMT
- Title: Learning to Infer Belief Embedded Communication
- Authors: Guo Ye and Han Liu and Biswa Sengupta
- Abstract summary: This paper introduces a novel algorithm to mimic an agent's language learning ability.
It contains a perception module for decoding other agents' intentions from their past actions.
It also includes a language generation module for learning implicit grammar during communication with two or more agents.
- Score: 9.862909791015237
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In multi-agent collaboration problems with communication, an agent's ability
to encode their intention and interpret other agents' strategies is critical
for planning their future actions. This paper introduces a novel algorithm
called Intention Embedded Communication (IEC) to mimic an agent's language
learning ability. IEC contains a perception module for decoding other agents'
intentions from their past actions. It also includes a language
generation module for learning implicit grammar during communication with two
or more agents. Such grammar, by construction, should be compact for efficient
communication. Both modules undergo conjoint evolution, much like the babbling
through which an infant learns a language by trial and error. We evaluate IEC
on three multi-agent environments, namely predator/prey, traffic junction and
level-based foraging, and show that such co-evolution learns roughly 50% faster
than state-of-the-art algorithms such as MADDPG. Ablation studies further show
that disabling the belief-inference module, the communication module and the
hidden states reduces model performance by 38%, 60% and 30%, respectively.
Hence, we suggest that modelling other agents' behaviour helps an agent learn
grammar faster and develop a language for efficient communication. We evaluate
our method on a set of cooperative
scenarios and show its superior performance to other multi-agent baselines. We
also demonstrate that it is essential for agents to reason about others' states
and learn this ability by continuous communication.
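The abstract describes a two-module design: a perception module that infers beliefs about other agents from their past actions, and a language module that emits compact messages from that belief. Below is a minimal sketch of such a design in PyTorch; the class names, dimensions and the Gumbel-softmax message head are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a two-module agent in PyTorch. Class names, sizes and the
# Gumbel-softmax message head are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptionModule(nn.Module):
    """Infers a belief about another agent's intention from its past actions."""
    def __init__(self, action_dim: int, belief_dim: int):
        super().__init__()
        self.rnn = nn.GRU(action_dim, belief_dim, batch_first=True)

    def forward(self, past_actions: torch.Tensor) -> torch.Tensor:
        # past_actions: (batch, time, action_dim) -> belief: (batch, belief_dim)
        _, h = self.rnn(past_actions)
        return h.squeeze(0)

class LanguageModule(nn.Module):
    """Emits a compact discrete message conditioned on the inferred belief."""
    def __init__(self, belief_dim: int, vocab_size: int):
        super().__init__()
        self.to_logits = nn.Linear(belief_dim, vocab_size)

    def forward(self, belief: torch.Tensor) -> torch.Tensor:
        # A discrete relaxation keeps the message differentiable, so both
        # modules can be trained jointly ("conjoint evolution").
        return F.gumbel_softmax(self.to_logits(belief), hard=True)

perception = PerceptionModule(action_dim=5, belief_dim=32)
language = LanguageModule(belief_dim=32, vocab_size=8)
message = language(perception(torch.randn(4, 10, 5)))  # (4, 8) one-hot messages
```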
Related papers
- Verco: Learning Coordinated Verbal Communication for Multi-agent Reinforcement Learning [42.27106057372819]
We propose a novel multi-agent reinforcement learning algorithm that embeds large language models into agents.
The framework has a message module and an action module.
Experiments conducted on the Overcooked game demonstrate that our method significantly enhances the learning efficiency and performance of existing methods.
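The entry mentions only a message module and an action module built around an LLM. Here is a loose sketch of that split; `llm`, the prompt format and the keyword-based policy are placeholders for illustration, not Verco's actual interface.

```python
# Loose sketch of a message/action split around a text model; `llm` is a
# placeholder stub, not Verco's actual interface.
def llm(prompt: str) -> str:
    # Stand-in for any text-generation call.
    return "I'll chop the onions; you plate the soup."

def message_module(observation: str, teammates: list[str]) -> str:
    # Compose a verbal message for teammates from the current observation.
    prompt = f"Observation: {observation}\nTeammates: {teammates}\nSay something useful:"
    return llm(prompt)

def action_module(observation: str, incoming: list[str]) -> str:
    # A learned policy would go here; keyword dispatch keeps the sketch runnable.
    return "plate" if "plate" in " ".join(incoming) else "move"

msg = message_module("holding soup", ["Bob"])
act = action_module("holding soup", [msg])  # -> "plate"
```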
arXiv Detail & Related papers (2024-04-27T05:10:33Z)
- ChatDev: Communicative Agents for Software Development [84.90400377131962]
ChatDev is a chat-powered software development framework in which specialized agents are guided in what to communicate.
These agents actively contribute to the design, coding, and testing phases through unified language-based communication.
arXiv Detail & Related papers (2023-07-16T02:11:34Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that the resulting framework, CoELA, driven by GPT-4, can surpass strong planning-based methods and exhibit effective emergent communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked with finding a sequence of goals.
We show that the emergent communication can be grounded in the agents' observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Learning Emergent Discrete Message Communication for Cooperative Reinforcement Learning [36.468498804251574]
We show that discrete message communication has performance comparable to continuous message communication.
We propose an approach that allows humans to interactively send discrete messages to agents.
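A common way to make a discrete channel trainable end to end, of the kind such work builds on, is a straight-through Gumbel-softmax over a small vocabulary; sharing the token embedding table also lets a human inject a message by choosing a token id. The sketch below is an assumption-laden illustration, not the paper's code.

```python
# Hedged sketch of a trainable discrete channel; names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteChannel(nn.Module):
    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.head = nn.Linear(hidden_dim, vocab_size)      # hidden -> token logits
        self.embed = nn.Embedding(vocab_size, hidden_dim)  # token -> message vector

    def agent_message(self, hidden: torch.Tensor) -> torch.Tensor:
        # Straight-through Gumbel-softmax: discrete tokens, usable gradients.
        one_hot = F.gumbel_softmax(self.head(hidden), hard=True)
        return one_hot @ self.embed.weight

    def human_message(self, token: int) -> torch.Tensor:
        # A human picks the same token id an agent would have emitted.
        return self.embed(torch.tensor([token]))

channel = DiscreteChannel(hidden_dim=16, vocab_size=8)
agent_msg = channel.agent_message(torch.randn(2, 16))  # (2, 16)
human_msg = channel.human_message(3)                   # (1, 16)
```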
arXiv Detail & Related papers (2021-02-24T20:44:14Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
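The GNN-based communication studied here reduces to message passing over an agent graph. Below is a minimal sketch of one such layer in plain PyTorch; the sum aggregation and layer shapes are assumptions, not the paper's architecture.

```python
# Minimal message-passing ("communication") layer over an agent graph in
# plain PyTorch; the sum aggregation and layer shapes are assumptions.
import torch
import torch.nn as nn

class CommLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)         # what each agent broadcasts
        self.update = nn.Linear(2 * dim, dim)  # fuse own state with inbox

    def forward(self, states: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # states: (n_agents, dim); adj: (n_agents, n_agents) 0/1 comm graph.
        inbox = adj @ self.msg(states)  # sum of neighbours' messages
        return torch.tanh(self.update(torch.cat([states, inbox], dim=-1)))

layer = CommLayer(dim=16)
# Three agents, fully connected without self-loops.
out = layer(torch.randn(3, 16), torch.ones(3, 3) - torch.eye(3))
```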
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
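As a rough illustration, the learned prior can be read as a gate over agent pairs: a feed-forward network scores whether talking to a given agent is worth the overhead. The sketch below is an assumption; in I2C itself the training target comes from a causal-inference measure.

```python
# Assumption-laden sketch of an I2C-style communication prior: a feed-forward
# gate that decides whether agent i should request communication with agent j.
import torch
import torch.nn as nn

class CommPrior(nn.Module):
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def should_talk(self, obs_i: torch.Tensor, obs_j: torch.Tensor) -> torch.Tensor:
        # In I2C the training target comes from a causal-effect measure: how
        # much conditioning on agent j changes agent i's action distribution.
        score = torch.sigmoid(self.net(torch.cat([obs_i, obs_j], dim=-1)))
        return score > 0.5  # gate the (costly) communication request

prior = CommPrior(obs_dim=10)
talk = prior.should_talk(torch.randn(1, 10), torch.randn(1, 10))  # bool tensor
```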
arXiv Detail & Related papers (2020-06-11T14:07:57Z)
- Networked Multi-Agent Reinforcement Learning with Emergent Communication [18.47483427884452]
Multi-Agent Reinforcement Learning (MARL) methods find optimal policies for agents that operate in the presence of other learning agents.
One way to coordinate is by learning to communicate with each other.
Can the agents develop a language while learning to perform a common task?
arXiv Detail & Related papers (2020-04-06T16:13:23Z)
- Emergence of Pragmatics from Referential Game between Theory of Mind Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn to "read between the lines" without any explicit hand-designed rules.
We integrate theory of mind (ToM) into a cooperative multi-agent pedagogical setting and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.