On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2302.14276v1
- Date: Tue, 28 Feb 2023 03:23:27 GMT
- Title: On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning
- Authors: Seth Karten, Siva Kailas, Huao Li, Katia Sycara
- Abstract summary: Social learning uses cues from experts to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks.
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explicit communication among humans is key to coordinating and learning.
Social learning, which uses cues from experts, can benefit greatly from
explicit communication to align heterogeneous policies, reduce sample
complexity, and solve partially observable tasks. Emergent communication, a
type of explicit communication, studies the creation of an artificial language
to encode a high task-utility message directly from data. However, in most
cases, emergent communication sends insufficiently compressed messages that
carry little or no information and may not be understandable to a
third-party listener. This paper proposes an unsupervised method based on the
information bottleneck to capture both referential complexity and task-specific
utility to adequately explore sparse social communication scenarios in
multi-agent reinforcement learning (MARL). We show that our model is able to i)
develop a natural-language-inspired lexicon of messages that is independently
composed of a set of emergent concepts, which span the observations and intents
with minimal bits, ii) develop communication to align the action policies of
heterogeneous agents with dissimilar feature models, and iii) learn a
communication policy from watching an expert's action policy, which we term
'social shadowing'.
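For orientation, the information bottleneck underlying the proposed method can be stated in generic form (our notation; the paper's exact formulation may differ). The encoder p(m|x) mapping observations and intents x to messages m is chosen to solve

    \min_{p(m \mid x)} \; I(X; M) \;-\; \beta \, I(M; Y)

where I(X; M) measures referential complexity (the bits the message spends on the input), I(M; Y) measures task-specific utility (the information the message carries about the task-relevant target Y), and \beta > 0 trades the two off.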
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
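A minimal sketch of how such a learnable graph with a per-agent temporal receive-gate could look (all names, shapes, and the Gumbel-softmax gating are illustrative assumptions, not the authors' code):

    # Hypothetical sketch: learnable communication graph with temporal gating.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedCommGraph(nn.Module):
        def __init__(self, n_agents: int, obs_dim: int, msg_dim: int):
            super().__init__()
            # One logit per directed edge (sender -> receiver), learned end-to-end.
            self.edge_logits = nn.Parameter(torch.zeros(n_agents, n_agents))
            self.encoder = nn.Linear(obs_dim, msg_dim)  # observation -> message
            self.gate = nn.Linear(obs_dim, 2)           # receive vs. ignore

        def forward(self, obs: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
            msgs = self.encoder(obs)                                    # (n, msg_dim)
            # Differentiable binary edge samples via Gumbel-softmax.
            pair = torch.stack([self.edge_logits, -self.edge_logits], dim=-1)
            edges = F.gumbel_softmax(pair, tau=tau, hard=True)[..., 0]  # (n, n)
            # Temporal gate: each agent decides from its current observation
            # whether to receive any shared information at this timestep.
            recv = F.gumbel_softmax(self.gate(obs), tau=tau, hard=True)[..., 0]
            incoming = edges @ msgs                    # sum messages over senders
            return recv.unsqueeze(-1) * incoming

    comm = GatedCommGraph(n_agents=3, obs_dim=8, msg_dim=4)
    print(comm(torch.randn(3, 8)).shape)  # torch.Size([3, 4])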
arXiv Detail & Related papers (2024-11-01T05:56:51Z)
- Learning Multi-Agent Communication with Contrastive Learning [3.816854668079928]
We introduce an alternative perspective where communicative messages are considered as different incomplete views of the environment state.
By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning.
In communication-essential environments, our method outperforms previous work in both performance and learning speed.
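A self-contained sketch of the contrastive objective this suggests (our InfoNCE construction under assumed details, not the authors' implementation): two agents' messages about the same state form a positive pair, and other states in the batch serve as negatives.

    # Illustrative InfoNCE loss over paired messages (assumed construction).
    import torch
    import torch.nn.functional as F

    def info_nce(msg_a: torch.Tensor, msg_b: torch.Tensor, temperature: float = 0.1):
        # msg_a, msg_b: (batch, dim) messages emitted for the same states.
        a = F.normalize(msg_a, dim=-1)
        b = F.normalize(msg_b, dim=-1)
        logits = a @ b.t() / temperature   # (batch, batch) similarities
        labels = torch.arange(a.size(0))   # positives lie on the diagonal
        return F.cross_entropy(logits, labels)

    loss = info_nce(torch.randn(32, 16), torch.randn(32, 16))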
arXiv Detail & Related papers (2023-07-03T23:51:05Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our role-playing framework offers a scalable approach to studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Cognitive Semantic Communication Systems Driven by Knowledge Graph: Principle, Implementation, and Performance Evaluation [74.38561925376996]
Two cognitive semantic communication frameworks are proposed for the single-user and multiple-user communication scenarios.
An effective semantic correction algorithm is proposed by mining the inference rule from the knowledge graph.
For the multi-user cognitive semantic communication system, a message recovery algorithm is proposed to distinguish messages of different users.
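A toy sketch of the knowledge-graph-based correction idea above (our construction; the paper's algorithm mines inference rules rather than using this lookup):

    # Toy example: correct a decoded semantic triple against a knowledge graph.
    KG = {("dog", "is_a", "animal"), ("cat", "is_a", "animal")}

    def correct(triple):
        # If the decoded triple is not in the KG, fall back to a KG triple
        # sharing the same subject and relation, if one exists.
        if triple in KG:
            return triple
        s, r, _ = triple
        candidates = [t for t in KG if t[0] == s and t[1] == r]
        return candidates[0] if candidates else triple

    print(correct(("dog", "is_a", "vehicle")))  # ('dog', 'is_a', 'animal')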
arXiv Detail & Related papers (2023-03-15T12:01:43Z)
- Emergent Quantized Communication [34.31732248872158]
We propose an alternative approach to achieving discrete communication: quantization of the communicated messages.
Message quantization allows us to train the model end-to-end, achieving superior performance in multiple setups.
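A minimal sketch of message quantization with a straight-through estimator, which is one standard way to keep such a model end-to-end trainable (assumed details; the paper's quantizer may differ):

    # Straight-through quantization of messages in [0, 1].
    import torch

    def quantize(msg: torch.Tensor, levels: int = 4) -> torch.Tensor:
        # Hard rounding in the forward pass; identity gradient in the backward.
        hard = torch.round(msg * (levels - 1)) / (levels - 1)
        return msg + (hard - msg).detach()

    x = torch.randn(5, requires_grad=True)
    q = quantize(torch.sigmoid(x))
    q.sum().backward()
    print(x.grad)  # gradients flow to the continuous message parameters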
arXiv Detail & Related papers (2022-11-04T12:39:45Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
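Schematically, situated communication can be expressed as a single joint action space in which speaking replaces an environmental action for that timestep (our illustration, not the paper's environment):

    # Acting and speaking share one action slot per timestep.
    ENV_ACTIONS = ["up", "down", "left", "right"]
    MESSAGES = ["msg_0", "msg_1"]
    ACTIONS = ENV_ACTIONS + MESSAGES  # one choice per step: act or speak

    def step(choice: int):
        if choice < len(ENV_ACTIONS):
            return ("env", ENV_ACTIONS[choice])               # moves in the world
        return ("comm", MESSAGES[choice - len(ENV_ACTIONS)])  # forgoes acting

    print(step(1))  # ('env', 'down')
    print(step(4))  # ('comm', 'msg_0')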
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- Learning to Ground Decentralized Multi-Agent Communication with Contrastive Learning [1.116812194101501]
We introduce an alternative perspective on the communicative messages sent between agents, considering them as different incomplete views of the environment state.
We propose a simple approach to induce the emergence of a common language by maximizing the mutual information between messages of a given trajectory in a self-supervised manner.
arXiv Detail & Related papers (2022-03-07T12:41:32Z)
- Curriculum-Driven Multi-Agent Learning and the Role of Implicit Communication in Teamwork [24.92668968807012]
We propose a curriculum-driven learning strategy for solving difficult multi-agent coordination tasks.
We argue that emergent implicit communication plays a large role in enabling superior levels of coordination.
arXiv Detail & Related papers (2021-06-21T14:54:07Z)
- Learning Emergent Discrete Message Communication for Cooperative Reinforcement Learning [36.468498804251574]
We show that discrete message communication has performance comparable to continuous message communication.
We propose an approach that allows humans to interactively send discrete messages to agents.
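One simple way such an interactive interface could work (a hypothetical sketch; the vocabulary and encoding are our assumptions): the human picks a word from the agents' discrete vocabulary, and it is encoded in the same one-hot format the agents exchange.

    # Encode a human-chosen word as a one-hot message for the agent.
    import numpy as np

    VOCAB = ["stop", "go", "left", "right"]

    def human_message(word: str) -> np.ndarray:
        vec = np.zeros(len(VOCAB), dtype=np.float32)
        vec[VOCAB.index(word)] = 1.0
        return vec

    obs_with_msg = np.concatenate([np.random.rand(8), human_message("left")])
    print(obs_with_msg.shape)  # (12,)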
arXiv Detail & Related papers (2021-02-24T20:44:14Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework, Learning Structured Communication (LSC), which uses a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
- Emergence of Pragmatics from Referential Game between Theory of Mind Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn to "read between the lines" without any explicit hand-designed rules.
We integrate the theory of mind (ToM) in a cooperative multi-agent pedagogical situation and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.