A Survey of Multi-Agent Reinforcement Learning with Communication
- URL: http://arxiv.org/abs/2203.08975v1
- Date: Wed, 16 Mar 2022 22:39:46 GMT
- Title: A Survey of Multi-Agent Reinforcement Learning with Communication
- Authors: Changxi Zhu, Mehdi Dastani, Shihan Wang
- Abstract summary: Communication is an effective mechanism for coordinating the behavior of multiple agents.
There is a lack of a systematic and structured approach to distinguish and classify existing Comm-MARL systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication is an effective mechanism for coordinating the behavior of
multiple agents. In the field of multi-agent reinforcement learning, agents can
improve the overall learning performance and achieve their objectives by
communication. Moreover, agents can communicate various types of messages,
either to all agents or to specific agent groups, and through specific
channels. With the growing body of research work in MARL with communication
(Comm-MARL), there is a lack of a systematic and structured approach to
distinguish and classify existing Comm-MARL systems. In this paper, we survey
recent works in the Comm-MARL field and consider various aspects of
communication that can play a role in the design and development of multi-agent
reinforcement learning systems. With these aspects in mind, we propose several
dimensions along which Comm-MARL systems can be analyzed, developed, and
compared.
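The abstract distinguishes messages sent to all agents (broadcast) from messages sent to specific agent groups over specific channels. A minimal toy sketch of that distinction is shown below; it is illustrative only, and the `Agent` class, linear encoder, and mean-pooling aggregator are assumptions for the sketch, not taken from any surveyed system.

```python
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    """Toy agent that encodes its local observation into a message vector."""
    def __init__(self, name, msg_dim=4):
        self.name = name
        self.W = rng.normal(size=(msg_dim, msg_dim))  # toy "encoder" weights

    def encode(self, obs):
        # Message = linear encoding of the local observation.
        return self.W @ obs

def exchange(agents, observations, targets=None):
    """One communication round.

    targets=None  -> broadcast: every agent messages every other agent.
    targets=dict  -> targeted: sender name -> list of receiver names.
    Returns {receiver: mean of incoming messages} (zeros if none received).
    """
    inbox = {a.name: [] for a in agents}
    for a in agents:
        msg = a.encode(observations[a.name])
        if targets is None:
            receivers = [b.name for b in agents if b is not a]
        else:
            receivers = targets.get(a.name, [])
        for r in receivers:
            inbox[r].append(msg)
    # Mean-pool incoming messages: a common permutation-invariant aggregator.
    return {name: (np.mean(msgs, axis=0) if msgs else np.zeros(4))
            for name, msgs in inbox.items()}
```

Under a targeted scheme, an agent outside every sender's receiver list gets no information that round, which is exactly the design choice (who talks to whom, over which channel) that the survey's proposed dimensions are meant to make explicit.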
Related papers
- Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z)
- Fully Independent Communication in Multi-Agent Reinforcement Learning [4.470370168359807]
Multi-Agent Reinforcement Learning (MARL) comprises a broad area of research within the field of multi-agent systems.
We investigate how independent learners in MARL that do not share parameters can communicate.
Our results show that, despite the challenges, independent agents can still learn communication strategies following our method.
arXiv Detail & Related papers (2024-01-26T18:42:01Z) - Large Language Model Enhanced Multi-Agent Systems for 6G Communications [94.45712802626794]
We propose a multi-agent system with customized communication knowledge and tools for solving communication related tasks using natural language.
We validate the effectiveness of the proposed multi-agent system by designing a semantic communication system.
arXiv Detail & Related papers (2023-12-13T02:35:57Z)
- Multi-Agent Reinforcement Learning Based on Representational Communication for Large-Scale Traffic Signal Control [13.844458247041711]
Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom".
arXiv Detail & Related papers (2023-10-03T21:06:51Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include this novel communicative agent framework and a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Scalable Communication for Multi-Agent Reinforcement Learning via Transformer-Based Email Mechanism [9.607941773452925]
Communication can impressively improve cooperation in multi-agent reinforcement learning (MARL).
We propose a novel framework Transformer-based Email Mechanism (TEM) to tackle the scalability problem of MARL communication for partially-observed tasks.
arXiv Detail & Related papers (2023-01-05T05:34:30Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning [6.18778092044887]
Many real-world problems require the coordination of multiple autonomous agents.
Recent work has shown the promise of Graph Neural Networks (GNNs) to learn explicit communication strategies that enable complex multi-agent coordination.
We show how a single self-interested agent is capable of learning highly manipulative communication strategies that allow it to significantly outperform a cooperative team of agents.
arXiv Detail & Related papers (2020-08-06T12:48:08Z)
- Counterfactual Multi-Agent Reinforcement Learning with Graph Convolution Communication [5.5438676149999075]
We consider a fully cooperative multi-agent system where agents cooperate to maximize a system's utility.
We propose that multi-agent systems must have the ability to communicate and to understand the interplay between agents.
We develop an architecture that allows for communication among agents and tailors the system's reward for each individual agent.
arXiv Detail & Related papers (2020-04-01T14:36:13Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed as Learning Structured Communication (LSC) by using a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.