A Survey of Multi-Agent Reinforcement Learning with Communication
- URL: http://arxiv.org/abs/2203.08975v1
- Date: Wed, 16 Mar 2022 22:39:46 GMT
- Title: A Survey of Multi-Agent Reinforcement Learning with Communication
- Authors: Changxi Zhu, Mehdi Dastani, Shihan Wang
- Abstract summary: Communication is an effective mechanism for coordinating the behavior of multiple agents.
There is a lack of a systematic and structured approach to distinguishing and classifying existing Comm-MARL systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication is an effective mechanism for coordinating the behavior of
multiple agents. In the field of multi-agent reinforcement learning, agents can
improve the overall learning performance and achieve their objectives by
communication. Moreover, agents can communicate various types of messages,
either to all agents or to specific agent groups, and through specific
channels. With the growing body of research work in MARL with communication
(Comm-MARL), there is a lack of a systematic and structured approach to
distinguish and classify existing Comm-MARL systems. In this paper, we survey
recent works in the Comm-MARL field and consider various aspects of
communication that can play a role in the design and development of multi-agent
reinforcement learning systems. With these aspects in mind, we propose several
dimensions along which Comm-MARL systems can be analyzed, developed, and
compared.
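To make the abstract's notion of communication concrete, the following is a minimal sketch of one broadcast-style Comm-MARL decision step: each agent encodes its local observation into a message, aggregates the other agents' messages, and conditions its policy on both. All names, dimensions, and the random linear encoders/policies here are illustrative assumptions, not any specific method from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, OBS_DIM, MSG_DIM, N_ACTIONS = 3, 4, 2, 5

# Hypothetical per-agent parameters (random, for illustration only):
# an encoder mapping an observation to a message, and a policy mapping
# observation + aggregated messages to action scores.
W_msg = rng.normal(size=(N_AGENTS, OBS_DIM, MSG_DIM))
W_pol = rng.normal(size=(N_AGENTS, OBS_DIM + MSG_DIM, N_ACTIONS))

def communication_step(obs):
    """One round of broadcast communication followed by action selection."""
    # 1. Each agent encodes its local observation into a message.
    messages = np.stack([obs[i] @ W_msg[i] for i in range(N_AGENTS)])
    actions = []
    for i in range(N_AGENTS):
        # 2. Each agent aggregates the messages of the *other* agents
        #    (here by averaging, one common choice among many).
        others = np.mean(np.delete(messages, i, axis=0), axis=0)
        # 3. The policy conditions on the observation plus the aggregate.
        scores = np.concatenate([obs[i], others]) @ W_pol[i]
        actions.append(int(np.argmax(scores)))
    return actions

obs = rng.normal(size=(N_AGENTS, OBS_DIM))
print(communication_step(obs))  # one action index per agent
```

Varying the aggregation (mean, attention, gating) and the recipients (all agents vs. specific groups or channels) corresponds to the design dimensions the survey proposes.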
Related papers
- A Survey of AI Agent Protocols [35.431057321412354]
There is no standard way for large language model (LLM) agents to communicate with external tools or data sources.
This lack of standardized protocols makes it difficult for agents to work together or scale effectively.
A unified communication protocol for LLM agents could change this.
arXiv Detail & Related papers (2025-04-23T14:07:26Z) - Large Language Model Agent: A Survey on Methodology, Applications and Challenges [88.3032929492409]
Large Language Model (LLM) agents, with goal-driven behaviors and dynamic adaptation capabilities, potentially represent a critical pathway toward artificial general intelligence.
This survey systematically deconstructs LLM agent systems through a methodology-centered taxonomy.
Our work provides a unified architectural perspective, examining how agents are constructed, how they collaborate, and how they evolve over time.
arXiv Detail & Related papers (2025-03-27T12:50:17Z) - Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems [11.522282769053817]
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in reasoning, planning, and decision-making.
Researchers have begun incorporating LLMs into multi-agent systems to tackle tasks beyond the scope of single-agent setups.
This survey serves as a catalyst for further innovation, fostering more robust, scalable, and intelligent multi-agent systems.
arXiv Detail & Related papers (2025-02-20T07:18:34Z) - Contextual Knowledge Sharing in Multi-Agent Reinforcement Learning with Decentralized Communication and Coordination [0.9776703963093367]
Decentralized Multi-Agent Reinforcement Learning (Dec-MARL) has emerged as a pivotal approach for addressing complex tasks in dynamic environments.
This paper presents a novel Dec-MARL framework that integrates peer-to-peer communication and coordination, incorporating goal-awareness and time-awareness into the agents' knowledge-sharing processes.
arXiv Detail & Related papers (2025-01-26T22:49:50Z) - Multi-Agent Collaboration Mechanisms: A Survey of LLMs [6.545098975181273]
Multi-Agent Systems (MASs) enable groups of intelligent agents to coordinate and solve complex tasks collectively.
This work provides an extensive survey of the collaborative aspect of MASs and introduces a framework to guide future research.
arXiv Detail & Related papers (2025-01-10T19:56:50Z) - Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Improving Multi-Agent Debate with Sparse Communication Topology [9.041025703879905]
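The learnable-graph entry above describes two mechanisms: a soft communication graph over agents and a per-agent temporal gate that decides whether to receive shared information at a given step. A minimal sketch of one such round follows; the edge and gate logits are random placeholders standing in for parameters that the paper trains end-to-end, and the normalization choice is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
N, MSG = 4, 3  # number of agents, message dimension

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical learnable parameters: edge logits defining a soft
# communication graph, and per-agent gate logits deciding whether an
# agent listens at this timestep. In the paper these are optimized by
# gradient descent; here they are random for illustration.
edge_logits = rng.normal(size=(N, N))
gate_logits = rng.normal(size=N)

def gated_graph_round(messages):
    """Aggregate messages over a soft graph, then apply temporal gates."""
    A = sigmoid(edge_logits)              # soft adjacency: A[i, j] weights j -> i
    np.fill_diagonal(A, 0.0)              # no self-messages
    A = A / A.sum(axis=1, keepdims=True)  # normalize incoming edge weights
    aggregated = A @ messages             # weighted sum of neighbors' messages
    gates = sigmoid(gate_logits)[:, None] # in [0, 1]: receive or suppress
    return gates * aggregated             # gated incoming information

msgs = rng.normal(size=(N, MSG))
incoming = gated_graph_round(msgs)
```

Because both the adjacency and the gates pass through sigmoids, the whole round is differentiable, which is what allows the communication topology itself to be learned by gradient descent.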
Multi-agent debate has proven effective in improving the quality of large language models on reasoning and factuality tasks.
In this paper, we investigate the effect of communication connectivity in multi-agent systems.
Our experiments on GPT and Mistral models reveal that multi-agent debates leveraging sparse communication topology can achieve comparable or superior performance.
arXiv Detail & Related papers (2024-06-17T17:33:09Z) - Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner.
arXiv Detail & Related papers (2024-05-14T12:40:25Z) - Large Language Model Enhanced Multi-Agent Systems for 6G Communications [94.45712802626794]
We propose a multi-agent system with customized communication knowledge and tools for solving communication related tasks using natural language.
We validate the effectiveness of the proposed multi-agent system by designing a semantic communication system.
arXiv Detail & Related papers (2023-12-13T02:35:57Z) - A Review of Cooperation in Multi-agent Learning [5.334450724000142]
Cooperation in multi-agent learning (MAL) is a topic at the intersection of numerous disciplines.
This paper provides an overview of the fundamental concepts, problem settings and algorithms of multi-agent learning.
arXiv Detail & Related papers (2023-12-08T16:42:15Z) - Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z) - CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z) - Meta-CPR: Generalize to Unseen Large Number of Agents with Communication Pattern Recognition Module [29.75594940509839]
We formulate a multi-agent environment with a different number of agents as a multi-tasking problem.
We propose a meta reinforcement learning (meta-RL) framework to tackle this problem.
The proposed framework employs a meta-learned Communication Pattern Recognition (CPR) module to identify communication behavior.
arXiv Detail & Related papers (2021-12-14T08:23:04Z) - Provably Efficient Cooperative Multi-Agent Reinforcement Learning with Function Approximation [15.411902255359074]
We show that it is possible to achieve near-optimal no-regret learning even with a fixed constant communication budget.
Our work generalizes several ideas from the multi-agent contextual and multi-armed bandit literature to MDPs and reinforcement learning.
arXiv Detail & Related papers (2021-03-08T18:51:00Z) - Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed as Learning Structured Communication (LSC) by using a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.