Robust and Efficient Communication in Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2511.11393v1
- Date: Fri, 14 Nov 2025 15:23:11 GMT
- Title: Robust and Efficient Communication in Multi-Agent Reinforcement Learning
- Authors: Zejiao Liu, Yi Li, Jiali Wang, Junqi Tu, Yitian Hong, Fangfei Li, Yang Liu, Toshiharu Sugawara, Yang Tang
- Abstract summary: Multi-agent reinforcement learning (MARL) has made significant strides in enabling coordinated behaviors among autonomous agents. Most existing approaches assume that communication is instantaneous, reliable, and has unlimited bandwidth; these conditions are rarely met in real-world deployments. This survey systematically reviews recent advances in robust and efficient communication strategies for MARL under realistic constraints.
- Score: 18.405707681765453
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-agent reinforcement learning (MARL) has made significant strides in enabling coordinated behaviors among autonomous agents. However, most existing approaches assume that communication is instantaneous, reliable, and has unlimited bandwidth; these conditions are rarely met in real-world deployments. This survey systematically reviews recent advances in robust and efficient communication strategies for MARL under realistic constraints, including message perturbations, transmission delays, and limited bandwidth. Furthermore, because the challenges of low-latency reliability, bandwidth-intensive data sharing, and communication-privacy trade-offs are central to practical MARL systems, we focus on three applications involving cooperative autonomous driving, distributed simultaneous localization and mapping, and federated learning. Finally, we identify key open challenges and future research directions, advocating a unified approach that co-designs communication, learning, and robustness to bridge the gap between theoretical MARL models and practical implementations.
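The realistic constraints the survey enumerates (message perturbations, transmission delays, limited bandwidth) can be mimicked with a toy channel model when experimenting with MARL communication. The sketch below is illustrative only; `drop_prob`, `delay_steps`, and `noise_std` are hypothetical parameters, not quantities defined in the survey.

```python
import random
from collections import deque

class ConstrainedChannel:
    """Toy channel that perturbs, delays, and drops agent messages."""

    def __init__(self, drop_prob=0.1, delay_steps=2, noise_std=0.05, seed=0):
        self.drop_prob = drop_prob
        self.noise_std = noise_std
        self.rng = random.Random(seed)
        # A FIFO buffer models a fixed transmission delay.
        self.buffer = deque([None] * delay_steps)

    def send(self, message):
        """Queue a real-valued message vector; return what arrives now."""
        if self.rng.random() < self.drop_prob:
            self.buffer.append(None)  # packet lost
        else:
            noisy = [m + self.rng.gauss(0.0, self.noise_std) for m in message]
            self.buffer.append(noisy)
        return self.buffer.popleft()  # the message sent delay_steps ago
```

Wrapping each inter-agent message in such a channel makes the gap between idealized and constrained communication directly measurable during training.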
Related papers
- IMAGINE: Intelligent Multi-Agent Godot-based Indoor Networked Exploration [0.0]
This paper implements Multi-Agent Reinforcement Learning (MARL) to address challenges in a 2D indoor environment. Policy training aims to achieve emergent collaborative behaviours and decision-making under uncertainty.
arXiv Detail & Related papers (2026-02-02T22:08:41Z) - Bandwidth-Efficient Multi-Agent Communication through Information Bottleneck and Vector Quantization [2.5782420501870296]
We present a framework that combines information bottleneck theory with vector quantization to enable selective, bandwidth-efficient communication in multi-agent environments. Our approach learns to compress and discretize communication messages while preserving task-critical information through principled information-theoretic optimization.
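The vector-quantization side of this idea can be sketched in a few lines: the sender transmits only a codebook index, so each message costs log2(codebook size) bits. This is a hedged toy version; the paper's codebook is learned jointly with an information-bottleneck objective, which is omitted here.

```python
import math

def quantize_message(message, codebook):
    """Map a real-valued message to the index of its nearest codebook
    vector, so only ceil(log2(len(codebook))) bits go on the wire."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    index = min(range(len(codebook)), key=lambda i: sq_dist(message, codebook[i]))
    bits = math.ceil(math.log2(len(codebook)))
    return index, bits

def dequantize(index, codebook):
    """The receiver reconstructs the message from the shared codebook."""
    return codebook[index]
```

With a 4-entry codebook, every message compresses to 2 bits regardless of the original vector's dimensionality.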
arXiv Detail & Related papers (2026-02-02T12:32:28Z) - Communication-Efficient Multi-Modal Edge Inference via Uncertainty-Aware Distributed Learning [60.650628083185616]
We propose a three-stage communication-aware distributed learning framework to improve training and inference efficiency. In Stage I, devices perform local multi-modal self-supervised learning to obtain shared and modality-specific encoders without device-server exchange. In Stage II, distributed fine-tuning with centralized evidential fusion calibrates per-modality uncertainty and reliably aggregates features distorted by noise or channel fading. In Stage III, an uncertainty-guided feedback mechanism selectively requests additional features for uncertain samples, optimizing the communication-accuracy tradeoff in the distributed setting.
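The uncertainty-guided feedback step can be sketched as a selection rule: spend a limited transmission budget on the most uncertain samples. This is an assumption-laden toy; `uncertainties` stands in for the calibrated per-sample scores from evidential fusion, and `budget` and `threshold` are hypothetical knobs, not the paper's parameters.

```python
def select_feedback_requests(uncertainties, budget, threshold=0.5):
    """Pick which samples get an extra, more expensive feature request."""
    # Consider only samples whose uncertainty exceeds the threshold,
    # then spend the budget on the most uncertain ones first.
    candidates = [i for i, u in enumerate(uncertainties) if u > threshold]
    candidates.sort(key=lambda i: uncertainties[i], reverse=True)
    return sorted(candidates[:budget])
```

Confident samples never trigger a second transmission, which is where the communication savings come from.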
arXiv Detail & Related papers (2026-01-21T12:38:02Z) - Communication Methods in Multi-Agent Reinforcement Learning [0.0]
This paper provides an overview of communication techniques in multi-agent reinforcement learning. Through an in-depth analysis of 29 publications on this topic, the strengths and weaknesses of explicit, implicit, attention-based, graph-based, and hierarchical/role-based communication are evaluated.
arXiv Detail & Related papers (2026-01-19T09:39:00Z) - Multi-Agent Reinforcement Learning with Communication-Constrained Priors [22.124940712335434]
Communication is an effective means of improving cooperative policy learning in multi-agent systems. Existing multi-agent reinforcement learning with communication struggles to scale to complex and dynamic real-world environments. We introduce a communication-constrained multi-agent reinforcement learning framework that quantifies the impact of communication messages as part of the global reward.
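Folding communication cost into the reward can be illustrated with a minimal shaping function. This is a hand-tuned sketch only; `cost_per_message` is a hypothetical coefficient, whereas the paper treats the communication constraint as a learned prior.

```python
def shaped_reward(task_reward, messages_sent, cost_per_message=0.01):
    """Penalize the global reward by a fixed cost per message sent."""
    return task_reward - cost_per_message * messages_sent

def should_send(message_value, cost_per_message=0.01):
    """Gate a message: transmit only when its estimated value to
    teammates exceeds its communication cost."""
    return message_value > cost_per_message
```

Under this shaping, agents that learn to stay silent when a message adds little value earn strictly higher return.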
arXiv Detail & Related papers (2025-12-03T07:35:07Z) - Learning to Interact in World Latent for Team Coordination [53.51290193631586]
This work presents a novel representation learning framework, interactive world latent (IWoL), to facilitate team coordination in multi-agent reinforcement learning (MARL). Our key insight is to construct a learnable representation space that jointly captures inter-agent relations and task-specific world information by directly modeling communication protocols. Our representation can be used not only as an implicit latent for each agent, but also as an explicit message for communication.
arXiv Detail & Related papers (2025-09-29T22:13:39Z) - Multi-Agent Reinforcement Learning in Intelligent Transportation Systems: A Comprehensive Survey [1.8899300124593648]
This paper presents a survey of Multi-Agent Reinforcement Learning (MARL) applications in ITS. MARL offers a promising paradigm for addressing these challenges by enabling distributed agents to jointly learn optimal strategies. We introduce a structured taxonomy that categorizes MARL approaches according to coordination models and learning algorithms, spanning value-based, policy-based, actor-critic, and communication-enhanced frameworks.
arXiv Detail & Related papers (2025-08-27T23:04:34Z) - Multi-Modal Self-Supervised Semantic Communication [52.76990720898666]
We propose a multi-modal semantic communication system that leverages multi-modal self-supervised learning to enhance task-agnostic feature extraction. The proposed approach effectively captures both modality-invariant and modality-specific features while minimizing training-related communication overhead. The findings underscore the advantages of multi-modal self-supervised learning in semantic communication, paving the way for more efficient and scalable edge inference systems.
arXiv Detail & Related papers (2025-03-18T06:13:02Z) - Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
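The HARQ-style control flow amounts to a retransmission loop gated by an error check. The sketch below is a hedged stand-in: `encode`, `channel`, and `semantic_error` are caller-supplied placeholders for the paper's semantic encoder, fading channel, and learned error detector.

```python
def harq_transmit(encode, channel, semantic_error, max_retries=3):
    """Retransmit until the receiver's semantic check passes,
    or give up after max_retries attempts."""
    payload = encode()
    for attempt in range(1, max_retries + 1):
        received = channel(payload)
        if not semantic_error(received):
            return received, attempt  # semantic check passed
    return None, max_retries  # all attempts failed
```

The semantic twist relative to classical HARQ is that the retransmission trigger is a task-level error detector rather than a bit-level CRC.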
arXiv Detail & Related papers (2024-08-29T08:53:26Z) - Fully Independent Communication in Multi-Agent Reinforcement Learning [4.470370168359807]
Multi-Agent Reinforcement Learning (MARL) comprises a broad area of research within the field of multi-agent systems.
We investigate how independent learners in MARL that do not share parameters can communicate.
Our results show that, despite the challenges, independent agents can still learn communication strategies following our method.
arXiv Detail & Related papers (2024-01-26T18:42:01Z) - Collaborative Information Dissemination with Graph-based Multi-Agent Reinforcement Learning [2.9904113489777826]
This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach for efficient information dissemination.
We propose a Partially Observable Stochastic Game (POSG) for information dissemination, empowering each agent to decide on message forwarding independently.
Our experimental results show that our trained policies outperform existing methods.
arXiv Detail & Related papers (2023-08-25T21:30:16Z) - Multi-Agent Adversarial Attacks for Multi-Channel Communications [24.576538640840976]
We propose a multi-agent adversary system (MAAS) for modeling and analyzing adversaries in a wireless communication scenario.
By modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) and their respective allocated power(s) without any prior knowledge of the sender strategy.
arXiv Detail & Related papers (2022-01-22T23:57:00Z) - Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Scalable Multi-Agent Reinforcement Learning for Residential Load Scheduling under Data Governance [5.37556626581816]
Multi-agent reinforcement learning (MARL) has made remarkable advances in solving cooperative residential load scheduling problems. Centralized training, the most common paradigm for MARL, limits large-scale deployment in communication-constrained cloud-edge environments. Our proposed approach is based on actor-critic methods, where the global critic is a learned function of individual critics computed solely from local observations of households.
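The global-critic-over-local-critics structure can be sketched as an aggregation of per-household values. This is an illustrative assumption: the paper learns this combination, whereas here it is a fixed weighted sum, with `weights` standing in for the learned parameters.

```python
def global_critic(local_values, weights=None):
    """Combine per-household critic values into one global value.
    Defaults to a uniform average when no weights are given."""
    if weights is None:
        weights = [1.0 / len(local_values)] * len(local_values)
    return sum(w * v for w, v in zip(weights, local_values))
```

Because each local critic depends only on its household's own observations, only scalar values (not raw data) cross the cloud-edge boundary, which is what makes the scheme communication-friendly.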
arXiv Detail & Related papers (2021-10-06T14:05:26Z) - Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.