DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement
Learning
- URL: http://arxiv.org/abs/2212.01619v1
- Date: Sat, 3 Dec 2022 14:20:59 GMT
- Title: DACOM: Learning Delay-Aware Communication for Multi-Agent Reinforcement
Learning
- Authors: Tingting Yuan, Hwei-Ming Chung, Jie Yuan, Xiaoming Fu
- Abstract summary: We show that ignoring communication delays has detrimental effects on collaborations.
We design a delay-aware multi-agent communication model (DACOM) to adapt communication to delays.
Our experiments reveal that DACOM has a non-negligible performance improvement over other mechanisms.
- Score: 19.36041216505116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication is supposed to improve multi-agent collaboration and overall
performance in cooperative multi-agent reinforcement learning (MARL). However,
such improvements are often limited in practice, since most existing
communication schemes ignore communication overheads (e.g., communication
delays). In this paper, we demonstrate that ignoring communication delays has
detrimental effects on collaborations, especially in delay-sensitive tasks such
as autonomous driving. To mitigate this impact, we design a delay-aware
multi-agent communication model (DACOM) to adapt communication to delays.
Specifically, DACOM introduces a component, TimeNet, that is responsible for
adjusting the waiting time of an agent to receive messages from other agents
such that the uncertainty associated with delay can be addressed. Our
experiments reveal that DACOM has a non-negligible performance improvement over
other mechanisms by making a better trade-off between the benefits of
communication and the costs of waiting for messages.
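The trade-off the abstract describes, waiting longer to receive more messages versus the cost of delaying action, can be illustrated with a toy sketch. This is hypothetical illustration code, not the paper's implementation; the function names, the linear utility, and the grid search standing in for TimeNet's learned waiting time are all assumptions:

```python
import random

def collect_messages(delays, wait_time):
    """Return the messages (here, just their delays) that arrive within the chosen waiting time."""
    return [d for d in delays if d <= wait_time]

def utility(delays, wait_time, benefit_per_msg=1.0, cost_per_unit_wait=0.5):
    """Toy trade-off: benefit of received messages minus the cost of waiting."""
    received = collect_messages(delays, wait_time)
    return benefit_per_msg * len(received) - cost_per_unit_wait * wait_time

random.seed(0)
# Simulated per-message communication delays from five other agents.
delays = [random.uniform(0.0, 2.0) for _ in range(5)]

# A TimeNet-like component would learn the waiting time from state;
# here we simply grid-search it over [0, 2] in steps of 0.1.
best_wait = max((w / 10 for w in range(21)), key=lambda w: utility(delays, w))
print(f"best waiting time: {best_wait:.1f}")
```

In DACOM this choice is made adaptively per state rather than by search, but the sketch shows why a fixed policy (always wait, or never wait) is suboptimal under uncertain delays.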
Related papers
- Cooperation Breakdown in LLM Agents Under Communication Delays [3.619444603816032]
We propose the FLCOA framework to conceptualize how cooperation and coordination emerge in groups of autonomous agents.
To examine the effect of communication delay, we introduce a Continuous Prisoner's Dilemma with Communication Delay.
We find that excessive delay reduces cycles of exploitation, yielding a U-shaped relationship between delay magnitude and mutual cooperation.
arXiv Detail & Related papers (2026-02-12T09:31:47Z) - A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures [59.43633341497526]
Large-Language-Model-driven AI agents have exhibited unprecedented intelligence and adaptability.
Agent communication is regarded as a foundational pillar of the future AI ecosystem.
This paper presents a comprehensive survey of agent communication security.
arXiv Detail & Related papers (2025-06-24T14:44:28Z) - CoDe: Communication Delay-Tolerant Multi-Agent Collaboration via Dual Alignment of Intent and Timeliness [21.627120541083553]
This paper proposes a novel framework, Communication Delay-tolerant Multi-Agent Collaboration (CoDe).
At first, CoDe learns an intent representation as messages through future action inference.
Then, CoDe devises a dual alignment mechanism of intent and timeliness to strengthen the fusion process of asynchronous messages.
arXiv Detail & Related papers (2025-01-09T12:57:41Z) - AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent
Reinforcement Learning [4.884877440051105]
We propose a novel communication protocol called Adaptively Controlled Two-Hop Communication (AC2C).
AC2C employs an adaptive two-hop communication strategy to enable long-range information exchange among agents to boost performance.
We evaluate AC2C on three cooperative multi-agent tasks, and the experimental results show that it outperforms relevant baselines with lower communication costs.
arXiv Detail & Related papers (2023-02-24T09:00:34Z) - Scalable Communication for Multi-Agent Reinforcement Learning via
Transformer-Based Email Mechanism [9.607941773452925]
Communication can substantially improve cooperation in multi-agent reinforcement learning (MARL).
We propose a novel framework Transformer-based Email Mechanism (TEM) to tackle the scalability problem of MARL communication for partially-observed tasks.
arXiv Detail & Related papers (2023-01-05T05:34:30Z) - Over-communicate no more: Situated RL agents learn concise communication
protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z) - Coordinating Policies Among Multiple Agents via an Intelligent
Communication Channel [81.39444892747512]
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z) - Communication-Efficient Split Learning Based on Analog Communication and
Over the Air Aggregation [48.150466900765316]
Split-learning (SL) has recently gained popularity due to its inherent privacy-preserving capabilities and ability to enable collaborative inference for devices with limited computational power.
Standard SL algorithms assume an ideal underlying digital communication system and ignore the problem of scarce communication bandwidth.
We propose a novel SL framework to solve the remote inference problem that introduces an additional layer at the agent side and constrains the choices of the weights and the biases to ensure over the air aggregation.
arXiv Detail & Related papers (2021-06-02T07:49:41Z) - Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z) - Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z) - Succinct and Robust Multi-Agent Communication With Temporal Message
Control [17.55163940659976]
Existing communication schemes require agents to exchange an excessive number of messages at run-time.
We present Temporal Message Control (TMC), a simple yet effective approach for achieving succinct and robust communication.
arXiv Detail & Related papers (2020-10-27T15:55:08Z) - Learning Individually Inferred Communication for Multi-Agent Cooperation [37.56115000150748]
We propose Individually Inferred Communication (I2C) to enable agents to learn a prior for agent-agent communication.
The prior knowledge is learned via causal inference and realized by a feed-forward neural network.
I2C can not only reduce communication overhead but also improve the performance in a variety of multi-agent cooperative scenarios.
arXiv Detail & Related papers (2020-06-11T14:07:57Z) - Delay-Aware Multi-Agent Reinforcement Learning for Cooperative and
Competitive Environments [23.301322095357808]
Action and observation delays are prevalent in real-world cyber-physical systems.
This paper proposes a novel framework to deal with delays as well as the non-stationary training issue of multi-agent tasks.
Experiments are conducted in multi-agent particle environments including cooperative communication, cooperative navigation, and competitive experiments.
arXiv Detail & Related papers (2020-05-11T21:21:50Z) - On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.