Multi-Agent Reinforcement Learning Based on Representational
Communication for Large-Scale Traffic Signal Control
- URL: http://arxiv.org/abs/2310.02435v1
- Date: Tue, 3 Oct 2023 21:06:51 GMT
- Title: Multi-Agent Reinforcement Learning Based on Representational
Communication for Large-Scale Traffic Signal Control
- Authors: Rohit Bokade, Xiaoning Jin, Christopher Amato
- Abstract summary: Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom".
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic signal control (TSC) is a challenging problem within intelligent
transportation systems and has been tackled using multi-agent reinforcement
learning (MARL). While centralized approaches are often infeasible for
large-scale TSC problems, decentralized approaches provide scalability but
introduce new challenges, such as partial observability. Communication plays a
critical role in decentralized MARL, as agents must learn to exchange
information using messages to better understand the system and achieve
effective coordination. Deep MARL has been used to enable inter-agent
communication by learning communication protocols in a differentiable manner.
However, many deep MARL communication frameworks proposed for TSC allow agents
to communicate with all other agents at all times, which can add to the
existing noise in the system and degrade overall performance. In this study, we
propose a communication-based MARL framework for large-scale TSC. Our framework
allows each agent to learn a communication policy that dictates "which" part of
the message is sent "to whom". In essence, our framework enables agents to
selectively choose the recipients of their messages and exchange variable
length messages with them. This results in a decentralized and flexible
communication mechanism in which agents can effectively use the communication
channel only when necessary. We designed two networks, a synthetic $4 \times 4$
grid network and a real-world network based on the Pasubio neighborhood in
Bologna. Our framework achieved the lowest network congestion compared to
related methods, with agents utilizing $\sim 47-65 \%$ of the communication
channel. Ablation studies further demonstrated the effectiveness of the
communication policies learned within our framework.
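The "which part, to whom" mechanism can be pictured as a per-recipient, per-segment gating policy over each agent's message. The sketch below is our own minimal illustration, not the paper's implementation: the names (`communication_round`, `gate_logits`) and the hard-thresholding rule are assumptions, but the structure mirrors the idea of selectively sending variable-length messages and measuring channel utilization.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 4      # e.g. intersections in a small sub-grid (hypothetical size)
MSG_SEGMENTS = 3  # each message is split into fixed-size segments
SEG_DIM = 2       # features per segment

def communication_round(segments, gate_logits):
    """One round of selective, variable-length messaging.

    segments:    (N_AGENTS, MSG_SEGMENTS, SEG_DIM) local encodings.
    gate_logits: (N_AGENTS, N_AGENTS, MSG_SEGMENTS) scores from a learned
                 gating policy: should sender i transmit segment k to j?
    Returns inbox[j], a list of (sender, segment_index, segment) tuples,
    and the fraction of the available channel actually used.
    """
    gates = gate_logits > 0.0  # hard gate: transmit iff the logit is positive
    inbox = {j: [] for j in range(N_AGENTS)}
    used = 0
    for i in range(N_AGENTS):
        for j in range(N_AGENTS):
            if i == j:
                continue  # agents do not message themselves
            for k in range(MSG_SEGMENTS):
                if gates[i, j, k]:
                    inbox[j].append((i, k, segments[i, k]))
                    used += 1
    capacity = N_AGENTS * (N_AGENTS - 1) * MSG_SEGMENTS
    return inbox, used / capacity

segs = rng.normal(size=(N_AGENTS, MSG_SEGMENTS, SEG_DIM))
logits = rng.normal(size=(N_AGENTS, N_AGENTS, MSG_SEGMENTS))
inbox, utilization = communication_round(segs, logits)
```

In the paper the gates come from a learned policy rather than random logits; the utilization it reports ($\sim 47$-$65\%$) corresponds to the `used / capacity` quantity above.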
Related papers
- Context-aware Communication for Multi-agent Reinforcement Learning
We develop a context-aware communication scheme, CACOM, for multi-agent reinforcement learning (MARL).
In the first stage, agents exchange coarse representations in a broadcast fashion, providing context for the second stage.
Following this, agents utilize attention mechanisms in the second stage to selectively generate messages personalized for the receivers.
To evaluate the effectiveness of CACOM, we integrate it with both actor-critic and value-based MARL algorithms.
arXiv Detail & Related papers (2023-12-25T03:33:08Z)
- Large Language Model Enhanced Multi-Agent Systems for 6G Communications
We propose a multi-agent system with customized communication knowledge and tools for solving communication-related tasks using natural language.
We validate the effectiveness of the proposed multi-agent system by designing a semantic communication system.
arXiv Detail & Related papers (2023-12-13T02:35:57Z)
- AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement Learning
We propose a novel communication protocol called Adaptively Controlled Two-Hop Communication (AC2C).
AC2C employs an adaptive two-hop communication strategy to enable long-range information exchange among agents to boost performance.
We evaluate AC2C on three cooperative multi-agent tasks, and the experimental results show that it outperforms relevant baselines with lower communication costs.
arXiv Detail & Related papers (2023-02-24T09:00:34Z)
- Scalable Communication for Multi-Agent Reinforcement Learning via Transformer-Based Email Mechanism
Communication can substantially improve cooperation in multi-agent reinforcement learning (MARL).
We propose a novel framework Transformer-based Email Mechanism (TEM) to tackle the scalability problem of MARL communication for partially-observed tasks.
arXiv Detail & Related papers (2023-01-05T05:34:30Z)
- Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C<\frac{N-1}{2}$ agents to a victim agent.
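The bound $C<\frac{N-1}{2}$ is the classic benign-majority condition: if fewer than half of the incoming messages are corrupted, a robust aggregator cannot be pulled outside the range of the benign values. The element-wise median below is our own illustration of this principle, not necessarily the certified defense used in that paper:

```python
import numpy as np

def robust_aggregate(messages):
    """Element-wise median of incoming messages.

    With N-1 incoming messages of which C are adversarial, the median of
    each coordinate stays within the range spanned by the benign messages
    whenever C < (N-1)/2, i.e. benign messages form a strict majority.
    """
    return np.median(np.stack(messages), axis=0)

# N = 5 agents: the victim receives 4 messages, of which C = 1 < (5-1)/2
# is adversarial and may be arbitrary.
benign = [np.array([1.0, 2.0]), np.array([1.2, 2.1]), np.array([0.9, 1.9])]
received = benign + [np.array([100.0, -100.0])]  # attacker's message
agg = robust_aggregate(received)  # stays within the benign range
```

Here the aggregate is [1.1, 1.95], inside the intervals [0.9, 1.2] and [1.9, 2.1] spanned by the benign messages, despite the attacker's arbitrarily large message.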
arXiv Detail & Related papers (2022-06-21T07:32:18Z)
- Coordinating Policies Among Multiple Agents via an Intelligent Communication Channel
In Multi-Agent Reinforcement Learning (MARL), specialized channels are often introduced that allow agents to communicate directly with one another.
We propose an alternative approach whereby agents communicate through an intelligent facilitator that learns to sift through and interpret signals provided by all agents to improve the agents' collective performance.
arXiv Detail & Related papers (2022-05-21T14:11:33Z)
- FCMNet: Full Communication Memory Net for Team-Level Cooperation in Multi-Agent Systems
We introduce FCMNet, a reinforcement-learning-based approach that allows agents to simultaneously learn an effective multi-hop communication protocol.
Using a simple multi-hop topology, we endow each agent with the ability to receive information sequentially encoded by every other agent at each time step.
FCMNet outperforms state-of-the-art communication-based reinforcement learning methods in all StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2022-01-28T09:12:01Z)
- Multi-agent Communication with Graph Information Bottleneck under Limited Bandwidth (a position paper)
In many real-world scenarios, communication can be expensive and the bandwidth of the multi-agent system is subject to certain constraints.
Redundant messages that occupy the communication resources can block the transmission of informative messages and thus jeopardize performance.
We propose a novel multi-agent communication module, CommGIB, which effectively compresses the structure information and node information in the communication graph to deal with bandwidth-constrained settings.
arXiv Detail & Related papers (2021-12-20T07:53:44Z)
- Effective Communications: A Joint Learning and Communication Framework for Multi-Agent Reinforcement Learning over Noisy Channels
We propose a novel formulation of the "effectiveness problem" in communications.
We consider multiple agents communicating over a noisy channel in order to achieve better coordination and cooperation.
We show via examples that the joint policy learned using the proposed framework is superior to that where the communication is considered separately.
arXiv Detail & Related papers (2021-01-02T10:43:41Z)
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed Learning Structured Communication (LSC), which uses a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.