Multi-agent Reinforcement Learning for Networked System Control
- URL: http://arxiv.org/abs/2004.01339v2
- Date: Fri, 24 Apr 2020 01:54:46 GMT
- Title: Multi-agent Reinforcement Learning for Networked System Control
- Authors: Tianshu Chu, Sandeep Chinchali, Sachin Katti
- Abstract summary: This paper considers multi-agent reinforcement learning (MARL) in networked system control.
We propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL.
NeurComm outperforms existing communication protocols in both learning efficiency and control performance.
- Score: 6.89105475513757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers multi-agent reinforcement learning (MARL) in networked
system control. Specifically, each agent learns a decentralized control policy
based on local observations and messages from connected neighbors. We formulate
such a networked MARL (NMARL) problem as a spatiotemporal Markov decision
process and introduce a spatial discount factor to stabilize the training of
each local agent. Further, we propose a new differentiable communication
protocol, called NeurComm, to reduce information loss and non-stationarity in
NMARL. Experiments in realistic NMARL scenarios of adaptive traffic signal
control and cooperative adaptive cruise control show that an appropriate
spatial discount factor effectively enhances the learning curves of
non-communicative MARL algorithms, while NeurComm outperforms existing
communication protocols in both learning efficiency and control performance.
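The spatial discount factor described in the abstract can be illustrated with a minimal sketch (hypothetical helper names; this is not the authors' implementation): each agent's training reward weights a neighbor's reward by alpha raised to the graph distance, so remote agents perturb local training less.

```python
# Minimal sketch of a spatial discount factor for networked MARL.
# Hypothetical helper, not the paper's code: agent i's learning
# signal is the sum of all agents' rewards, each weighted by
# alpha ** distance(i, j), damping the influence of distant agents.

def spatially_discounted_rewards(rewards, distances, alpha=0.9):
    """rewards: {agent: reward}; distances: {agent: {agent: hops}}."""
    discounted = {}
    for i in rewards:
        discounted[i] = sum(
            (alpha ** distances[i][j]) * r for j, r in rewards.items()
        )
    return discounted

rewards = {"a": 1.0, "b": 0.5, "c": -1.0}
distances = {
    "a": {"a": 0, "b": 1, "c": 2},
    "b": {"a": 1, "b": 0, "c": 1},
    "c": {"a": 2, "b": 1, "c": 0},
}
print(spatially_discounted_rewards(rewards, distances, alpha=0.5))
```

With alpha = 0 each agent trains on its own reward only; with alpha = 1 the objective becomes the fully cooperative global return.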
Related papers
- Learning Decentralized Traffic Signal Controllers with Multi-Agent Graph Reinforcement Learning [42.175067773481416]
We design a new decentralized control architecture with improved environmental observability to capture the spatial-temporal correlation.
Specifically, we first develop a topology-aware information aggregation strategy to extract correlation-related information from unstructured data gathered in the road network.
A diffusion convolution module is developed to form a new MARL algorithm that endows agents with graph-learning capabilities.
arXiv Detail & Related papers (2023-11-07T06:43:15Z)
- Combat Urban Congestion via Collaboration: Heterogeneous GNN-based MARL for Coordinated Platooning and Traffic Signal Control [16.762073265205565]
This paper proposes an innovative solution to tackle these challenges based on heterogeneous graph multi-agent reinforcement learning and traffic theories.
Our approach involves: 1) designing platoon and signal control as distinct reinforcement learning agents with their own set of observations, actions, and reward functions to optimize traffic flow; 2) designing coordination by incorporating graph neural networks within multi-agent reinforcement learning to facilitate seamless information exchange among agents on a regional scale.
arXiv Detail & Related papers (2023-10-17T02:46:04Z)
- Multi-Agent Reinforcement Learning Based on Representational Communication for Large-Scale Traffic Signal Control [13.844458247041711]
Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom."
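The "which part to whom" idea above can be sketched with a gating mechanism (illustrative and hypothetical, not the paper's architecture): a sender keeps a per-receiver mask over message components, and only the selected entries are transmitted.

```python
# Hedged sketch of per-receiver message gating: the sender holds one
# mask per receiver selecting which message entries to send. Masks are
# fixed here for illustration; in a learned protocol they would be the
# output of a trained communication policy.

def gate_message(message, masks):
    """message: list of floats; masks: {receiver: list of 0/1 gates}.
    Returns {receiver: gated message} with unselected entries zeroed."""
    return {
        recv: [m * g for m, g in zip(message, gates)]
        for recv, gates in masks.items()
    }

msg = [0.2, 1.3, 0.7]
masks = {"agent_b": [1, 0, 1], "agent_c": [0, 1, 0]}
print(gate_message(msg, masks))
```

Here agent_b would receive only the first and third components, and agent_c only the second, so bandwidth per receiver scales with the mask's sparsity.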
arXiv Detail & Related papers (2023-10-03T21:06:51Z)
- Perimeter Control with Heterogeneous Metering Rates for Cordon Signals: A Physics-Regularized Multi-Agent Reinforcement Learning Approach [12.86346901414289]
Perimeter Control (PC) strategies have been proposed to address urban road network control in oversaturated situations.
This paper leverages a Multi-Agent Reinforcement Learning (MARL)-based traffic signal control framework to decompose this PC problem.
A physics regularization approach for the MARL framework is proposed to ensure the distributed cordon signal controllers are aware of the global network state.
arXiv Detail & Related papers (2023-08-24T13:51:16Z)
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z)
- Depthwise Convolution for Multi-Agent Communication with Enhanced Mean-Field Approximation [9.854975702211165]
We propose a new method based on local communication learning to tackle the multi-agent RL (MARL) challenge.
First, we design a new communication protocol that exploits the ability of depthwise convolution to efficiently extract local relations.
Second, we introduce the mean-field approximation into our method to reduce the scale of agent interactions.
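The mean-field approximation mentioned above can be sketched in a few lines (illustrative only, not the paper's method): instead of conditioning on every neighbor's action individually, an agent conditions on the average of its neighbors' one-hot actions, collapsing many pairwise interactions into one vector.

```python
# Hedged sketch of the mean-field approximation in MARL: an agent
# replaces the set of neighbor actions with their empirical mean,
# so the interaction term no longer grows with the neighborhood size.

def mean_field_action(neighbor_actions, n_actions):
    """neighbor_actions: list of action indices; returns the averaged
    one-hot encoding (a probability vector over actions)."""
    mean = [0.0] * n_actions
    for a in neighbor_actions:
        mean[a] += 1.0 / len(neighbor_actions)
    return mean

print(mean_field_action([0, 2, 2, 1], n_actions=3))
```

A critic taking this fixed-size vector as input keeps the same shape no matter how many neighbors an agent has, which is what makes the approach scale to large agent populations.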
arXiv Detail & Related papers (2022-03-06T07:42:43Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), a paradigm of collaborative learning, has attracted increasing research attention.
It is therefore of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM [52.12831959365598]
We propose a communication-efficient decentralized machine learning framework that solves a consensus optimization problem defined over a network of interconnected workers.
The proposed algorithm, Censored and Quantized Generalized GADMM (CQ-GGADMM), leverages the worker grouping and decentralized learning ideas of the Group Alternating Direction Method of Multipliers (GADMM).
Numerical simulations corroborate that CQ-GGADMM exhibits higher communication efficiency in terms of the number of communication rounds and transmit energy consumption without compromising the accuracy and convergence speed.
arXiv Detail & Related papers (2020-09-14T14:18:19Z)
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
- Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework termed as Learning Structured Communication (LSC) by using a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.