Scalable Multi-Agent Reinforcement Learning for Networked Systems with
Average Reward
- URL: http://arxiv.org/abs/2006.06626v1
- Date: Thu, 11 Jun 2020 17:23:17 GMT
- Title: Scalable Multi-Agent Reinforcement Learning for Networked Systems with
Average Reward
- Authors: Guannan Qu, Yiheng Lin, Adam Wierman, Na Li
- Abstract summary: It has long been recognized that multi-agent reinforcement learning (MARL) faces significant scalability issues.
In this paper, we identify a rich class of networked MARL problems where the model exhibits a local dependence structure that allows it to be solved in a scalable manner.
- Score: 17.925681736096482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has long been recognized that multi-agent reinforcement learning (MARL)
faces significant scalability issues because the state and action spaces are
exponentially large in the number of agents. In this
paper, we identify a rich class of networked MARL problems where the model
exhibits a local dependence structure that allows it to be solved in a scalable
manner. Specifically, we propose a Scalable Actor-Critic (SAC) method that can
learn a near optimal localized policy for optimizing the average reward with
complexity scaling with the state-action space size of local neighborhoods, as
opposed to the entire network. Our result centers around identifying and
exploiting an exponential decay property that ensures the effect of agents on
each other decays exponentially fast in their graph distance.
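As a rough illustration of the truncation idea behind this scalability claim, the sketch below has each agent maintain an average-reward Q-estimate indexed only by the state-action profile of its kappa-hop neighborhood, so per-agent storage grows with the neighborhood rather than with the whole network. This is a minimal sketch, not the authors' Scalable Actor-Critic implementation: the ring topology, binary local state and action spaces, random local rewards and transitions, and the fixed uniform behavior policy are all assumptions made for the example, and the actor (policy update) half of the method is omitted.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_agents, kappa = 6, 1            # ring of 6 agents, 1-hop truncation (assumed)
S, A = 2, 2                       # binary local state / action spaces (assumed)
alpha, T = 0.05, 50000            # TD step size and number of environment steps

def nbhd(i):
    """kappa-hop neighborhood of agent i on the ring, including i itself."""
    return sorted({(i + d) % n_agents for d in range(-kappa, kappa + 1)})

def key(i, state, action):
    """(s_{N_i}, a_{N_i}) profile that indexes agent i's truncated Q-table."""
    return tuple(int(state[j]) for j in nbhd(i)) + tuple(int(action[j]) for j in nbhd(i))

# Random local rewards r_i(s_{N_i}, a_{N_i}) and local transition probabilities
# P(s_i' = 1 | s_{N_i}, a_i), standing in for a real networked system.
reward = [{k: rng.random()
           for k in itertools.product(range(S), repeat=2 * len(nbhd(i)))}
          for i in range(n_agents)]
trans = [{k: rng.random()
          for k in itertools.product(range(S), repeat=len(nbhd(i)) + 1)}
         for i in range(n_agents)]

Q = [dict() for _ in range(n_agents)]   # truncated Q-tables, filled lazily
mu = np.zeros(n_agents)                 # per-agent average-reward estimates

state = rng.integers(S, size=n_agents)
action = rng.integers(A, size=n_agents)          # fixed uniform behavior policy
for _ in range(T):
    r = [reward[i][key(i, state, action)] for i in range(n_agents)]
    nxt = np.array([int(rng.random() < trans[i][tuple(int(state[j]) for j in nbhd(i)) + (int(action[i]),)])
                    for i in range(n_agents)])
    nxt_action = rng.integers(A, size=n_agents)
    for i in range(n_agents):
        z, z_next = key(i, state, action), key(i, nxt, nxt_action)
        q, q_next = Q[i].get(z, 0.0), Q[i].get(z_next, 0.0)
        # Differential (average-reward) TD(0) update on the truncated table only.
        Q[i][z] = q + alpha * (r[i] - mu[i] + q_next - q)
        mu[i] += alpha * (r[i] - mu[i])
    state, action = nxt, nxt_action

print("truncated table sizes:", [len(Q[i]) for i in range(n_agents)])   # at most (S*A)^3 = 64 each
print("average-reward estimates:", np.round(mu, 3))
```

For comparison, an exact joint Q-table over all six agents would hold (S*A)^6 = 4096 entries, while each truncated table above holds at most (S*A)^3 = 64; the exponential decay property described in the abstract is what bounds the error introduced by this truncation, with the bound shrinking exponentially in kappa.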
Related papers
- Scalable spectral representations for network multiagent control [53.631272539560435]
Network Markov Decision Processes (MDPs), a popular model for multi-agent control, pose a significant challenge to efficient learning.
We first derive scalable spectral local representations for network MDPs, which induce a network linear subspace for the local $Q$-function of each agent.
We design a scalable algorithmic framework for continuous state-action network MDPs, and provide end-to-end guarantees for the convergence of our algorithm.
arXiv Detail & Related papers (2024-10-22T17:45:45Z)
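As a rough illustration of what a linear subspace for a local Q-function could look like in the entry above, the sketch below fits one agent's local Q-values inside a small subspace spanned by the lowest Laplacian eigenvectors of its local state graph (proto-value-function-style features). This basis is a stand-in chosen for the example, not the spectral representation derived in the paper; the graph, the target Q-values, and the subspace dimension are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_local_states, n_actions, k = 30, 3, 6      # toy local problem, 6-dim subspace

# Illustrative local state graph: a ring plus a few random extra edges.
A = np.zeros((n_local_states, n_local_states))
for i in range(n_local_states):
    A[i, (i + 1) % n_local_states] = A[(i + 1) % n_local_states, i] = 1.0
extra = rng.random((n_local_states, n_local_states)) < 0.05
A = np.maximum(A, np.maximum(extra, extra.T).astype(float))
np.fill_diagonal(A, 0.0)

L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
Phi = eigvecs[:, :k]                         # spectral basis, n_local_states x k

# Stand-in local Q-values: constructed to be nearly smooth over the graph,
# in place of the sampled returns one would use in practice.
Q_target = Phi @ rng.normal(size=(k, n_actions)) \
           + 0.01 * rng.normal(size=(n_local_states, n_actions))

# Least-squares fit of each action's Q-values inside the k-dimensional subspace.
W, *_ = np.linalg.lstsq(Phi, Q_target, rcond=None)
Q_hat = Phi @ W

print("parameters for this agent:", W.size, "instead of", Q_target.size)
print("relative fit error:",
      round(np.linalg.norm(Q_hat - Q_target) / np.linalg.norm(Q_target), 4))
```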
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Scalable Multi-agent Covering Option Discovery based on Kronecker Graphs [49.71319907864573]
In this paper, we propose a multi-agent skill discovery approach that lends itself to easy decomposition.
Our key idea is to approximate the joint state space as a Kronecker graph, based on which we can directly estimate its Fiedler vector.
Considering that directly computing the Laplacian spectrum is intractable for tasks with infinite-scale state spaces, we further propose a deep learning extension of our method.
arXiv Detail & Related papers (2023-07-21T14:53:12Z)
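For intuition about the Kronecker-graph idea in the entry above, here is a brute-force toy: the joint state graph of two agents is formed explicitly as the Kronecker (tensor) product of their local state graphs, and its Fiedler vector (the Laplacian eigenvector underlying covering-option subgoals) is read off directly. The two ring graphs and their sizes are arbitrary choices for the example; the point of the paper is precisely to estimate this vector without ever building the joint graph, which this sketch does not attempt.

```python
import numpy as np

def ring_adjacency(n):
    """Adjacency matrix of an n-node ring, standing in for one agent's state graph."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

# Joint state graph of two agents, approximated as the Kronecker (tensor)
# product of their local state graphs. C5 x C4 is connected because C5
# contains an odd cycle.
A1, A2 = ring_adjacency(5), ring_adjacency(4)
A_joint = np.kron(A1, A2)

L = np.diag(A_joint.sum(axis=1)) - A_joint        # Laplacian of the joint graph
eigvals, eigvecs = np.linalg.eigh(L)
fiedler_value, fiedler_vector = eigvals[1], eigvecs[:, 1]

print("joint states:", A_joint.shape[0])                          # 5 * 4 = 20
print("algebraic connectivity:", round(float(fiedler_value), 4))
# The sign pattern of the Fiedler vector splits the joint state space into two
# weakly connected halves; covering options are built to travel between them.
print("partition sizes:",
      int((fiedler_vector > 0).sum()), int((fiedler_vector <= 0).sum()))
```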
- SEA: A Spatially Explicit Architecture for Multi-Agent Reinforcement Learning [14.935456456463731]
We propose a spatial information extraction structure for multi-agent reinforcement learning.
Agents can effectively share neighborhood and global information through a spatial encoder-decoder structure.
arXiv Detail & Related papers (2023-04-25T03:00:09Z)
- Common Information based Approximate State Representations in Multi-Agent Reinforcement Learning [3.086462790971422]
We develop a general compression framework with approximate common and private state representations, based on which decentralized policies can be constructed.
The results shed light on designing practically useful deep-MARL network structures under the "centralized learning distributed execution" scheme.
arXiv Detail & Related papers (2021-10-25T02:32:06Z)
- Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution paradigm.
arXiv Detail & Related papers (2021-09-22T10:08:15Z)
- Sequential Hierarchical Learning with Distribution Transformation for Image Super-Resolution [83.70890515772456]
We build a sequential hierarchical learning super-resolution network (SHSR) for effective image SR.
We consider the inter-scale correlations of features, and devise a sequential multi-scale block (SMB) to progressively explore the hierarchical information.
Experiment results show SHSR achieves superior quantitative performance and visual quality to state-of-the-art methods.
arXiv Detail & Related papers (2020-07-19T01:35:53Z)
- Multi-Agent Reinforcement Learning in Stochastic Networked Systems [30.78949372661673]
We study multi-agent reinforcement learning (MARL) in a network of agents.
The objective is to find localized policies that maximize the (discounted) global reward.
arXiv Detail & Related papers (2020-06-11T16:08:16Z)
- Crowd Counting via Hierarchical Scale Recalibration Network [61.09833400167511]
We propose a novel Hierarchical Scale Recalibration Network (HSRNet) to tackle the task of crowd counting.
HSRNet models rich contextual dependencies and recalibrates multiple scale-associated information.
Our approach can selectively ignore various sources of noise and automatically focus on appropriate crowd scales.
arXiv Detail & Related papers (2020-03-07T10:06:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.