A game-theoretic analysis of networked system control for common-pool
resource management using multi-agent reinforcement learning
- URL: http://arxiv.org/abs/2010.07777v1
- Date: Thu, 15 Oct 2020 14:12:26 GMT
- Title: A game-theoretic analysis of networked system control for common-pool
resource management using multi-agent reinforcement learning
- Authors: Arnu Pretorius, Scott Cameron, Elan van Biljon, Tom Makkink, Shahil
Mawjee, Jeremy du Plessis, Jonathan Shock, Alexandre Laterre, Karim Beguir
- Abstract summary: Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
- Score: 54.55119659523629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent reinforcement learning has recently shown great promise as an
approach to networked system control. Arguably, one of the most difficult and
important tasks for which large scale networked system control is applicable is
common-pool resource management. Crucial common-pool resources include arable
land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere,
the proper management of which bears on some of society's greatest challenges,
such as food security, inequality and climate change. Here we take inspiration
from a recent research program investigating the game-theoretic incentives of
humans in social dilemma situations such as the well-known tragedy of the
commons. However, instead of focusing on biologically evolved human-like
agents, we aim to better understand the learning and operating
behaviour of engineered networked systems comprising general-purpose
reinforcement learning agents, subject only to nonbiological constraints such
as memory, computation and communication bandwidth. Harnessing tools from
empirical game-theoretic analysis, we analyse the differences in resulting
solution concepts that stem from employing different information structures in
the design of networked multi-agent systems. These information structures
pertain to the type of information shared between agents as well as the
employed communication protocol and network topology. Our analysis contributes
new insights into the consequences associated with certain design choices and
provides an additional dimension of comparison between systems beyond
efficiency, robustness, scalability and mean control performance.
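The tragedy-of-the-commons setting analysed with empirical game-theoretic tools can be illustrated with a minimal sketch: estimate an empirical payoff matrix over a small set of policies and check every joint policy profile for unilateral deviation incentives. The "sustain"/"exploit" policies and payoff numbers below are illustrative assumptions, not values from the paper.

```python
# Hypothetical EGTA sketch: a two-agent common-pool resource dilemma.
# Payoffs stand in for empirical averages over simulated episodes.
import numpy as np

# payoffs[a1, a2] = (reward to agent 1, reward to agent 2)
# Policies: 0 = "sustain" (harvest conservatively), 1 = "exploit".
payoffs = np.array([
    [(3.0, 3.0), (1.0, 4.0)],   # sustain  vs {sustain, exploit}
    [(4.0, 1.0), (2.0, 2.0)],   # exploit  vs {sustain, exploit}
])

def pure_nash_equilibria(payoffs):
    """Return joint profiles (a1, a2) where neither agent gains by deviating."""
    n1, n2, _ = payoffs.shape
    equilibria = []
    for a1 in range(n1):
        for a2 in range(n2):
            u1, u2 = payoffs[a1, a2]
            best1 = all(payoffs[b, a2][0] <= u1 for b in range(n1))
            best2 = all(payoffs[a1, b][1] <= u2 for b in range(n2))
            if best1 and best2:
                equilibria.append((a1, a2))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)]: mutual exploitation
```

With these payoffs the only pure equilibrium is mutual exploitation, even though mutual sustaining pays both agents more, which is the dilemma structure the paper's information-sharing designs are meant to reshape.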
Related papers
- Multi-Agent Reinforcement Learning for Power Control in Wireless
Networks via Adaptive Graphs [1.1861167902268832]
Multi-agent deep reinforcement learning (MADRL) has emerged as a promising method to address a wide range of complex optimization problems like power control.
We present graphs as communication-inducing structures among distributed agents, an effective means of mitigating these challenges.
arXiv Detail & Related papers (2023-11-27T14:25:40Z)
- Anomaly Detection in Multiplex Dynamic Networks: from Blockchain Security to Brain Disease Prediction [0.0]
ANOMULY is an unsupervised edge anomaly detection framework for multiplex dynamic networks.
We show how ANOMULY could be employed as a new tool to understand abnormal brain activity that might reveal a brain disease or disorder.
arXiv Detail & Related papers (2022-11-15T18:25:40Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training via dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
- A Survey on Large-Population Systems and Scalable Multi-Agent Reinforcement Learning [18.918558716102144]
We will shed light on current approaches to tractably understanding and analyzing large-population systems.
We will survey potential areas of application for large-scale control and identify fruitful future applications of learning algorithms in practical systems.
arXiv Detail & Related papers (2022-09-08T14:58:50Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Modelling Cooperation in Network Games with Spatio-Temporal Complexity [11.665246332943058]
We study the emergence of self-organized cooperation in complex gridworld domains.
Using multi-agent deep reinforcement learning, we simulate an agent society for a variety of plausible mechanisms.
Our methods have implications for mechanism design in both human and artificial agent systems.
arXiv Detail & Related papers (2021-02-13T12:04:52Z)
- Automated Search for Resource-Efficient Branched Multi-Task Networks [81.48051635183916]
We propose a principled approach, rooted in differentiable neural architecture search, to automatically define branching structures in a multi-task neural network.
We show that our approach consistently finds high-performing branching structures within limited resource budgets.
arXiv Detail & Related papers (2020-08-24T09:49:19Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
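The learnable-connectivity idea summarized above can be sketched minimally: represent the network as a complete graph, attach a learnable scalar to each edge, and gate message passing by the sigmoid of that scalar so the effective topology is trainable by gradient descent. All names, shapes, and the mean-aggregation rule below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of differentiable connectivity over a complete graph.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 8

x = rng.normal(size=(n_nodes, dim))          # node features
theta = rng.normal(size=(n_nodes, n_nodes))  # learnable edge logits

def aggregate(x, theta):
    """One round of gated message passing.

    Edge strength sigmoid(theta[i, j]) scales the message j -> i, so
    gradients w.r.t. theta can reshape the topology during training.
    """
    gate = 1.0 / (1.0 + np.exp(-theta))      # edge magnitudes in (0, 1)
    np.fill_diagonal(gate, 0.0)              # drop self-loops
    return gate @ x / max(n_nodes - 1, 1)    # gated mean over neighbours

out = aggregate(x, theta)
print(out.shape)  # (4, 8)
```

In a real implementation `theta` would be a framework parameter tensor updated alongside the task loss; the point of the sketch is only that edge magnitudes enter the forward pass smoothly.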
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Multivariate Relations Aggregation Learning in Social Networks [39.576490107740135]
In graph learning tasks on social networks, identifying and exploiting multivariate relationship information is especially important.
Existing graph learning methods are based on the neighborhood information diffusion mechanism.
This paper proposes the multivariate relationship aggregation learning (MORE) method, which can effectively capture the multivariate relationship information in the network environment.
arXiv Detail & Related papers (2020-08-09T04:58:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.