A game-theoretic analysis of networked system control for common-pool
resource management using multi-agent reinforcement learning
- URL: http://arxiv.org/abs/2010.07777v1
- Date: Thu, 15 Oct 2020 14:12:26 GMT
- Title: A game-theoretic analysis of networked system control for common-pool
resource management using multi-agent reinforcement learning
- Authors: Arnu Pretorius, Scott Cameron, Elan van Biljon, Tom Makkink, Shahil
Mawjee, Jeremy du Plessis, Jonathan Shock, Alexandre Laterre, Karim Beguir
- Abstract summary: Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
- Score: 54.55119659523629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent reinforcement learning has recently shown great promise as an
approach to networked system control. Arguably, one of the most difficult and
important tasks for which large scale networked system control is applicable is
common-pool resource management. Crucial common-pool resources include arable
land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere,
the proper management of which relates to some of society's greatest challenges
such as food security, inequality and climate change. Here we take inspiration
from a recent research program investigating the game-theoretic incentives of
humans in social dilemma situations such as the well-known tragedy of the
commons. However, instead of focusing on biologically evolved human-like
agents, our concern is rather to better understand the learning and operating
behaviour of engineered networked systems comprising general-purpose
reinforcement learning agents, subject only to nonbiological constraints such
as memory, computation and communication bandwidth. Harnessing tools from
empirical game-theoretic analysis, we analyse the differences in resulting
solution concepts that stem from employing different information structures in
the design of networked multi-agent systems. These information structures
pertain to the type of information shared between agents as well as the
employed communication protocol and network topology. Our analysis contributes
new insights into the consequences associated with certain design choices and
provides an additional dimension of comparison between systems beyond
efficiency, robustness, scalability and mean control performance.
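As a rough illustration of the empirical game-theoretic analysis mentioned in the abstract, the sketch below runs replicator dynamics on a small, made-up payoff matrix for a symmetric two-strategy harvesting meta-game. In the paper such payoff entries would be estimated by simulating trained multi-agent reinforcement learning policies against one another under a given information structure; the strategy names, payoff numbers and step size here are purely illustrative assumptions, not results from the paper.

```python
import numpy as np

# Made-up empirical payoff matrix for a symmetric two-strategy meta-game:
# strategy 0 = "restrained harvesting", strategy 1 = "greedy harvesting".
# Entry [i, j] is the payoff to strategy i against an opponent playing j.
PAYOFFS = np.array([
    [3.0, 1.0],   # restrained vs (restrained, greedy)
    [4.0, 2.0],   # greedy     vs (restrained, greedy)
])

def replicator_step(x, payoffs, dt=0.01):
    """One Euler step of the replicator dynamics: x_i' = x_i * (f_i - f_avg)."""
    fitness = payoffs @ x            # expected payoff of each pure strategy
    avg = x @ fitness                # population-average payoff
    x = np.clip(x + dt * x * (fitness - avg), 0.0, None)
    return x / x.sum()               # stay on the probability simplex

x = np.array([0.5, 0.5])             # start from a uniform strategy mix
for _ in range(20_000):
    x = replicator_step(x, PAYOFFS)

print("long-run strategy mix (restrained, greedy):", np.round(x, 3))
# With these placeholder payoffs, greedy harvesting strictly dominates and the
# dynamics converge towards (0, 1) -- a stylised tragedy of the commons.
```

In the paper's actual analysis, payoff matrices of this kind are estimated empirically for agents trained under different information structures (shared information, communication protocol, network topology), and the induced solution concepts are then compared across designs.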
Related papers
- Adaptive Network Intervention for Complex Systems: A Hierarchical Graph Reinforcement Learning Approach [0.8287206589886879]
This paper introduces a Hierarchical Graph Reinforcement Learning framework that governs such systems through targeted interventions in the network structure.
Under low social learning, the HGRL manager preserves cooperation, forming robust core-periphery networks dominated by cooperators.
In contrast, under high social learning, defection accelerates, leading to sparser, chain-like networks.
arXiv Detail & Related papers (2024-10-30T18:59:02Z)
- Online Multi-modal Root Cause Analysis [61.94987309148539]
Root Cause Analysis (RCA) is essential for pinpointing the root causes of failures in microservice systems.
Existing online RCA methods handle only single-modal data, overlooking complex interactions in multi-modal systems.
We introduce OCEAN, a novel online multi-modal causal structure learning method for root cause localization.
arXiv Detail & Related papers (2024-10-13T21:47:36Z)
- Selective Exploration and Information Gathering in Search and Rescue Using Hierarchical Learning Guided by Natural Language Input [5.522800137785975]
We introduce a system that integrates social interaction via large language models (LLMs) with a hierarchical reinforcement learning (HRL) framework.
The proposed system is designed to translate verbal inputs from human stakeholders into actionable RL insights and adjust its search strategy.
By leveraging human-provided information through LLMs and structuring task execution through HRL, our approach significantly improves the agent's learning efficiency and decision-making process in environments characterised by long horizons and sparse rewards.
arXiv Detail & Related papers (2024-09-20T12:27:47Z)
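The summary above describes translating verbal stakeholder input into adjustments of a hierarchical agent's search strategy. As a loose, hypothetical sketch of that idea (not the authors' system), the snippet below replaces the LLM with a trivial keyword lookup that biases a prior over high-level search sectors; the sector names, boost factor and sampling scheme are all invented for illustration.

```python
import numpy as np

SECTORS = ["north-east", "north-west", "south-east", "south-west"]

def hint_to_prior(hint: str) -> np.ndarray:
    """Stand-in for the LLM step: boost any sector mentioned in the hint."""
    weights = np.ones(len(SECTORS))
    for i, sector in enumerate(SECTORS):
        if sector in hint.lower():
            weights[i] += 4.0          # assumed boost factor, purely illustrative
    return weights / weights.sum()

def sample_subgoal(prior: np.ndarray, rng=np.random.default_rng(0)) -> str:
    """High-level policy stub: sample the next sector to search from the prior."""
    return SECTORS[rng.choice(len(SECTORS), p=prior)]

prior = hint_to_prior("Witnesses last saw the hikers heading north-east.")
print("subgoal prior:", dict(zip(SECTORS, np.round(prior, 2))))
print("next search subgoal:", sample_subgoal(prior))
```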
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training via dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
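For readers unfamiliar with the dropout mechanism referenced above, here is a minimal, generic sketch of inverted dropout (not code from the paper): units are zeroed at random during training and left untouched at evaluation, which pushes the network towards redundant representations. The layer shape and dropout rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during training and
    rescale the survivors so expected activations match evaluation time."""
    if not training or p == 0.0:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

h = rng.standard_normal((4, 8))          # a batch of 4 hidden-activation vectors
print(dropout(h, p=0.5, training=True))  # roughly half of the units are zeroed
print(dropout(h, training=False))        # identity at evaluation time
```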
- A Survey on Large-Population Systems and Scalable Multi-Agent Reinforcement Learning [18.918558716102144]
We will shed light on current approaches to tractably understanding and analyzing large-population systems.
We will survey potential areas of application for large-scale control and identify fruitful future applications of learning algorithms in practical systems.
arXiv Detail & Related papers (2022-09-08T14:58:50Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning in mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
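As context for the mean-field control (MFC) setting named above, the toy sketch below shows the core approximation: each agent responds to the empirical action distribution of the population rather than to every individual agent. The congestion-style payoff and the damped update rule are illustrative assumptions, not the paper's model or algorithm.

```python
import numpy as np

N_ACTIONS = 3

def payoff(action, mean_field):
    """Reward for picking `action` when the population plays `mean_field`:
    a fixed preference minus a congestion penalty for crowded actions."""
    preference = np.array([1.0, 0.8, 0.6])      # assumed, for illustration only
    return preference[action] - 2.0 * mean_field[action]

mean_field = np.full(N_ACTIONS, 1.0 / N_ACTIONS)    # start from a uniform mix
for _ in range(50):
    best = np.argmax([payoff(a, mean_field) for a in range(N_ACTIONS)])
    br = np.eye(N_ACTIONS)[best]                    # population best response
    mean_field = 0.9 * mean_field + 0.1 * br        # damped update (assumed)

print("approximate mean-field equilibrium:", np.round(mean_field, 3))
```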
- Modelling Cooperation in Network Games with Spatio-Temporal Complexity [11.665246332943058]
We study the emergence of self-organized cooperation in complex gridworld domains.
Using multi-agent deep reinforcement learning, we simulate an agent society for a variety of plausible mechanisms.
Our methods have implications for mechanism design in both human and artificial agent systems.
arXiv Detail & Related papers (2021-02-13T12:04:52Z)
- Automated Search for Resource-Efficient Branched Multi-Task Networks [81.48051635183916]
We propose a principled approach, rooted in differentiable neural architecture search, to automatically define branching structures in a multi-task neural network.
We show that our approach consistently finds high-performing branching structures within limited resource budgets.
arXiv Detail & Related papers (2020-08-24T09:49:19Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
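To make the idea of learnable edge parameters concrete, here is a small PyTorch-style sketch (an assumption-laden illustration, not the paper's architecture): every directed edge of a complete graph gets a learnable logit, a sigmoid turns it into a connection strength, and gradients flow to the topology parameters through an ordinary backward pass. The shared linear node update and the sizes are invented for the example.

```python
import torch
import torch.nn as nn

class LearnableTopology(nn.Module):
    """Toy module: node features propagate over a complete graph whose edge
    strengths are learnable, so connectivity itself is trained by gradient descent."""
    def __init__(self, num_nodes: int = 4, dim: int = 8):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.transform = nn.Linear(dim, dim)      # shared node update (assumed)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:  # (num_nodes, dim)
        gates = torch.sigmoid(self.edge_logits)                   # strengths in (0, 1)
        gates = gates * (1.0 - torch.eye(gates.size(0)))          # drop self-loops
        messages = gates @ node_feats                             # weighted aggregation
        return torch.relu(self.transform(messages))

model = LearnableTopology()
out = model(torch.randn(4, 8))
out.sum().backward()                      # gradients reach the edge parameters
print(model.edge_logits.grad.shape)       # torch.Size([4, 4])
```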
- Multivariate Relations Aggregation Learning in Social Networks [39.576490107740135]
In graph learning tasks on social networks, identifying and utilizing multivariate relationship information is especially important.
Existing graph learning methods are based on the neighborhood information diffusion mechanism.
This paper proposes the multivariate relationship aggregation learning (MORE) method, which can effectively capture the multivariate relationship information in the network environment.
arXiv Detail & Related papers (2020-08-09T04:58:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.