Graph Convolutional Reinforcement Learning for Collaborative Queuing Agents
- URL: http://arxiv.org/abs/2205.12009v1
- Date: Tue, 24 May 2022 11:53:20 GMT
- Title: Graph Convolutional Reinforcement Learning for Collaborative Queuing Agents
- Authors: Hassan Fawaz, Julien Lesca, Pham Tran Anh Quang, Jérémie Leguay, Djamal Zeghlache, and Paolo Medagliani
- Abstract summary: We propose a novel graph-convolution-based multi-agent reinforcement learning approach known as DGN.
We show that our DGN-based approach meets stringent throughput and delay requirements across all scenarios.
- Score: 6.3120870639037285
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we explore the use of multi-agent deep learning, along
with learning-to-cooperate principles, to meet stringent service level agreements, in
terms of throughput and end-to-end delay, for a set of classified network
flows. We consider agents built on top of a weighted fair queuing algorithm
that continuously set weights for three flow groups: gold, silver, and bronze.
We rely on a novel graph-convolution-based multi-agent reinforcement learning
approach known as DGN. As benchmarks, we propose centralized and distributed
deep Q-network approaches and evaluate their performances in different network,
traffic, and routing scenarios, highlighting the effectiveness of our proposals
and the importance of agent cooperation. We show that our DGN-based approach
meets stringent throughput and delay requirements across all scenarios.
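The abstract describes agents that sit on top of a weighted fair queuing (WFQ) scheduler and continuously set the weights of three flow groups: gold, silver, and bronze. A minimal sketch of that control loop, with purely illustrative delay targets and a hand-written update rule (the paper's agents are learned DGN policies, not this heuristic):

```python
# Illustrative sketch only: an agent nudges WFQ weights toward per-class
# delay targets. All names, targets, and the update rule are assumptions,
# not the paper's DGN implementation.

DELAY_TARGET_MS = {"gold": 10.0, "silver": 25.0, "bronze": 100.0}

def wfq_weights_step(weights, observed_delay_ms, lr=0.05):
    """One adjustment step: raise the weight of classes missing their
    delay target, lower it for classes comfortably within target,
    then renormalize so the weights sum to 1."""
    new = {}
    for cls, w in weights.items():
        violation = observed_delay_ms[cls] / DELAY_TARGET_MS[cls] - 1.0
        new[cls] = max(1e-3, w * (1.0 + lr * violation))
    total = sum(new.values())
    return {cls: w / total for cls, w in new.items()}

weights = {"gold": 0.5, "silver": 0.3, "bronze": 0.2}
weights = wfq_weights_step(
    weights, {"gold": 14.0, "silver": 20.0, "bronze": 90.0})
```

Here gold misses its 10 ms target, so its share grows at the expense of the classes that are within budget.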
Related papers
- Scalable spectral representations for network multiagent control [53.631272539560435]
Network Markov Decision Processes (MDPs), a popular model for multi-agent control, pose a significant challenge to efficient learning.
We first derive scalable spectral local representations for network MDPs, which induce a network linear subspace for the local $Q$-function of each agent.
We design a scalable algorithmic framework for continuous state-action network MDPs, and provide end-to-end guarantees for the convergence of our algorithm.
arXiv Detail & Related papers (2024-10-22T17:45:45Z)
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z)
- On the dynamics of multi agent nonlinear filtering and learning [2.206852421529135]
Multiagent systems aim to accomplish highly complex learning tasks through decentralised consensus seeking dynamics.
This article examines the behaviour of multiagent networked systems with nonlinear filtering/learning dynamics.
arXiv Detail & Related papers (2023-09-07T08:39:53Z)
- Collaborative Information Dissemination with Graph-based Multi-Agent Reinforcement Learning [2.9904113489777826]
This paper introduces a Multi-Agent Reinforcement Learning (MARL) approach for efficient information dissemination.
We propose a Partially Observable Stochastic Game (POSG) formulation for information dissemination, empowering each agent to decide on message forwarding independently.
Our experimental results show that our trained policies outperform existing methods.
arXiv Detail & Related papers (2023-08-25T21:30:16Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
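The MARLIN summary mentions maximizing both entropy and return, which is the defining trait of Soft Actor-Critic. A toy illustration of that entropy-regularized objective on a discrete policy (MARLIN itself is not reproduced; names and the temperature value are illustrative):

```python
import math

# Soft Actor-Critic optimizes J = E[return] + alpha * H(pi): at equal
# expected return, a more stochastic (exploratory) policy scores higher.
# Discrete toy example; alpha and the policies are assumptions.

def entropy(probs):
    """Shannon entropy of a discrete action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def soft_objective(expected_return, policy_probs, alpha=0.2):
    """Entropy-regularized objective: return plus alpha-weighted entropy."""
    return expected_return + alpha * entropy(policy_probs)

peaked  = soft_objective(1.0, [0.97, 0.01, 0.01, 0.01])
uniform = soft_objective(1.0, [0.25, 0.25, 0.25, 0.25])
```

With equal return, the uniform policy wins on the soft objective, which is what keeps SAC exploring.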
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis [14.656957226255628]
We introduce a model-agnostic method for discovery of behavior clusters in multiagent domains.
Our framework makes no assumption about agents' underlying learning algorithms, does not require access to their latent states or models, and can be trained using entirely offline observational data.
arXiv Detail & Related papers (2022-06-17T23:07:33Z)
- Graph Convolutional Value Decomposition in Multi-Agent Reinforcement Learning [9.774412108791218]
We propose a novel framework for value function factorization in deep reinforcement learning.
In particular, we consider the team of agents as the set of nodes of a complete directed graph.
We introduce a mixing GNN module, which is responsible for i) factorizing the team state-action value function into individual per-agent observation-action value functions, and ii) explicit credit assignment to each agent in terms of fractions of the global team reward.
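The two roles of the mixing module described above can be sketched with the simplest possible mixer, additive (VDN-style) factorization, standing in for the paper's learned GNN; all names here are illustrative:

```python
# Sketch of value factorization and credit assignment. Additive mixing
# is a simple stand-in for the paper's mixing GNN, used only to show
# the two operations: team value from per-agent values, and splitting
# the team reward back into per-agent fractions.

def mix_team_value(per_agent_q):
    """(i) Factorization direction, inverted for illustration: the team
    state-action value is assembled from per-agent observation-action
    values; here, by a plain sum."""
    return sum(per_agent_q)

def credit_fractions(per_agent_q):
    """(ii) Explicit credit assignment: each agent's fraction of the
    global team reward, proportional to its contribution."""
    team = mix_team_value(per_agent_q)
    return [q / team for q in per_agent_q]

fractions = credit_fractions([2.0, 1.0, 1.0])  # agent 0 contributed half
```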
arXiv Detail & Related papers (2020-10-09T18:01:01Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML, or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
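The continuous-to-discrete idea above can be illustrated with the most basic hashing scheme, sign-based binarization: candidates are then compared by cheap Hamming distance instead of costly inner products in continuous embedding space. This is a generic sketch, not the paper's learned codes:

```python
# Illustrative only: binarize a continuous embedding into a discrete
# hash code and rank candidates by Hamming distance. The embeddings
# and the sign-based coding are assumptions for demonstration.

def to_hash_code(embedding):
    """Discrete code: 1 where the coordinate is non-negative, else 0."""
    return [1 if x >= 0 else 0 for x in embedding]

def hamming(code_a, code_b):
    """Number of differing bits between two codes."""
    return sum(a != b for a, b in zip(code_a, code_b))

user  = to_hash_code([0.8, -0.1, 0.3, -0.7])
item1 = to_hash_code([0.5, -0.2, 0.1, -0.9])   # same sign pattern as user
item2 = to_hash_code([-0.5, 0.2, -0.1, 0.9])   # opposite sign pattern
```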
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
- Coagent Networks Revisited [10.45819881530349]
Coagent networks formalize the concept of arbitrary networks of agents that collaborate to take actions in a reinforcement learning environment.
We first provide a unifying perspective on the many diverse examples that fall under coagent networks.
We do so by formalizing the rules of execution in a coagent network, enabled by the novel and intuitive idea of execution paths.
arXiv Detail & Related papers (2020-01-28T17:31:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.