DeepCQ+: Robust and Scalable Routing with Multi-Agent Deep Reinforcement
Learning for Highly Dynamic Networks
- URL: http://arxiv.org/abs/2111.15013v1
- Date: Mon, 29 Nov 2021 23:05:49 GMT
- Title: DeepCQ+: Robust and Scalable Routing with Multi-Agent Deep Reinforcement
Learning for Highly Dynamic Networks
- Authors: Saeed Kaviani, Bo Ryu, Ejaz Ahmed, Kevin Larson, Anh Le, Alex Yahja,
and Jae H. Kim
- Abstract summary: DeepCQ+ routing protocol integrates emerging multi-agent deep reinforcement learning (MADRL) techniques into existing Q-learning-based routing protocols.
Extensive simulation shows that DeepCQ+ yields significantly increased end-to-end throughput with lower overhead.
DeepCQ+ maintains remarkably similar performance gains under many scenarios that it was not trained for.
- Score: 2.819857535390181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Highly dynamic mobile ad-hoc networks (MANETs) remain one of the
most challenging environments in which to develop and deploy robust, efficient,
and scalable routing protocols. In this paper, we present the DeepCQ+ routing
protocol, which integrates emerging multi-agent deep reinforcement learning
(MADRL) techniques into existing Q-learning-based routing protocols and their
variants in a novel manner, and achieves persistently higher performance across
a wide range of topology and mobility configurations. While keeping the overall protocol
structure of the Q-learning-based routing protocols, DeepCQ+ replaces
statically configured parameterized thresholds and hand-written rules with
carefully designed MADRL agents such that no configuration of such parameters
is required a priori. Extensive simulation shows that DeepCQ+ yields
significantly increased end-to-end throughput with lower overhead and no
apparent degradation of end-to-end delays (hop counts) compared to its
Q-learning-based counterparts. Qualitatively, and perhaps more significantly,
DeepCQ+ maintains remarkably similar performance gains under many scenarios
that it was not trained for in terms of network sizes, mobility conditions, and
traffic dynamics. To the best of our knowledge, this is the first successful
application of the MADRL framework for the MANET routing problem that
demonstrates a high degree of scalability and robustness even under
environments that are outside the trained range of scenarios. This implies that
our MADRL-based DeepCQ+ design significantly improves the performance of the
Q-learning-based CQ+ baseline and increases its practicality and
explainability, because the real-world MANET environment will likely vary
outside the trained range of MANET scenarios. Additional techniques
to further increase the gains in performance and scalability are discussed.
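To make the design point above concrete, here is a minimal, hypothetical sketch (not the authors' implementation; the abstract does not disclose DeepCQ+ internals). In Q-learning-based routing of the CQ/CQ+ family, each node maintains per-destination, per-neighbor value estimates and gates forwarding with statically configured thresholds; DeepCQ+ is described as replacing those thresholds and hand-written rules with trained MADRL agents. All names below (QRoutingTable, forward_static, forward_learned) and the threshold value are illustrative assumptions.

```python
# Hypothetical sketch: a Q-routing-style table plus two forwarding rules.
# None of these names come from the DeepCQ+ paper; they only illustrate the
# idea of swapping a hand-tuned threshold for a learned policy.

class QRoutingTable:
    """Per-node estimates Q[(dest, neighbor)]: cost to reach dest via neighbor."""

    def __init__(self, alpha=0.5):
        self.q = {}          # (dest, neighbor) -> estimated delivery cost
        self.alpha = alpha   # learning rate

    def update(self, dest, neighbor, link_cost, neighbor_best_cost):
        # Classic Q-routing-style update:
        # Q(d, y) <- Q(d, y) + alpha * (c(x, y) + min_z Q_y(d, z) - Q(d, y))
        key = (dest, neighbor)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (link_cost + neighbor_best_cost - old)


def forward_static(table, dest, neighbors, threshold=10.0):
    """Baseline-style rule: forward to the best neighbor only if its Q-value
    beats a statically configured threshold (a hand-tuned parameter)."""
    best = min(neighbors, key=lambda y: table.q.get((dest, y), float("inf")))
    return best if table.q.get((dest, best), float("inf")) < threshold else None


def forward_learned(policy, observation, neighbors):
    """DeepCQ+-style idea per the abstract: a trained MADRL policy maps local
    observations directly to a forwarding choice, so no threshold needs to be
    configured a priori. `policy` is assumed to return per-neighbor scores."""
    scores = policy(observation)
    return max(neighbors, key=lambda y: scores.get(y, float("-inf")))


# Example usage of the baseline rule (illustrative values only):
table = QRoutingTable()
table.update(dest="D", neighbor="B", link_cost=1.0, neighbor_best_cost=3.0)
next_hop = forward_static(table, "D", ["A", "B"])  # "B": Q = 2.0 < threshold
```

The only point of the sketch is where the change happens: the Q-value bookkeeping stays as in the baseline, while the forwarding decision becomes the output of a learned policy, removing the need to configure thresholds a priori.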
Related papers
- Differentiable Discrete Event Simulation for Queuing Network Control [7.965453961211742]
Queueing network control poses distinct challenges, including high stochasticity, large state and action spaces, and lack of stability.
We propose a scalable framework for policy optimization based on differentiable discrete event simulation.
Our methods can flexibly handle realistic scenarios, including systems operating in non-stationary environments.
arXiv Detail & Related papers (2024-09-05T17:53:54Z)
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings can be easily changed by better training networks in benchmarks.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- A Deep Reinforcement Learning Approach for Adaptive Traffic Routing in Next-gen Networks [1.1586742546971471]
Next-gen networks require automation and the ability to adaptively adjust network configuration based on traffic dynamics.
Traditional techniques that decide traffic policies are usually based on hand-crafted programming optimization and algorithms.
We develop a deep reinforcement learning (DRL) approach for adaptive traffic routing.
arXiv Detail & Related papers (2024-02-07T01:48:29Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Multi Agent DeepRL based Joint Power and Subchannel Allocation in IAB networks [0.0]
Integrated Access and Backhauling (IAB) is a viable approach for meeting the unprecedented need for higher data rates of future generations.
In this paper, we show how we can use Deep Q-Learning Network to handle problems with huge action spaces associated with fractional nodes.
arXiv Detail & Related papers (2023-08-31T21:30:25Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
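For reference, the "maximize both entropy and return" phrasing above corresponds to the standard maximum-entropy RL objective that Soft Actor-Critic optimizes (stated here generically for context; the MARLIN-specific formulation is not given in this summary):

```latex
% Maximum-entropy RL objective used by Soft Actor-Critic:
% expected return plus an entropy bonus weighted by the temperature \alpha.
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
         \big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \big]
```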
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Residual Q-Networks for Value Function Factorizing in Multi-Agent Reinforcement Learning [0.0]
We propose a novel concept of Residual Q-Networks (RQNs) for Multi-Agent Reinforcement Learning (MARL).
The RQN learns to transform the individual Q-value trajectories in a way that preserves the Individual-Global-Max (IGM) criterion.
The proposed method converges faster, with increased stability and shows robust performance in a wider family of environments.
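For context, the Individual-Global-Max (IGM) criterion referenced above is the standard consistency condition in value-factorization MARL: greedy action selection on the per-agent utilities must agree with greedy selection on the joint action-value (stated here generically; the RQN-specific formulation is not given in this summary):

```latex
% IGM consistency between the joint action-value and the per-agent utilities.
\arg\max_{\mathbf{a}} Q_{\mathrm{tot}}(s, \mathbf{a})
  = \Big( \arg\max_{a_1} Q_1(s, a_1), \; \ldots, \; \arg\max_{a_n} Q_n(s, a_n) \Big)
```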
arXiv Detail & Related papers (2022-05-30T16:56:06Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- Robust and Scalable Routing with Multi-Agent Deep Reinforcement Learning for MANETs [1.8375389588718993]
DeepCQ+ routing integrates emerging multi-agent deep reinforcement learning techniques into existing Q-learning-based routing protocols.
It achieves persistently higher performance across a wide range of MANET configurations while training only on a limited range of network parameters and conditions.
arXiv Detail & Related papers (2021-01-09T02:26:14Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.