Robust and Scalable Routing with Multi-Agent Deep Reinforcement Learning
for MANETs
- URL: http://arxiv.org/abs/2101.03273v2
- Date: Mon, 29 Mar 2021 02:53:58 GMT
- Title: Robust and Scalable Routing with Multi-Agent Deep Reinforcement Learning
for MANETs
- Authors: Saeed Kaviani, Bo Ryu, Ejaz Ahmed, Kevin A. Larson, Anh Le, Alex
Yahja, Jae H. Kim
- Abstract summary: DeepCQ+ routing integrates emerging multi-agent deep reinforcement learning techniques into existing Q-learning-based routing protocols.
It achieves persistently higher performance across a wide range of MANET configurations while training only on a limited range of network parameters and conditions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Highly dynamic mobile ad-hoc networks (MANETs) are continuing to serve as one
of the most challenging environments to develop and deploy robust, efficient,
and scalable routing protocols. In this paper, we present DeepCQ+ routing
which, in a novel manner, integrates emerging multi-agent deep reinforcement
learning (MADRL) techniques into existing Q-learning-based routing protocols
and their variants, and achieves persistently higher performance across a wide
range of MANET configurations while training only on a limited range of network
parameters and conditions. Quantitatively, DeepCQ+ shows consistently higher
end-to-end throughput with lower overhead than its Q-learning-based
counterparts, with an overall efficiency gain of 10-15%. Qualitatively
and more significantly, DeepCQ+ maintains remarkably similar performance gains
under many scenarios it was not trained for, in terms of network sizes,
mobility conditions, and traffic dynamics. To the best of our knowledge, this
is the first successful demonstration of MADRL for the MANET routing problem
that achieves and maintains a high degree of scalability and robustness even in
environments outside the trained range of scenarios. This implies that the
proposed hybrid design of DeepCQ+, combining MADRL and Q-learning,
significantly increases its practicality and explainability, because real-world
MANET environments will likely vary outside the trained range of MANET
scenarios.
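The abstract describes DeepCQ+ as building on Q-learning-based routing protocols. A minimal sketch of the classic Q-routing update those protocols share may help: each node keeps per-destination delivery-time estimates for each neighbor and refines them with a temporal-difference rule. All class and parameter names below are illustrative, not taken from the paper, and this omits the confidence terms and MADRL components that DeepCQ+ adds.

```python
from collections import defaultdict


class QRouter:
    """Per-node table q[dest][neighbor]: estimated delivery time to dest via neighbor.

    Illustrative Q-routing sketch (Boyan-Littman style), not the DeepCQ+ protocol.
    """

    def __init__(self, alpha=0.5):
        self.alpha = alpha  # learning rate for the temporal-difference update
        self.q = defaultdict(lambda: defaultdict(float))

    def best_neighbor(self, dest, neighbors):
        # Forward toward the neighbor with the lowest estimated delivery time.
        return min(neighbors, key=lambda n: self.q[dest][n])

    def update(self, dest, neighbor, link_delay, neighbor_estimate):
        # Move the estimate toward the observed hop delay plus the
        # neighbor's own best remaining-time estimate to the destination.
        target = link_delay + neighbor_estimate
        self.q[dest][neighbor] += self.alpha * (target - self.q[dest][neighbor])
```

With `alpha=1.0` an update simply adopts the new target; smaller values average over the link-delay noise that dominates in highly dynamic MANETs.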
Related papers
- Differentiable Discrete Event Simulation for Queuing Network Control [7.965453961211742]
Queueing network control poses distinct challenges, including high stochasticity, large state and action spaces, and a lack of stability guarantees.
We propose a scalable framework for policy optimization based on differentiable discrete event simulation.
Our methods can flexibly handle realistic scenarios, including systems operating in non-stationary environments.
arXiv Detail & Related papers (2024-09-05T17:53:54Z)
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings can be easily changed by better training networks in benchmarks.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- Multi Agent DeepRL based Joint Power and Subchannel Allocation in IAB networks [0.0]
Integrated Access and Backhauling (IAB) is a viable approach for meeting the unprecedented need for higher data rates of future generations.
In this paper, we show how we can use Deep Q-Learning Network to handle problems with huge action spaces associated with fractional nodes.
arXiv Detail & Related papers (2023-08-31T21:30:25Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Residual Q-Networks for Value Function Factorizing in Multi-Agent Reinforcement Learning [0.0]
We propose a novel concept of Residual Q-Networks (RQNs) for Multi-Agent Reinforcement Learning (MARL).
The RQN learns to transform the individual Q-value trajectories in a way that preserves the Individual-Global-Max (IGM) criterion.
The proposed method converges faster, with increased stability, and shows robust performance across a wider family of environments.
arXiv Detail & Related papers (2022-05-30T16:56:06Z)
- Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces [56.35676570414731]
The emerging technology of Reconfigurable Intelligent Surfaces (RISs) is provisioned as an enabler of smart wireless environments.
RISs offer a highly scalable, low-cost, hardware-efficient, and almost energy-neutral solution for dynamic control of the propagation of electromagnetic signals over the wireless medium.
One of the major challenges with the envisioned dense deployment of RISs in such reconfigurable radio environments is the efficient configuration of multiple metasurfaces.
arXiv Detail & Related papers (2022-05-08T06:21:33Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Hierarchical Multi-Agent DRL-Based Framework for Joint Multi-RAT Assignment and Dynamic Resource Allocation in Next-Generation HetNets [21.637440368520487]
This paper considers the problem of cost-aware downlink sum-rate maximization via joint optimal radio access technologies (RATs) assignment and power allocation in next-generation heterogeneous wireless networks (HetNets).
We propose a hierarchical multi-agent deep reinforcement learning (DRL) framework, called DeepRAT, to solve it efficiently and learn system dynamics.
In particular, the DeepRAT framework decomposes the problem into two main stages: the RATs-EDs assignment stage, which implements a single-agent Deep Q Network algorithm, and the power allocation stage, which utilizes a multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm.
arXiv Detail & Related papers (2022-02-28T09:49:44Z)
- Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence [76.96698721128406]
Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth-generation (5G) networks and beyond.
This paper provides a comprehensive research review on RL-enabled MEC and offers insight for development.
arXiv Detail & Related papers (2022-01-27T10:02:54Z)
- DeepCQ+: Robust and Scalable Routing with Multi-Agent Deep Reinforcement Learning for Highly Dynamic Networks [2.819857535390181]
DeepCQ+ routing protocol integrates emerging multi-agent deep reinforcement learning (MADRL) techniques into existing Q-learning-based routing protocols.
Extensive simulation shows that DeepCQ+ yields significantly increased end-to-end throughput with lower overhead.
DeepCQ+ maintains remarkably similar performance gains under many scenarios that it was not trained for.
arXiv Detail & Related papers (2021-11-29T23:05:49Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.