Relational Deep Reinforcement Learning for Routing in Wireless Networks
- URL: http://arxiv.org/abs/2012.15700v1
- Date: Thu, 31 Dec 2020 16:28:21 GMT
- Title: Relational Deep Reinforcement Learning for Routing in Wireless Networks
- Authors: Victoria Manfredi, Alicia Wolfe, Bing Wang, Xiaolan Zhang
- Abstract summary: We develop a distributed routing strategy based on deep reinforcement learning that generalizes to diverse traffic patterns, congestion levels, network connectivity, and link dynamics.
Our algorithm outperforms shortest path and backpressure routing with respect to packets delivered and delay per packet.
- Score: 2.997420836766863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While routing in wireless networks has been studied extensively, existing
protocols are typically designed for a specific set of network conditions and
so cannot accommodate any drastic changes in those conditions. For instance,
protocols designed for connected networks cannot be easily applied to
disconnected networks. In this paper, we develop a distributed routing strategy
based on deep reinforcement learning that generalizes to diverse traffic
patterns, congestion levels, network connectivity, and link dynamics. We make
the following key innovations in our design: (i) the use of relational features
as inputs to the deep neural network approximating the decision space, which
enables our algorithm to generalize to diverse network conditions, (ii) the use
of packet-centric decisions to transform the routing problem into an episodic
task by viewing packets, rather than wireless devices, as reinforcement
learning agents, which provides a natural way to propagate and model rewards
accurately during learning, and (iii) the use of extended-time actions to model
the time spent by a packet waiting in a queue, which reduces the amount of
training data needed and allows the learning algorithm to converge more
quickly. We evaluate our routing algorithm using a packet-level simulator and
show that the policy our algorithm learns during training is able to generalize
to larger and more congested networks, different topologies, and diverse link
dynamics. Our algorithm outperforms shortest path and backpressure routing with
respect to packets delivered and delay per packet.
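To make the three design choices above concrete, the following minimal sketch (not the authors' implementation) illustrates the idea: the packet, rather than the wireless device, acts as the learning agent; each candidate next hop is described by relational features (e.g., normalized queue length, estimated hops remaining, link quality) instead of node identities; and the TD update discounts future value by GAMMA ** elapsed, where elapsed is the queueing plus transmission time of the extended-time action. A linear approximator stands in for the paper's deep network, and all names and feature choices here are illustrative assumptions.

```python
import numpy as np

GAMMA = 0.99   # per-time-unit discount factor
ALPHA = 0.01   # learning rate

class PacketAgent:
    """Sketch of packet-centric Q-routing with relational features.

    The packet (not the node) is the RL agent, so each packet's trip from
    source to destination is one episode.  A linear function stands in for
    the paper's deep network.  Feature vectors are "relational": they
    describe a candidate next hop relative to the packet (e.g., normalized
    queue length, estimated hops remaining to the destination, link quality)
    rather than by node identity, which is what lets a learned policy
    transfer to unseen topologies.
    """

    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def q(self, phi):
        return float(self.w @ phi)

    def choose(self, neighbor_feats, epsilon=0.1):
        """neighbor_feats maps next-hop id -> relational feature vector."""
        if np.random.rand() < epsilon:
            hops = list(neighbor_feats)
            return hops[np.random.randint(len(hops))]
        return max(neighbor_feats, key=lambda nb: self.q(neighbor_feats[nb]))

    def update(self, phi, reward, elapsed, next_neighbor_feats, done):
        """Extended-time (SMDP-style) TD update.

        `elapsed` is the time the packet spent waiting in the queue plus the
        transmission time, so a single update covers the whole forwarding
        action and the future value is discounted by GAMMA ** elapsed.
        """
        if done or not next_neighbor_feats:
            target = reward
        else:
            target = reward + GAMMA ** elapsed * max(
                self.q(p) for p in next_neighbor_feats.values())
        self.w += ALPHA * (target - self.q(phi)) * phi


# Toy usage: one forwarding decision and its delayed update (values are made up).
agent = PacketAgent(n_features=3)
feats = {  # hypothetical features: [queue fill, hops-to-destination, link quality]
    "nbr_a": np.array([0.2, 0.4, 0.9]),
    "nbr_b": np.array([0.8, 0.3, 0.6]),
}
hop = agent.choose(feats)
# ... the packet waits 3.5 time units in the next hop's queue, then moves on ...
agent.update(feats[hop], reward=-1.0, elapsed=3.5,
             next_neighbor_feats={"nbr_c": np.array([0.1, 0.2, 0.8])}, done=False)
```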
Related papers
- Learning Sub-Second Routing Optimization in Computer Networks requires Packet-Level Dynamics [15.018408728324887]
Reinforcement Learning can help learn network representations that inform routing decisions.
We present PackeRL, the first packet-level Reinforcement Learning environment for routing in generic network topologies.
We also introduce two new algorithms for learning sub-second routing optimization.
arXiv Detail & Related papers (2024-10-14T11:03:46Z)
- Learning State-Augmented Policies for Information Routing in Communication Networks [92.59624401684083]
We develop a novel State Augmentation (SA) strategy to maximize the aggregate information at source nodes using graph neural network (GNN) architectures.
We leverage an unsupervised learning procedure to convert the output of the GNN architecture to optimal information routing strategies.
In the experiments, we evaluate our algorithms on real-time network topologies.
arXiv Detail & Related papers (2023-09-30T04:34:25Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
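For reference, the entropy-regularized objective that Soft Actor-Critic maximizes is the discounted sum of reward plus policy entropy. The snippet below is a generic, self-contained illustration of that objective on a toy discrete action distribution; it is not MARLIN's congestion-control implementation, and the temperature alpha and example numbers are assumptions.

```python
import numpy as np

def soft_return(rewards, action_probs, alpha=0.2, gamma=0.99):
    """Soft Actor-Critic's maximum-entropy objective (generic illustration):
    sum_t gamma**t * (r_t + alpha * H(pi(.|s_t))), where H is the entropy of
    the policy's action distribution at the visited state."""
    total = 0.0
    for t, (r, probs) in enumerate(zip(rewards, action_probs)):
        p = np.asarray(probs, dtype=float)
        entropy = -np.sum(p * np.log(p + 1e-12))
        total += gamma ** t * (r + alpha * entropy)
    return total

# Toy check: for equal rewards, a higher-entropy policy scores a higher soft return.
rewards = [1.0, 1.0]
print(soft_return(rewards, [[0.5, 0.5], [0.5, 0.5]]))      # more exploratory policy
print(soft_return(rewards, [[0.99, 0.01], [0.99, 0.01]]))  # near-deterministic policy
```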
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Multi-agent Reinforcement Learning with Graph Q-Networks for Antenna Tuning [60.94661435297309]
The scale of mobile networks makes it challenging to optimize antenna parameters using manual intervention or hand-engineered strategies.
We propose a new multi-agent reinforcement learning algorithm to optimize mobile network configurations globally.
We empirically demonstrate the performance of the algorithm on an antenna tilt tuning problem and a joint tilt and power control problem in a simulated environment.
arXiv Detail & Related papers (2023-01-20T17:06:34Z)
- Robust Path Selection in Software-defined WANs using Deep Reinforcement Learning [18.586260468459386]
We propose a data-driven algorithm that performs path selection in the network while accounting for the overhead of route computation and path updates.
Our scheme performs about 40% better than traditional TE schemes such as ECMP at reducing link utilization.
arXiv Detail & Related papers (2022-12-21T16:08:47Z)
- Learning an Adaptive Forwarding Strategy for Mobile Wireless Networks: Resource Usage vs. Latency [2.608874253011]
We use deep reinforcement learning to learn a scalable and generalizable single-copy routing strategy for mobile networks.
Our results show that our learned single-copy routing strategy outperforms all other strategies except the optimal strategy in terms of delay.
arXiv Detail & Related papers (2022-07-23T01:17:23Z)
- MAMRL: Exploiting Multi-agent Meta Reinforcement Learning in WAN Traffic Engineering [4.051011665760136]
Traffic optimization challenges, such as load balancing, flow scheduling, and improving packet delivery time, are difficult online decision-making problems in wide area networks (WANs).
We develop and evaluate a model-free approach, applying multi-agent meta reinforcement learning (MAMRL), that determines the next hop of each packet so that it is delivered to its destination in minimum overall time.
arXiv Detail & Related papers (2021-11-30T03:01:01Z)
- Offline Contextual Bandits for Wireless Network Optimization [107.24086150482843]
In this paper, we investigate how to learn policies that can automatically adjust the configuration parameters of every cell in the network in response to changes in user demand.
Our solution combines existing methods for offline learning and adapts them in a principled way to overcome crucial challenges arising in this context.
arXiv Detail & Related papers (2021-11-11T11:31:20Z)
- Packet Routing with Graph Attention Multi-agent Reinforcement Learning [4.78921052969006]
We develop a model-free and data-driven routing strategy by leveraging reinforcement learning (RL).
Considering the graph nature of the network topology, we design a multi-agent RL framework in combination with a Graph Neural Network (GNN).
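As a rough illustration of how a graph-attention layer aggregates neighbor information in such a framework, the sketch below implements a generic GAT-style attention step in plain NumPy; it is not this paper's architecture, and all shapes, parameters, and the toy graph are assumptions.

```python
import numpy as np

def graph_attention_layer(h, adj, W, a, negative_slope=0.2):
    """Generic GAT-style aggregation (illustrative, not the paper's model).

    h:   (N, F)  node features            adj: (N, N) adjacency with self-loops
    W:   (F, F') linear projection        a:   (2*F',) attention parameter vector
    Returns (N, F') features where each node is an attention-weighted mix of
    its neighbors' projected features.
    """
    z = h @ W
    n = z.shape[0]
    # Attention logits e_ij = LeakyReLU(a . [z_i || z_j]) for every node pair.
    e = np.array([[a @ np.concatenate([z[i], z[j]]) for j in range(n)]
                  for i in range(n)])
    e = np.where(e > 0, e, negative_slope * e)
    e = np.where(adj > 0, e, -np.inf)          # keep only actual neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ z

# Toy usage on a 4-node line graph with random features and parameters.
rng = np.random.default_rng(0)
adj = np.eye(4) + np.eye(4, k=1) + np.eye(4, k=-1)
out = graph_attention_layer(rng.random((4, 3)), adj, rng.random((3, 3)), rng.random(6))
print(out.shape)  # (4, 3)
```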
arXiv Detail & Related papers (2021-07-28T06:20:34Z)
- Better than the Best: Gradient-based Improper Reinforcement Learning for Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy-gradient-based reinforcement learning algorithm that produces a scheduler outperforming the available atomic policies.
arXiv Detail & Related papers (2021-05-01T10:18:34Z)
- All at Once Network Quantization via Collaborative Knowledge Transfer [56.95849086170461]
We develop a novel collaborative knowledge transfer approach for efficiently training the all-at-once quantization network.
Specifically, we propose an adaptive selection strategy to choose a high-precision "teacher" for transferring knowledge to the low-precision student.
To effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network.
arXiv Detail & Related papers (2021-03-02T03:09:03Z)