CFR-RL: Traffic Engineering with Reinforcement Learning in SDN
- URL: http://arxiv.org/abs/2004.11986v1
- Date: Fri, 24 Apr 2020 20:46:54 GMT
- Title: CFR-RL: Traffic Engineering with Reinforcement Learning in SDN
- Authors: Junjie Zhang, Minghao Ye, Zehua Guo, Chen-Yu Yen, H. Jonathan Chao
- Abstract summary: We propose a Reinforcement Learning-based scheme that automatically learns a policy to select critical flows for each given traffic matrix.
CFR-RL achieves near-optimal performance by rerouting only 10%-21.3% of total traffic.
- Score: 5.718975715943091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional Traffic Engineering (TE) solutions can achieve optimal or
near-optimal performance by rerouting as many flows as possible. However, they
usually do not consider the negative impact, such as packet reordering, of
frequently rerouting flows in the network. To mitigate the impact of network
disturbance, one promising TE solution is forwarding the majority of traffic
flows using Equal-Cost Multi-Path (ECMP) and selectively rerouting a few
critical flows using Software-Defined Networking (SDN) to balance link
utilization of the network. However, critical flow rerouting is not trivial
because the solution space for critical flow selection is enormous. Moreover,
it is impossible to design a heuristic algorithm for this problem based on
fixed and simple rules, since rule-based heuristics are unable to adapt to the
changes of the traffic matrix and network dynamics. In this paper, we propose
CFR-RL (Critical Flow Rerouting-Reinforcement Learning), a Reinforcement
Learning-based scheme that learns a policy to select critical flows for each
given traffic matrix automatically. CFR-RL then reroutes these selected
critical flows to balance link utilization of the network by formulating and
solving a simple Linear Programming (LP) problem. Extensive evaluations show
that CFR-RL achieves near-optimal performance by rerouting only 10%-21.3% of
total traffic.
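As a rough illustration of the selection step described above, the sketch below scores candidate flows with a stand-in linear "policy" and keeps the top-k. In CFR-RL itself the scores come from a trained policy network, and the selected flows are then rerouted by solving an LP that minimizes the maximum link utilization; the weight vector and scoring features here are illustrative assumptions, not the paper's model.

```python
import math

def select_critical_flows(traffic_matrix, k, weights):
    """Score every flow (src, dst) with a toy linear 'policy' and keep the top-k.

    In CFR-RL the scores come from a trained policy network; here a fixed
    weight vector stands in for it (an illustrative assumption). The selected
    flows would then be handed to an LP solver that rebalances link utilization,
    while the remaining traffic stays on ECMP paths.
    """
    flows = [(s, d, demand)
             for s, row in enumerate(traffic_matrix)
             for d, demand in enumerate(row)
             if s != d and demand > 0]
    # Toy scoring: a weighted combination of the flow's demand and endpoints.
    scores = [weights[0] * demand + weights[1] * (s + d)
              for s, d, demand in flows]
    # Softmax turns scores into selection probabilities, mimicking a
    # stochastic policy over candidate flows.
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the k most probable flows as the "critical" set.
    ranked = sorted(zip(flows, probs), key=lambda fp: fp[1], reverse=True)
    return [flow for flow, _ in ranked[:k]]
```

With weights that score purely by demand, this simply picks the k largest flows; a learned policy would instead adapt the ranking to each traffic matrix.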
Related papers
- Intelligent Routing Algorithm over SDN: Reusable Reinforcement Learning Approach [1.799933345199395]
We develop RLSR-Routing, a reusable RL-based routing algorithm over SDN.
Our algorithm shows better load-balancing performance than traditional approaches.
It also has faster convergence than the non-reusable RL approach when finding paths for multiple traffic demands.
arXiv Detail & Related papers (2024-09-23T17:15:24Z)
- A Deep Reinforcement Learning Approach for Adaptive Traffic Routing in Next-gen Networks [1.1586742546971471]
Next-gen networks require automation and must adaptively adjust network configuration based on traffic dynamics.
Traditional techniques for deciding traffic policies usually rely on hand-crafted optimization and algorithms.
We develop a deep reinforcement learning (DRL) approach for adaptive traffic routing.
arXiv Detail & Related papers (2024-02-07T01:48:29Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
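The entropy-plus-return objective mentioned above can be sketched as follows. This is a generic illustration of the SAC-style entropy-regularized return, not MARLIN's actual implementation; the alpha and gamma values are illustrative assumptions.

```python
import math

def entropy_regularized_return(rewards, action_probs, alpha=0.2, gamma=0.99):
    """Discounted return augmented with a policy-entropy bonus, as in SAC.

    rewards[t] is the reward at step t; action_probs[t] is the policy's
    action distribution at step t. alpha trades off return against
    exploration: higher alpha rewards keeping the policy more random.
    """
    total = 0.0
    for t, (r, probs) in enumerate(zip(rewards, action_probs)):
        # Shannon entropy of the action distribution at this step.
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        total += (gamma ** t) * (r + alpha * entropy)
    return total
```

Maximizing this quantity instead of the plain return encourages policies that keep exploring, which is one reason SAC-based controllers can cope with shifting background traffic.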
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Robust Path Selection in Software-defined WANs using Deep Reinforcement Learning [18.586260468459386]
We propose a data-driven algorithm that does the path selection in the network considering the overhead of route computation and path updates.
Our scheme reduces link utilization by about 40% compared with traditional TE schemes such as ECMP.
arXiv Detail & Related papers (2022-12-21T16:08:47Z)
- Teal: Learning-Accelerated Optimization of WAN Traffic Engineering [68.7863363109948]
We present Teal, a learning-based TE algorithm that leverages the parallel processing power of GPUs to accelerate TE control.
To reduce the problem scale and make learning tractable, Teal employs a multi-agent reinforcement learning (RL) algorithm to independently allocate each traffic demand.
Compared with other TE acceleration schemes, Teal satisfies 6--32% more traffic demand and yields 197--625x speedups.
arXiv Detail & Related papers (2022-10-25T04:46:30Z)
- Lyapunov Function Consistent Adaptive Network Signal Control with Back Pressure and Reinforcement Learning [9.797994846439527]
This study introduces a unified framework based on Lyapunov control theory, defining specific Lyapunov functions.
Building on insights from Lyapunov theory, this study designs a reward function for the Reinforcement Learning (RL)-based network signal control.
The proposed algorithm is compared with several traditional and RL-based methods under pure passenger car flow and heterogeneous traffic flow including freight.
arXiv Detail & Related papers (2022-10-06T00:22:02Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained one is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
To date, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- Dynamic RAN Slicing for Service-Oriented Vehicular Networks via Constrained Learning [40.5603189901241]
We investigate a radio access network (RAN) slicing problem for Internet of vehicles (IoV) services with different quality of service (QoS) requirements.
A dynamic RAN slicing framework is presented to dynamically allocate radio spectrum and computing resource.
We show that the RAWS effectively reduces the system cost while satisfying requirements with a high probability, as compared with benchmarks.
arXiv Detail & Related papers (2020-12-03T15:08:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.