Reinforcement Learning for Datacenter Congestion Control
- URL: http://arxiv.org/abs/2102.09337v1
- Date: Thu, 18 Feb 2021 13:49:28 GMT
- Title: Reinforcement Learning for Datacenter Congestion Control
- Authors: Chen Tessler, Yuval Shpigelman, Gal Dalal, Amit Mandelbaum, Doron
Haritan Kazakov, Benjamin Fuhrer, Gal Chechik, Shie Mannor
- Abstract summary: Successful congestion control algorithms can dramatically improve latency and overall network throughput.
Until today, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
- Score: 50.225885814524304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We approach the task of network congestion control in datacenters using
Reinforcement Learning (RL). Successful congestion control algorithms can
dramatically improve latency and overall network throughput. Until today, no
such learning-based algorithms have shown practical potential in this domain.
Evidently, the most popular recent deployments rely on rule-based heuristics
that are tested on a predetermined set of benchmarks. Consequently, these
heuristics do not generalize well to newly-seen scenarios. Contrarily, we
devise an RL-based algorithm with the aim of generalizing to different
configurations of real-world datacenter networks. We overcome challenges such
as partial-observability, non-stationarity, and multi-objectiveness. We further
propose a policy gradient algorithm that leverages the analytical structure of
the reward function to approximate its derivative and improve stability. We
show that this scheme outperforms alternative popular RL approaches, and
generalizes to scenarios that were not seen during training. Our experiments,
conducted on a realistic simulator that emulates communication networks'
behavior, exhibit improved performance concurrently on the multiple considered
metrics compared to the popular algorithms deployed today in real datacenters.
Our algorithm is being productized to replace heuristics in some of the largest
datacenters in the world.
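The abstract's key algorithmic idea, a policy gradient that exploits the analytical structure of the reward to approximate its derivative, can be sketched roughly as follows. The reward function, its parameters (`capacity`, `beta`), and the linear policy are all illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical congestion-control reward: the agent picks a transmission
# rate `a`; reward trades throughput against a congestion penalty, so its
# derivative in the action is available in closed form.
def reward(a, capacity=1.0, beta=10.0):
    return a - beta * max(0.0, a - capacity) ** 2

def reward_grad(a, capacity=1.0, beta=10.0):
    # Closed-form d(reward)/d(action): no sampling-based estimate needed.
    return 1.0 - 2.0 * beta * max(0.0, a - capacity)

# Deterministic policy a = w * obs; chain the analytical reward gradient
# through the policy (a DPG-style update), avoiding the high variance of
# score-function (REINFORCE-style) gradient estimates.
w, lr, obs = 0.5, 0.05, 1.0
for _ in range(200):
    a = w * obs
    w += lr * reward_grad(a) * obs  # dJ/dw = dr/da * da/dw

print(round(w * obs, 2))  # learned rate settles just above capacity
```

Because the gradient comes from the reward's known analytical form rather than from sampled returns, the update is low-variance, which is one plausible reading of the stability claim in the abstract.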
Related papers
- DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models [21.85879890198875]
Decentralized Iterative Merging-And-Training (DIMAT) is a novel decentralized deep learning algorithm.
We show that DIMAT attains faster and higher initial gain in accuracy with independent and identically distributed (IID) and non-IID data, incurring lower communication overhead.
This DIMAT paradigm presents a new opportunity for future decentralized learning, enhancing its adaptability to real-world settings with sparse, lightweight communication and computation.
arXiv Detail & Related papers (2024-04-11T18:34:29Z)
- Safe and Accelerated Deep Reinforcement Learning-based O-RAN Slicing: A Hybrid Transfer Learning Approach [20.344810727033327]
We propose and design a hybrid TL-aided approach to provide safe and accelerated convergence in DRL-based O-RAN slicing.
The proposed hybrid approach shows improvements of at least 7.7% in the average initial reward value and 20.7% in the percentage of converged scenarios.
arXiv Detail & Related papers (2023-09-13T18:58:34Z)
- How Does Forecasting Affect the Convergence of DRL Techniques in O-RAN Slicing? [20.344810727033327]
We propose a novel forecasting-aided DRL approach and its respective O-RAN practical deployment workflow to enhance DRL convergence.
Our approach shows up to 22.8%, 86.3%, and 300% improvements in the average initial reward value, convergence rate, and number of converged scenarios respectively.
arXiv Detail & Related papers (2023-09-01T14:30:04Z)
- Semantic-aware Transmission Scheduling: a Monotonicity-driven Deep Reinforcement Learning Approach [39.681075180578986]
For cyber-physical systems in the 6G era, semantic communications are required to guarantee application-level performance.
In this paper, we first investigate the fundamental properties of the optimal semantic-aware scheduling policy.
We then develop advanced deep reinforcement learning (DRL) algorithms by leveraging the theoretical guidelines.
arXiv Detail & Related papers (2023-05-23T05:45:22Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks [16.12495409295754]
Next Generation (NextG) networks are expected to support demanding Tactile Internet applications such as augmented reality and connected autonomous vehicles.
Data-driven approaches can improve the ability of the network to adapt to the current operating conditions.
Deep RL (DRL) has been shown to achieve good performance even in complex environments.
arXiv Detail & Related papers (2021-12-07T03:13:20Z)
- Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL [82.93243616342275]
We introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE).
MABE is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary.
In experiments that require cross-domain generalization, we find that MABE outperforms prior methods.
arXiv Detail & Related papers (2021-06-16T20:48:49Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Adaptive Serverless Learning [114.36410688552579]
We propose a novel adaptive decentralized training approach, which can compute the learning rate from data dynamically.
Our theoretical results reveal that the proposed algorithm can achieve linear speedup with respect to the number of workers.
To reduce communication overhead, we further propose a communication-efficient adaptive decentralized training approach.
arXiv Detail & Related papers (2020-08-24T13:23:02Z)
- Critic Regularized Regression [70.8487887738354]
We propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR).
We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces.
arXiv Detail & Related papers (2020-06-26T17:50:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.