On the Robustness of Controlled Deep Reinforcement Learning for Slice
Placement
- URL: http://arxiv.org/abs/2108.02505v1
- Date: Thu, 5 Aug 2021 10:24:33 GMT
- Title: On the Robustness of Controlled Deep Reinforcement Learning for Slice
Placement
- Authors: Jose Jurandir Alves Esteves, Amina Boubendir, Fabrice Guillemin,
Pierre Sens
- Abstract summary: We compare two Deep Reinforcement Learning (DRL) algorithms: a pure DRL-based algorithm and a heuristically controlled DRL, i.e., a hybrid DRL-heuristic algorithm.
The evaluation results show that the proposed hybrid DRL-heuristic approach is more robust and reliable under unpredictable network load changes than pure DRL.
- Score: 0.8459686722437155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The impact of using Machine Learning in the management of
softwarized networks has been evaluated in multiple research works. Beyond
that, we propose to evaluate the robustness of online learning for optimal
network slice placement. A major assumption of this study is that slice
request arrivals are non-stationary. In this context, we simulate
unpredictable network load variations and compare two Deep Reinforcement
Learning (DRL) algorithms: a pure DRL-based algorithm and a heuristically
controlled DRL, i.e., a hybrid DRL-heuristic algorithm, to assess the impact
of these unpredictable changes of traffic load on the algorithms' performance.
We conduct extensive simulations of a large-scale operator infrastructure. The
evaluation results show that the proposed hybrid DRL-heuristic approach is
more robust and reliable under unpredictable network load changes than pure
DRL, as it reduces the performance degradation. These results follow up on a
series of recent studies we have performed showing that the proposed hybrid
DRL-heuristic approach is efficient and better adapted to real network
scenarios than pure DRL.
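To make the comparison concrete, below is a minimal, purely illustrative sketch of the two ingredients the abstract relies on: a non-stationary slice-request arrival process (a Poisson process with a time-varying rate) and a heuristic controller that overrides the DRL policy whenever its proposed placement is inadmissible. Everything here is an assumption for illustration: the names (arrival_rate, poisson, HeuristicControlledAgent, is_admissible) and the surge parameters are hypothetical, and the paper's actual simulator, agent, and heuristic are not reproduced.

```python
# Illustrative sketch only: hypothetical names, not the paper's implementation.
import math
import random


def arrival_rate(t, base=10.0, surge=25.0, surge_start=500, surge_end=800):
    """Piecewise-constant request rate modeling a sudden, unpredictable surge."""
    return surge if surge_start <= t < surge_end else base


def poisson(rng, lam):
    """Knuth's method: count uniform draws until their product falls below e^-lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1


def slice_request_counts(horizon=1000, seed=0):
    """Non-stationary arrivals: Poisson counts with a time-varying rate."""
    rng = random.Random(seed)
    return [poisson(rng, arrival_rate(t)) for t in range(horizon)]


class HeuristicControlledAgent:
    """Hybrid DRL-heuristic control: keep the DRL action only if a domain
    heuristic deems it admissible; otherwise fall back to the heuristic."""

    def __init__(self, drl_policy, heuristic_policy, is_admissible):
        self.drl_policy = drl_policy              # state -> action
        self.heuristic_policy = heuristic_policy  # state -> action
        self.is_admissible = is_admissible        # (state, action) -> bool

    def act(self, state):
        action = self.drl_policy(state)
        if self.is_admissible(state, action):
            return action
        return self.heuristic_policy(state)       # controlled fallback


if __name__ == "__main__":
    counts = slice_request_counts()
    print("mean arrivals before surge:", sum(counts[:500]) / 500)
    print("mean arrivals during surge:", sum(counts[500:800]) / 300)
```

The fallback is the mechanism behind the robustness claim: when a load surge pushes the DRL policy off-distribution, inadmissible placements are intercepted by the heuristic, which bounds the performance degradation.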
Related papers
- Broad Critic Deep Actor Reinforcement Learning for Continuous Control [5.440090782797941]
A novel hybrid architecture for actor-critic reinforcement learning (RL) algorithms is introduced.
The proposed architecture integrates the broad learning system (BLS) with deep neural networks (DNNs).
The effectiveness of the proposed algorithm is evaluated by applying it to two classic continuous control tasks.
arXiv Detail & Related papers (2024-11-24T12:24:46Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired.
arXiv Detail & Related papers (2023-05-29T15:00:09Z)
- Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks [16.12495409295754]
Next Generation (NextG) networks are expected to support demanding tactile internet applications such as augmented reality and connected autonomous vehicles.
Data-driven approaches can improve the ability of the network to adapt to the current operating conditions.
Deep RL (DRL) has been shown to achieve good performance even in complex environments.
arXiv Detail & Related papers (2021-12-07T03:13:20Z)
- DRL-based Slice Placement under Realistic Network Load Conditions [0.8459686722437155]
We propose a network slice placement optimization solution based on Deep Reinforcement Learning (DRL).
The solution is adapted to large-scale networks under non-stationary traffic conditions (namely, varying network load).
We demonstrate the applicability of the proposed solution and its higher and more stable performance over a non-controlled DRL-based solution.
arXiv Detail & Related papers (2021-09-27T07:58:45Z)
- DRL-based Slice Placement Under Non-Stationary Conditions [0.8459686722437155]
We consider online learning for optimal network slice placement under the assumption that slice requests arrive according to a non-stationary process.
We specifically propose two pure-DRL algorithms and two families of hybrid DRL-heuristic algorithms.
We show that the proposed hybrid DRL-heuristic algorithms require three orders of magnitude fewer learning episodes than pure DRL to achieve convergence.
arXiv Detail & Related papers (2021-08-05T10:05:12Z)
- Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL [82.93243616342275]
We introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE).
MABE is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary.
In experiments that require cross-domain generalization, we find that MABE outperforms prior methods.
arXiv Detail & Related papers (2021-06-16T20:48:49Z)
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- Instabilities of Offline RL with Pre-Trained Neural Representation [127.89397629569808]
In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated.
Recent theoretical advances have shown that such sample-efficient offline RL is indeed possible provided certain strong representational conditions hold.
This work studies these issues from an empirical perspective to gauge how stable offline RL methods are.
arXiv Detail & Related papers (2021-03-08T18:06:44Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
To date, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks [44.40722828581203]
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet of things (IoT) users.
A deep reinforcement learning (DRL) based solution is proposed, which includes the following components.
A preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy (a generic prioritized-replay sketch follows this list).
arXiv Detail & Related papers (2020-01-24T23:01:15Z)
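The last entry above mentions a preserved and prioritized experience replay (2p-ER); its exact mechanism is not given in the summary, so the sketch below is only a generic proportional prioritized replay buffer in the spirit of Schaul et al.'s prioritized experience replay, not the 2p-ER itself. The class name and all parameters are hypothetical.

```python
# Generic prioritized replay sketch; NOT the 2p-ER from the paper above.
import random


class PrioritizedReplayBuffer:
    """Ring buffer that samples transitions with probability proportional
    to priority**alpha, so high-TD-error transitions are replayed more often."""

    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.data = []        # stored transitions
        self.priorities = []  # one sampling weight per transition
        self.pos = 0          # next write index (wraps around)

    def add(self, transition, priority=1.0):
        weight = priority ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(weight)
        else:
            self.data[self.pos] = transition   # overwrite the oldest entry
            self.priorities[self.pos] = weight
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Weighted sampling with replacement; returns (index, transition)
        # pairs so the caller can feed TD errors back via update_priority.
        idxs = random.choices(range(len(self.data)),
                              weights=self.priorities, k=batch_size)
        return [(i, self.data[i]) for i in idxs]

    def update_priority(self, idx, td_error, eps=1e-3):
        # Larger TD error -> higher replay probability; eps keeps every
        # transition reachable.
        self.priorities[idx] = (abs(td_error) + eps) ** self.alpha
```

A typical round-trip: add transitions as they arrive, sample a batch, compute TD errors, then call update_priority for each sampled index.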
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.