Reinforcement Learning in Computing and Network Convergence Orchestration
- URL: http://arxiv.org/abs/2209.10753v1
- Date: Thu, 22 Sep 2022 03:10:45 GMT
- Title: Reinforcement Learning in Computing and Network Convergence Orchestration
- Authors: Aidong Yang, Mohan Wu, Boquan Cheng, Xiaozhou Ye, Ye Ouyang
- Abstract summary: The concept of Computing and Network Convergence (CNC) has been proposed and has attracted wide attention.
We design a CNC orchestration method using reinforcement learning (RL), the first such attempt, that can flexibly allocate and schedule computing and network resources.
Experiments show that the proposed RL-based method achieves higher profit and lower latency than the greedy, random-selection, and balanced-resource methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As computing power is becoming the core productivity of the digital economy
era, the concept of Computing and Network Convergence (CNC), under which
network and computing resources can be dynamically scheduled and allocated
according to users' needs, has been proposed and has attracted wide attention.
Based on the tasks' properties, the network orchestration plane needs to
flexibly deploy tasks to appropriate computing nodes and arrange paths to those
nodes. This is an orchestration problem that involves resource scheduling and
path arrangement. Since CNC is relatively new, in this paper we first review
existing research and applications on CNC. Then, we design a CNC orchestration
method using reinforcement learning (RL), the first such attempt, which can
flexibly allocate and schedule computing and network resources while aiming at
high profit and low latency. Meanwhile, we use multiple factors to determine
the optimization objective, so that the orchestration strategy is optimized in
terms of total performance across different aspects, such as cost, profit,
latency, and system overload in our experiments. The experiments show that the
proposed RL-based method achieves higher profit and lower latency than the
greedy, random-selection, and balanced-resource methods. We demonstrate that RL
is suitable for CNC orchestration, and this paper sheds light on applying RL to
CNC orchestration.
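To make the multi-factor objective concrete, here is a minimal sketch of how cost, profit, latency, and system overload could be folded into one reward signal for a node-selection action. The node fields, weights, and overload threshold are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A candidate computing node as seen by the orchestrator (hypothetical fields)."""
    cpu_free: float         # fraction of CPU still available, in [0, 1]
    path_latency_ms: float  # network latency along the arranged path
    price_per_task: float   # revenue from serving the task on this node
    cost_per_task: float    # operating cost of serving the task here

# Assumed weights; the paper combines cost, profit, latency, and overload,
# but these exact coefficients are not from the paper.
W_PROFIT, W_LATENCY, W_OVERLOAD = 1.0, 0.01, 5.0

def reward(node: Node) -> float:
    """Multi-factor reward: profit minus latency and overload penalties."""
    profit = node.price_per_task - node.cost_per_task
    utilization = 1.0 - node.cpu_free
    overload_penalty = max(0.0, utilization - 0.8)  # penalize nodes above 80% load
    return W_PROFIT * profit - W_LATENCY * node.path_latency_ms - W_OVERLOAD * overload_penalty

# The greedy baseline compared against in the paper would pick the node with
# the best immediate score; an RL agent instead learns a policy over whole
# sequences of task placements.
candidates = [Node(0.9, 12.0, 1.5, 0.6), Node(0.1, 4.0, 1.8, 0.7)]
print(max(candidates, key=reward))
```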
Related papers
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Generative AI-enabled Quantum Computing Networks and Intelligent Resource Allocation [80.78352800340032]
Quantum computing networks execute large-scale generative AI computation tasks and advanced quantum algorithms.
Efficient resource allocation in quantum computing networks is a critical challenge due to qubit variability and network complexity.
We introduce state-of-the-art reinforcement learning (RL) algorithms, from generative learning to quantum machine learning, for optimal quantum resource allocation.
arXiv Detail & Related papers (2024-01-13T17:16:38Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs), as sketched after this entry.
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
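The shared-backbone, multi-head design named in the MEMTL entry above can be sketched roughly as follows; the layer sizes and the averaging ensemble are assumptions for illustration, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    """A shared backbone feeding several prediction heads (PHs); the ensemble
    output is the average of the heads. Sizes are illustrative only."""
    def __init__(self, in_dim=16, hidden=64, out_dim=4, n_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(n_heads)])

    def forward(self, x):
        z = self.backbone(x)                             # shared features
        preds = torch.stack([h(z) for h in self.heads])  # (n_heads, batch, out_dim)
        return preds.mean(dim=0)                         # average the heads

model = MultiHeadEnsemble()
scores = model(torch.randn(8, 16))  # 8 offloading states -> 4 decision scores each
```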
- Intelligence-Endogenous Management Platform for Computing and Network Convergence [33.45559800534038]
We present the concept of an intelligence-endogenous management platform for CNCs, called the CNC brain, based on artificial intelligence technologies.
It aims at efficiently matching highly heterogeneous supply and demand in a CNC via four key building blocks: perception, scheduling, adaptation, and governance.
It is evaluated on a CNC testbed that integrates two open-source and popular frameworks and a real-world business dataset provided by Microsoft Azure.
arXiv Detail & Related papers (2023-08-07T10:12:15Z)
- Elastic Entangled Pair and Qubit Resource Management in Quantum Cloud Computing [73.7522199491117]
Quantum cloud computing (QCC) offers a promising approach to efficiently provide quantum computing resources.
Fluctuations in user demand and quantum circuit requirements make efficient resource provisioning challenging.
We propose a resource allocation model to provision quantum computing and networking resources.
arXiv Detail & Related papers (2023-07-25T00:38:46Z)
- A Heuristically Assisted Deep Reinforcement Learning Approach for Network Slice Placement [0.7885276250519428]
We introduce a hybrid placement solution based on Deep Reinforcement Learning (DRL) and a dedicated optimization based on the Power of Two Choices principle.
The proposed Heuristically-Assisted DRL (HA-DRL) accelerates the learning process and improves resource usage compared with other state-of-the-art approaches (the Power of Two Choices rule is sketched after this entry).
arXiv Detail & Related papers (2021-05-14T10:04:17Z)
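The Power of Two Choices principle that assists HA-DRL is simple enough to show on its own; the toy load model below is an assumption, not the paper's slice-placement environment:

```python
import random

def power_of_two_choices(loads: list[float]) -> int:
    """Sample two hosts uniformly at random and return the less loaded one:
    the classic Power of Two Choices placement rule."""
    i, j = random.sample(range(len(loads)), 2)
    return i if loads[i] <= loads[j] else j

# Toy demo: place 1000 unit-sized slices on 10 hosts.
loads = [0.0] * 10
for _ in range(1000):
    loads[power_of_two_choices(loads)] += 1.0
print(loads)  # noticeably more balanced than purely random placement
```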
- Smart Scheduling based on Deep Reinforcement Learning for Cellular Networks [18.04856086228028]
We propose a smart scheduling scheme based on deep reinforcement learning (DRL).
We provide implementation-friendly designs, i.e., a scalable neural network design for the agent and a virtual environment training framework (a toy version of the scalable design is sketched after this entry).
We show that the DRL-based smart scheduling outperforms the conventional scheduling method and can be adopted in practical systems.
arXiv Detail & Related papers (2021-03-22T02:09:16Z)
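One common way to keep a scheduling agent scalable in the number of users is to score every user with the same small network and serve the argmax. This is an assumed reading of "scalable neural network design", not the paper's exact agent:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single weight vector shared across all users keeps the agent's size
# independent of how many users are attached to the cell (assumed design).
W = rng.normal(size=4)

def schedule(user_features: np.ndarray) -> int:
    """user_features: (n_users, 4), e.g. channel quality, queue length,
    average throughput, head-of-line delay. Returns the user to serve."""
    return int(np.argmax(user_features @ W))

users = rng.random((12, 4))  # 12 users, 4 features each
print(schedule(users))       # index of the scheduled user
```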
- Reinforcement Learning on Computational Resource Allocation of Cloud-based Wireless Networks [22.06811314358283]
Wireless networks used for Internet of Things (IoT) are expected to largely involve cloud-based computing and processing.
In a cloud environment, dynamic computational resource allocation is essential to save energy while maintaining the performance of the processes.
This paper models the dynamic computational resource allocation problem as a Markov Decision Process (MDP) and designs a model-based reinforcement-learning agent to optimise the dynamic allocation of CPU usage (a bare-bones MDP of this kind is sketched after this entry).
The results show that our agent rapidly converges to the optimal policy, performs stably in different settings, and matches or outperforms a baseline algorithm in energy savings across different scenarios.
arXiv Detail & Related papers (2020-10-10T15:16:26Z)
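A bare-bones version of such an MDP can be written down directly; the states, actions, and the energy/performance trade-off below are illustrative assumptions, not the paper's model:

```python
# Hypothetical MDP for CPU allocation: state = offered load level,
# action = CPU setting, reward trades energy use against performance.
LOADS = range(3)        # low / medium / high offered load
CPU_LEVELS = range(4)   # discrete CPU frequency or core-count settings

def reward(load: int, cpu: int) -> float:
    energy_cost = 0.5 * cpu                  # more CPU -> more energy
    perf_penalty = 2.0 * max(0, load - cpu)  # under-provisioning hurts latency
    return -(energy_cost + perf_penalty)

# Enumerate the per-state optimal action; a model-based RL agent would
# instead learn the load-transition model online and plan against it.
policy = {load: max(CPU_LEVELS, key=lambda a: reward(load, a)) for load in LOADS}
print(policy)  # {0: 0, 1: 1, 2: 2}: provision just enough CPU for the load
```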
- When Deep Reinforcement Learning Meets Federated Learning: Intelligent Multi-Timescale Resource Management for Multi-access Edge Computing in 5G Ultra Dense Network [31.274279003934268]
We first propose an intelligent ultra-dense edge computing (I-UDEC) framework, which integrates blockchain and AI into 5G edge computing networks.
In order to achieve real-time and low-overhead computation offloading decisions and resource allocation strategies, we design a novel two-timescale deep reinforcement learning (2Ts-DRL) approach.
Our proposed algorithm can reduce task execution time by up to 31.87%.
arXiv Detail & Related papers (2020-09-22T15:08:00Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm (a minimal tabular Q-learning loop is sketched after this entry).
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
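For reference, the standard Q-learning baseline cited above maintains a state-action value table updated by temporal differences. The tiny environment below is a stand-in, not the paper's wireless model:

```python
import random

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration
N_STATES, N_ACTIONS = 5, 3
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state: int, action: int):
    """Stand-in environment; a real one would model spectrum and power choices."""
    reward = -abs(state % N_ACTIONS - action)  # toy: action should track the state
    return reward, random.randrange(N_STATES)

state = 0
for _ in range(10_000):
    if random.random() < EPS:                  # epsilon-greedy exploration
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    r, nxt = step(state, action)
    # standard Q-learning temporal-difference update
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

print([max(row) for row in Q])  # learned state values
```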
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neurobiologically plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.