Adaptive Services Function Chain Orchestration For Digital Health Twin Use Cases: Heuristic-boosted Q-Learning Approach
- URL: http://arxiv.org/abs/2304.12853v1
- Date: Tue, 25 Apr 2023 14:25:08 GMT
- Title: Adaptive Services Function Chain Orchestration For Digital Health Twin Use Cases: Heuristic-boosted Q-Learning Approach
- Authors: Jamila Alsayed Kassem, Li Zhong, Arie Taal, Paola Grosso
- Abstract summary: Digital Twin (DT) is a prominent technology to utilise and deploy within the healthcare sector.
Yet, the main challenges facing such applications are: strict health data-sharing policies, high-performance network requirements, and possible infrastructure resource limitations.
We define a Cloud-Native Network orchestrator on top of a multi-node cluster mesh infrastructure for flexible and dynamic container scheduling.
- Score: 2.3513645401551333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital Twin (DT) is a prominent technology to utilise and deploy within the
healthcare sector. Yet, the main challenges facing such applications are: strict
health data-sharing policies, high-performance network requirements, and
possible infrastructure resource limitations. In this paper, we address all these
challenges by provisioning adaptive Virtual Network Functions (VNFs) to enforce
security policies associated with different data-sharing scenarios. We define a
Cloud-Native Network orchestrator on top of a multi-node cluster mesh
infrastructure for flexible and dynamic container scheduling. The proposed
framework considers the intended data-sharing use case, the associated policies,
and the infrastructure configurations, then provisions Service Function
Chaining (SFC) and provides routing configurations accordingly with little to
no human intervention. Moreover, what is optimal when deploying an SFC depends
on the use case itself, and we tune the hyperparameters to prioritise
resource utilisation or latency in an effort to comply with the performance
requirements. As a result, we provide an adaptive network orchestration for
digital health twin use cases that is policy-aware, requirements-aware, and
resource-aware.
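The title and abstract describe a heuristic-boosted Q-learning scheme whose hyperparameters shift the optimisation between resource utilisation and latency. As a minimal illustrative sketch of that idea only (the paper's actual state, action, and reward design is not given here), the following Python snippet places the VNFs of a chain onto cluster nodes with tabular Q-learning, a heuristic filter on candidate nodes, and a single weight `W_UTIL` that blends utilisation and latency in the reward; all node names, metrics, and constants are invented for the example.

```python
import random
from collections import defaultdict

# Toy cluster model: per-node spare CPU (fraction) and path latency (ms).
# Node names, metrics, and all constants are invented for this sketch.
NODES = {
    "node-a": {"cpu_free": 0.6, "latency_ms": 4.0},
    "node-b": {"cpu_free": 0.3, "latency_ms": 1.5},
    "node-c": {"cpu_free": 0.8, "latency_ms": 9.0},
}
CHAIN = ["firewall", "encryptor", "gateway"]  # hypothetical VNFs in the chain

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate
W_UTIL = 0.5  # use-case hyperparameter: weight of utilisation vs. latency in the reward

Q = defaultdict(float)  # Q[(vnf_index, node)] -> estimated value


def heuristic_candidates():
    """Heuristic boost: restrict actions to nodes with enough spare CPU."""
    fit = [n for n, m in NODES.items() if m["cpu_free"] > 0.2]
    return fit or list(NODES)


def reward(node):
    """Blend resource head-room and (inverse) latency with the tunable weight."""
    m = NODES[node]
    return W_UTIL * m["cpu_free"] + (1.0 - W_UTIL) / (1.0 + m["latency_ms"])


def choose(vnf_idx):
    """Epsilon-greedy action selection over the heuristic shortlist."""
    candidates = heuristic_candidates()
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda n: Q[(vnf_idx, n)])


for _ in range(500):  # training episodes
    for i, _vnf in enumerate(CHAIN):
        node = choose(i)
        best_next = 0.0 if i + 1 == len(CHAIN) else max(Q[(i + 1, n)] for n in NODES)
        Q[(i, node)] += ALPHA * (reward(node) + GAMMA * best_next - Q[(i, node)])

placement = {vnf: max(NODES, key=lambda n: Q[(i, n)]) for i, vnf in enumerate(CHAIN)}
print(placement)  # highest-value node for each VNF in the chain
```

Raising `W_UTIL` towards 1 favours nodes with spare capacity, while lowering it towards 0 favours low-latency placements, mirroring the per-use-case hyperparameter tuning the abstract describes.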
Related papers
- Hard-Constrained Neural Networks with Universal Approximation Guarantees [5.3663546125491735]
HardNet is a framework for constructing neural networks that inherently satisfy hard constraints without sacrificing model capacity.
We show that HardNet retains the universal approximation capabilities of neural networks.
arXiv Detail & Related papers (2024-10-14T17:59:24Z)
- Joint Admission Control and Resource Allocation of Virtual Network Embedding via Hierarchical Deep Reinforcement Learning [69.00997996453842]
We propose a deep Reinforcement Learning approach to learn a joint Admission Control and Resource Allocation policy for virtual network embedding.
We show that HRL-ACRA outperforms state-of-the-art baselines in terms of both the acceptance ratio and long-term average revenue.
arXiv Detail & Related papers (2024-06-25T07:42:30Z)
- Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning [11.007816552466952]
This paper focuses on the problem of scheduling inference queries on Deep Neural Networks in edge networks at short timescales.
By means of simulations, we analyze several policies in the realistic network settings and workloads of a large ISP.
We design ASET, a Reinforcement Learning based scheduling algorithm able to adapt its decisions according to the system conditions.
arXiv Detail & Related papers (2023-01-31T13:23:34Z)
- On-Demand Resource Management for 6G Wireless Networks Using Knowledge-Assisted Dynamic Neural Networks [13.318287511072354]
We study the on-demand wireless resource orchestration problem with the focus on the computing delay in orchestration decision-making process.
A dynamic neural network (DyNN)-based method is proposed, where the model complexity can be adjusted according to the service requirements.
By exploiting the knowledge, the width of DyNN can be selected in a timely manner, further improving the performance of orchestration.
arXiv Detail & Related papers (2022-08-02T23:40:03Z)
- Learning Resilient Radio Resource Management Policies with Graph Neural Networks [124.89036526192268]
We formulate a resilient radio resource management problem with per-user minimum-capacity constraints.
We show that we can parameterize the user selection and power control policies using a finite set of parameters.
Thanks to such adaptation, our proposed method achieves a superior tradeoff between the average rate and the 5th percentile rate.
arXiv Detail & Related papers (2022-03-07T19:40:39Z)
- Offline Contextual Bandits for Wireless Network Optimization [107.24086150482843]
In this paper, we investigate how to learn policies that can automatically adjust the configuration parameters of every cell in the network in response to the changes in the user demand.
Our solution combines existent methods for offline learning and adapts them in a principled way to overcome crucial challenges arising in this context.
arXiv Detail & Related papers (2021-11-11T11:31:20Z)
- Deep Reinforcement Learning for Resource Constrained Multiclass Scheduling in Wireless Networks [0.0]
In our setup, the available limited bandwidth resources are allocated in order to serve randomly arriving service demands.
We propose a distributional Deep Deterministic Policy Gradient (DDPG) algorithm combined with Deep Sets to tackle the problem.
Our proposed algorithm is tested on both synthetic and real data, showing consistent gains against state-of-the-art conventional methods.
arXiv Detail & Related papers (2020-11-27T09:49:38Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
- Using Reinforcement Learning to Allocate and Manage Service Function Chains in Cellular Networks [0.456877715768796]
We propose the use of reinforcement learning to deploy a service function chain (SFC) of a cellular network service and to manage the virtual network functions (VNFs).
The main purpose is to reduce the number of lost packets taking into account the energy consumption of the servers.
Preliminary results show that the agent is able to allocate the SFC and manage the VNFs, reducing the number of lost packets.
arXiv Detail & Related papers (2020-06-12T17:38:23Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging the adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
- Deep Learning for Radio Resource Allocation with Diverse Quality-of-Service Requirements in 5G [53.23237216769839]
We develop a deep learning framework to approximate the optimal resource allocation policy for base stations.
We find that a fully-connected neural network (NN) cannot fully guarantee the requirements due to the approximation errors and quantization errors of the numbers of subcarriers.
Considering that the distribution of wireless channels and the types of services in the wireless networks are non-stationary, we apply deep transfer learning to update NNs in non-stationary wireless networks.
arXiv Detail & Related papers (2020-03-29T04:48:22Z)