Depth-Optimized Delay-Aware Tree (DO-DAT) for Virtual Network Function
Placement
- URL: http://arxiv.org/abs/2006.01790v1
- Date: Tue, 2 Jun 2020 17:18:20 GMT
- Title: Depth-Optimized Delay-Aware Tree (DO-DAT) for Virtual Network Function
Placement
- Authors: Dimitrios Michael Manias, Hassan Hawilo, Manar Jammal, Abdallah Shami
- Abstract summary: Network Function Virtualization (NFV) has been identified as a solution, but several challenges must be addressed to ensure its feasibility.
We present a machine learning-based solution to the Virtual Network Function (VNF) placement problem.
- Score: 3.5584529568201377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the constant increase in demand for data connectivity, network service
providers are faced with the task of reducing their capital and operational
expenses while ensuring continual improvements to network performance. Although
Network Function Virtualization (NFV) has been identified as a solution,
several challenges must be addressed to ensure its feasibility. In this paper,
we present a machine learning-based solution to the Virtual Network Function
(VNF) placement problem. This paper proposes the Depth-Optimized Delay-Aware
Tree (DO-DAT) model by using the particle swarm optimization technique to
optimize decision tree hyper-parameters. Using the Evolved Packet Core (EPC) as
a use case, we evaluate the performance of the model and compare it to a
previously proposed model and a heuristic placement strategy.
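A minimal sketch of the idea described in the abstract: a hand-rolled particle swarm optimization (PSO) loop searching decision-tree hyper-parameters (here max_depth and min_samples_split) to maximize cross-validated placement accuracy. The dataset, feature set, fitness definition, and search ranges below are illustrative assumptions, not the paper's actual delay-aware formulation; scikit-learn and NumPy are assumed to be available.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for a VNF placement training set: network feature vectors -> server labels.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

def fitness(position):
    """Cost of one particle: negative cross-validated accuracy of its decision tree."""
    max_depth = int(round(position[0]))
    min_samples_split = int(round(position[1]))
    clf = DecisionTreeClassifier(max_depth=max_depth,
                                 min_samples_split=min_samples_split,
                                 random_state=0)
    return -cross_val_score(clf, X, y, cv=3).mean()

# Search space for [max_depth, min_samples_split] (illustrative bounds).
low, high = np.array([2.0, 2.0]), np.array([30.0, 40.0])
n_particles, n_iters = 12, 20
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    cost = np.array([fitness(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

best_depth, best_split = np.round(gbest).astype(int)
print(f"best max_depth={best_depth}, min_samples_split={best_split}, "
      f"cv accuracy={-pbest_cost.min():.3f}")
```

In the paper's setting, the fitness would instead score how well the tree's placement decisions minimize delay between dependent VNF instances; the PSO-over-hyper-parameters structure is the same.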
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - Energy-efficient Task Adaptation for NLP Edge Inference Leveraging
Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z) - Proactive and AoI-aware Failure Recovery for Stateful NFV-enabled
Zero-Touch 6G Networks: Model-Free DRL Approach [0.0]
We propose a model-free deep reinforcement learning (DRL)-based proactive failure recovery framework called zero-touch PFR (ZT-PFR).
ZT-PFR targets embedded stateful virtual network functions (VNFs) in network function virtualization (NFV)-enabled networks.
arXiv Detail & Related papers (2021-02-02T21:40:35Z) - A Machine Learning-Based Migration Strategy for Virtual Network Function
Instances [3.7783523378336104]
We develop the VNF Neural Network for Instance Migration (VNNIM), a migration strategy for VNF instances.
VNNIM is highly effective in predicting the post-migration server, exhibiting a binary accuracy of 99.07%.
The greatest advantage of VNNIM, however, is its run-time efficiency, as highlighted through a run-time analysis.
arXiv Detail & Related papers (2020-06-15T15:03:27Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into the conventional SISR algorithm and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z) - Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z) - Machine Learning for Performance-Aware Virtual Network Function
Placement [3.5558885788605323]
We develop a machine learning decision tree model that learns from the effective placement of the various Virtual Network Function instances forming a Service Function Chain (SFC).
The model takes several performance-related features from the network as an input and selects the placement of the various VNF instances on network servers with the objective of minimizing the delay between dependent VNF instances.
arXiv Detail & Related papers (2020-01-13T14:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.