A Transformer-Based Deep Q-Learning Approach for Dynamic Load Balancing in Software-Defined Networks
- URL: http://arxiv.org/abs/2501.12829v1
- Date: Wed, 22 Jan 2025 12:16:30 GMT
- Title: A Transformer-Based Deep Q-Learning Approach for Dynamic Load Balancing in Software-Defined Networks
- Authors: Evans Tetteh Owusu, Kwame Agyemang-Prempeh Agyekum, Marinah Benneh, Pius Ayorna, Justice Owusu Agyemang, George Nii Martey Colley, James Dzisi Gazde
- Abstract summary: This study proposes a novel approach for dynamic load balancing in Software-Defined Networks (SDNs) using a Transformer-based Deep Q-Network (DQN).
Traditional load balancing mechanisms, such as Round Robin (RR) and Weighted Round Robin (WRR), are static and often struggle to adapt to fluctuating traffic conditions, leading to inefficiencies in network performance.
SDNs offer centralized control and flexibility, providing an ideal platform for implementing machine learning-driven optimization strategies.
- Abstract: This study proposes a novel approach for dynamic load balancing in Software-Defined Networks (SDNs) using a Transformer-based Deep Q-Network (DQN). Traditional load balancing mechanisms, such as Round Robin (RR) and Weighted Round Robin (WRR), are static and often struggle to adapt to fluctuating traffic conditions, leading to inefficiencies in network performance. In contrast, SDNs offer centralized control and flexibility, providing an ideal platform for implementing machine learning-driven optimization strategies. The core of this research combines a Temporal Fusion Transformer (TFT) for accurate traffic prediction with a DQN model to perform real-time dynamic load balancing. The TFT model predicts future traffic loads, which the DQN uses as input, allowing it to make intelligent routing decisions that optimize throughput, minimize latency, and reduce packet loss. The proposed model was tested against RR and WRR in simulated environments with varying data rates, and the results demonstrate significant improvements in network performance. For the 500MB data rate, the DQN model achieved an average throughput of 0.275 compared to 0.202 and 0.205 for RR and WRR, respectively. Additionally, the DQN recorded lower average latency and packet loss. In the 1000MB simulation, the DQN model outperformed the traditional methods in throughput, latency, and packet loss, reinforcing its effectiveness in managing network loads dynamically. This research presents an important step towards enhancing network performance through the integration of machine learning models within SDNs, potentially paving the way for more adaptive, intelligent network management systems.
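The abstract does not give implementation details, so the following is a minimal sketch of how the described pipeline could be wired together: a Temporal Fusion Transformer forecast of per-path loads serves as the DQN state, and the DQN's action is the path selected for the next flow. The network sizes, hyperparameters, replay handling, and reward shaping below are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed, not the authors' code): DQN routing over TFT-predicted loads.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_PATHS = 4        # candidate paths managed by the SDN controller (assumed)
PRED_HORIZON = 8     # number of TFT forecast steps fed to the agent (assumed)
STATE_DIM = NUM_PATHS * PRED_HORIZON
GAMMA, EPSILON = 0.99, 0.1


class QNetwork(nn.Module):
    """Maps a flattened window of predicted path loads to one Q-value per path."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


q_net = QNetwork(STATE_DIM, NUM_PATHS)
target_net = QNetwork(STATE_DIM, NUM_PATHS)
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
# Replay buffer of (flattened_state, action, reward, flattened_next_state).
replay = deque(maxlen=10_000)


def select_path(predicted_loads: torch.Tensor) -> int:
    """Epsilon-greedy routing decision from a (NUM_PATHS x PRED_HORIZON) TFT forecast."""
    if random.random() < EPSILON:
        return random.randrange(NUM_PATHS)
    with torch.no_grad():
        return int(q_net(predicted_loads.flatten()).argmax())


def train_step(batch_size: int = 32) -> None:
    """One DQN update: move Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([s for s, _, _, _ in batch])
    actions = torch.tensor([a for _, a, _, _ in batch])
    rewards = torch.tensor([r for _, _, r, _ in batch], dtype=torch.float32)
    next_states = torch.stack([s2 for _, _, _, s2 in batch])

    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_pred, rewards + GAMMA * q_next)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup, the reward would be shaped to reflect the paper's objectives, for example rewarding measured throughput while penalizing latency and packet loss on the chosen path, with the target network synced to q_net periodically.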
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Reservoir computing for system identification and predictive control with limited data [3.1484174280822845]
We assess the ability of RNN variants to both learn the dynamics of benchmark control systems and serve as surrogate models for model predictive control (MPC).
We find that echo state networks (ESNs) have a variety of benefits over competing architectures, namely reductions in computational complexity, longer valid prediction times, and reductions in cost of the MPC objective function.
arXiv Detail & Related papers (2024-10-23T21:59:07Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- A Deep Reinforcement Learning Approach for Adaptive Traffic Routing in Next-gen Networks [1.1586742546971471]
Next-gen networks require automation and must adaptively adjust network configuration based on traffic dynamics.
Traditional techniques for deciding traffic policies are usually based on hand-crafted programming, optimization, and algorithms.
We develop a deep reinforcement learning (DRL) approach for adaptive traffic routing.
arXiv Detail & Related papers (2024-02-07T01:48:29Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Latency-aware Unified Dynamic Networks for Efficient Image Recognition [72.8951331472913]
LAUDNet is a framework to bridge the theoretical and practical efficiency gap in dynamic networks.
It integrates three primary dynamic paradigms: spatially adaptive computation, dynamic layer skipping, and dynamic channel skipping.
It can notably reduce the latency of models like ResNet by over 50% on platforms such as V100, 3090, and TX2 GPUs.
arXiv Detail & Related papers (2023-08-30T10:57:41Z)
- An Intelligent SDWN Routing Algorithm Based on Network Situational Awareness and Deep Reinforcement Learning [4.085916808788356]
This article introduces an intelligent routing algorithm (DRL-PPONSA) based on deep reinforcement learning with network situational awareness.
Experimental results show that DRL-PPONSA outperforms traditional routing methods in network throughput, delay, packet loss rate, and wireless node distance.
arXiv Detail & Related papers (2023-05-12T14:18:09Z)
- RAMP-Net: A Robust Adaptive MPC for Quadrotors via Physics-informed Neural Network [6.309365332210523]
We propose a Robust Adaptive MPC framework via PINNs (RAMP-Net), which uses a neural network trained partly from simple ODEs and partly from data.
We report 7.8% to 43.2% and 8.04% to 61.5% reduction in tracking errors for speeds ranging from 0.5 to 1.75 m/s compared to two SOTA regression-based MPC methods.
arXiv Detail & Related papers (2022-09-19T16:11:51Z)
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers [105.74546828182834]
We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs with diverse difficulty levels.
We present dynamic slimmable network (DS-Net) and dynamic slice-able network (DS-Net++) by input-dependently adjusting filter numbers of CNNs and multiple dimensions in both CNNs and transformers.
arXiv Detail & Related papers (2021-09-21T09:57:21Z)
- Dynamic Slimmable Network [105.74546828182834]
We develop a dynamic network slimming regime named Dynamic Slimmable Network (DS-Net).
Our DS-Net is empowered with the ability of dynamic inference by the proposed double-headed dynamic gate.
It consistently outperforms its static counterparts as well as state-of-the-art static and dynamic model compression methods.
arXiv Detail & Related papers (2021-03-24T15:25:20Z)