Predicting Bandwidth Utilization on Network Links Using Machine Learning
- URL: http://arxiv.org/abs/2112.02417v1
- Date: Sat, 4 Dec 2021 19:47:41 GMT
- Title: Predicting Bandwidth Utilization on Network Links Using Machine Learning
- Authors: Maxime Labonne, Charalampos Chatzinakis, Alexis Olivereau
- Abstract summary: We present a solution to predict the bandwidth utilization between different network links with very high accuracy.
A simulated network is created to collect data related to the performance of the network links on every interface.
We show that the proposed solution can be used in real time with a reaction managed by a Software-Defined Networking (SDN) platform.
- Score: 0.966840768820136
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Predicting the bandwidth utilization on network links can be extremely useful
for detecting congestion in order to correct it before it occurs. In this
paper, we present a solution to predict the bandwidth utilization between
different network links with very high accuracy. A simulated network is
created to collect data related to the performance of the network links on
every interface. These data are processed and expanded with feature engineering
in order to create a training set. We evaluate and compare three types of
machine learning algorithms, namely ARIMA (AutoRegressive Integrated Moving
Average), MLP (Multi Layer Perceptron) and LSTM (Long Short-Term Memory), in
order to predict the future bandwidth consumption. The LSTM outperforms ARIMA
and MLP with very accurate predictions, rarely exceeding a 3% error (versus
40% for ARIMA and 20% for the MLP). We then show that the proposed solution can be
used in real time with a reaction managed by a Software-Defined Networking
(SDN) platform.
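For illustration, here is a minimal sketch of the kind of next-step LSTM forecaster the abstract describes, written in Keras. The window length, layer sizes, and the synthetic single-link utilization series are assumptions made for this sketch, not the authors' setup; the paper builds its training set from a simulated network with feature engineering across every interface.

```python
# A minimal sketch (not the authors' exact model) of next-step bandwidth
# prediction with an LSTM. Window length, layer sizes, and the synthetic
# utilization series below are assumptions made for illustration.
import numpy as np
import tensorflow as tf

WINDOW = 30  # assumed look-back: predict the next sample from the last 30

def make_windows(series, window=WINDOW):
    """Turn a 1-D utilization series into (window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

# Stand-in for one interface's utilization (fraction of link capacity in [0, 1]);
# the paper collects such series from every interface of a simulated network.
t = np.arange(5000, dtype=np.float32)
series = (0.5 + 0.3 * np.sin(2 * np.pi * t / 288)
          + 0.05 * np.random.randn(5000).astype(np.float32))
series = np.clip(series, 0.0, 1.0)

X, y = make_windows(series)
split = int(0.8 * len(X))  # chronological split: train on the past, test on the future

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X[:split], y[:split], epochs=5, batch_size=64, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
print("test MAE:", float(np.abs(pred - y[split:]).mean()))
```

For the comparison described in the abstract, an ARIMA baseline could be fitted on the same raw series with statsmodels (statsmodels.tsa.arima.model.ARIMA), and an MLP baseline by feeding the same flattened windows into a dense network.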
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve a 1.45-9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- ConvLSTMTransNet: A Hybrid Deep Learning Approach for Internet Traffic Telemetry [0.0]
We present a novel hybrid deep learning model, named ConvLSTMTransNet, designed for time series prediction.
Our findings demonstrate that ConvLSTMTransNet significantly outperforms the baseline models by approximately 10% in terms of prediction accuracy.
arXiv Detail & Related papers (2024-09-20T03:12:57Z)
- Switching in the Rain: Predictive Wireless x-haul Network Reconfiguration [17.891837432766764]
Wireless x-haul networks rely on microwave and millimeter-wave links between 4G and/or 5G base stations to support ultra-high data rates and ultra-low latency.
Precipitation may cause severe signal attenuation, which significantly degrades network performance.
We develop a Predictive Network Reconfiguration framework that uses historical data to predict the future condition of each link and then prepares the network ahead of time for imminent disturbances.
arXiv Detail & Related papers (2022-03-07T13:40:38Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- A Learning-Based Fast Uplink Grant for Massive IoT via Support Vector Machines and Long Short-Term Memory [8.864453148536057]
Massive IoT introduced the need to use fast uplink grant (FUG) allocation in order to reduce latency and increase reliability for smart internet-of-things (mMTC) applications.
We propose a novel FUG allocation based on a support vector machine (SVM) scheduler.
An LSTM architecture is then used for traffic prediction, with correction techniques to overcome prediction errors.
arXiv Detail & Related papers (2021-08-02T11:33:02Z)
- Throughput-Optimal Topology Design for Cross-Silo Federated Learning [13.922754427601493]
Federated learning usually employs a client-server architecture where an orchestrator iteratively aggregates model updates from remote clients and pushes a refined model back to them.
This approach may be inefficient in cross-silo settings, as close-by data silos with high-speed access links may exchange information faster than with the orchestrator.
We propose practical algorithms that find a topology with the largest throughput or with provable throughput guarantees.
arXiv Detail & Related papers (2020-10-23T08:28:29Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA Based MEC [21.267954799102874]
Federated learning (FL) is a widely pursued machine learning technique that can train a model centrally while keeping data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
arXiv Detail & Related papers (2020-06-21T23:07:41Z)
- Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning [73.82875010696849]
Machine learning algorithms are deployed at the network edge for training artificial intelligence (AI) models.
This paper focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation.
arXiv Detail & Related papers (2020-03-10T05:52:15Z)
- Toward fast and accurate human pose estimation via soft-gated skip connections [97.06882200076096]
This paper is on highly accurate and highly efficient human pose estimation.
We re-analyze this design choice in the context of improving both the accuracy and the efficiency over the state-of-the-art.
Our model achieves state-of-the-art results on the MPII and LSP datasets.
arXiv Detail & Related papers (2020-02-25T18:51:51Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks; a toy sketch of the alignment idea appears after this list.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
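As referenced in the last entry, here is a toy sketch of the neuron-alignment idea behind layer-wise model fusion. This is not the paper's algorithm: it uses a hard assignment (scipy's linear_sum_assignment) as the simplest special case of optimal transport, and it fuses a single layer only.

```python
# A toy sketch of layer-wise fusion by neuron alignment. This is NOT the
# paper's algorithm: it uses a hard assignment (linear_sum_assignment) as
# the simplest special case of optimal transport, on one layer only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(w_a, w_b):
    """Align model B's output neurons to model A's, then average the weights.
    w_a, w_b: (out_units, in_units) weight matrices of the same layer."""
    # Cost = squared distance between every pair of neurons' incoming weights.
    cost = ((w_a[:, None, :] - w_b[None, :, :]) ** 2).sum(axis=-1)
    _, col = linear_sum_assignment(cost)  # col[i] = B-neuron matched to A-neuron i
    # NOTE: in a multi-layer net, the same permutation must also be applied
    # to the next layer's incoming weights of model B before fusing it.
    return 0.5 * (w_a + w_b[col])

rng = np.random.default_rng(0)
w_a = rng.normal(size=(4, 8))
w_b = w_a[[2, 0, 3, 1]] + 0.01 * rng.normal(size=(4, 8))  # permuted, noisy copy
fused = fuse_layer(w_a, w_b)
print(np.abs(fused - w_a).max())  # small: the alignment undoes the permutation
```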