Applying Graph-based Deep Learning To Realistic Network Scenarios
- URL: http://arxiv.org/abs/2010.06686v2
- Date: Mon, 15 Mar 2021 21:07:25 GMT
- Title: Applying Graph-based Deep Learning To Realistic Network Scenarios
- Authors: Miquel Ferriol-Galmés, José Suárez-Varela, Pere Barlet-Ros
and Albert Cabellos-Aparicio
- Abstract summary: This paper presents a new Graph-based deep learning model able to accurately estimate the per-path mean delay in networks.
The proposed model generalizes successfully to topologies, routing configurations, queue scheduling policies and traffic matrices unseen during the training phase.
- Score: 5.453745629140304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Machine Learning (ML) have shown great potential
for building data-driven solutions to a plethora of network-related problems. In
this context, building fast and accurate network models is essential to achieve
functional optimization tools for networking. However, state-of-the-art
ML-based techniques for network modelling cannot provide accurate estimates of
important performance metrics, such as delay or jitter, in realistic network
scenarios with sophisticated queue scheduling configurations. This paper
presents a new Graph-based deep learning model able to accurately estimate the
per-path mean delay in networks. The proposed model generalizes successfully to
topologies, routing configurations, queue scheduling policies and traffic
matrices unseen during the training phase.
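As a rough illustration of the idea behind such graph-based models, the sketch below implements a heavily simplified, hypothetical two-phase message-passing scheme between paths and links. It is plain Python, not the authors' actual architecture: the update rules, the averaging aggregation, the 0.1 damping factor and the readout are all arbitrary assumptions made for the sake of a runnable toy.

```python
def predict_path_delays(paths, link_capacity, traffic, T=4):
    """Toy path/link message passing.

    paths: list of paths, each a list of link indices.
    link_capacity: capacity of each link.
    traffic: offered traffic of each path.
    Returns a per-path mean-delay proxy (arbitrary units).
    """
    n_links = len(link_capacity)

    # Aggregate the offered load on each link from the paths crossing it.
    link_load = [0.0] * n_links
    for path, t in zip(paths, traffic):
        for l in path:
            link_load[l] += t

    # Initial link state: utilization (load / capacity).
    h_link = [load / cap for load, cap in zip(link_load, link_capacity)]

    for _ in range(T):
        # Path update: each path combines its own traffic with the
        # average state of the links it traverses.
        h_path = [t + sum(h_link[l] for l in path) / len(path)
                  for path, t in zip(paths, traffic)]

        # Link update: each link nudges its state using the states of the
        # paths that cross it (0.1 is an arbitrary damping factor).
        h_link = []
        for l in range(n_links):
            crossing = [h_path[i] for i, path in enumerate(paths) if l in path]
            mean_cross = sum(crossing) / len(crossing) if crossing else 0.0
            h_link.append(link_load[l] / link_capacity[l] + 0.1 * mean_cross)

    # Readout: per-path delay proxy as a sum of per-link contributions.
    return [sum(h_link[l] / link_capacity[l] for l in path) for path in paths]
```

The alternation between path updates and link updates mirrors the circular dependency in real networks: the state of a path depends on the links it traverses, while the load on a link depends on the paths that cross it.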
Related papers
- Towards a graph-based foundation model for network traffic analysis [3.0558245652654907]
Foundation models can grasp the complexities of network traffic dynamics and adapt to any specific task or environment with minimal fine-tuning.
Previous approaches have used tokenized hex-level packet data.
We propose a new, efficient graph-based alternative at the flow-level.
arXiv Detail & Related papers (2024-09-12T15:04:34Z)
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings can be easily changed by better training networks in benchmarks.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- Building a Graph-based Deep Learning network model from captured traffic traces [4.671648049111933]
State-of-the-art network models are based on, or depend on, Discrete Event Simulation (DES).
While DES is highly accurate, it is also computationally costly and cumbersome to parallelize, making it impractical to simulate high-performance networks.
We propose a Graph Neural Network (GNN)-based solution specifically designed to better capture the complexities of real network scenarios.
arXiv Detail & Related papers (2023-10-18T11:16:32Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- RouteNet-Fermi: Network Modeling with Graph Neural Networks [7.227467283378366]
We present RouteNet-Fermi, a custom Graph Neural Network (GNN) model that shares the same goals as Queueing Theory.
The proposed model accurately predicts the delay, jitter, and packet loss of a network.
Our experimental results show that RouteNet-Fermi achieves similar accuracy to computationally expensive packet-level simulators.
arXiv Detail & Related papers (2022-12-22T23:02:40Z)
- Open World Learning Graph Convolution for Latency Estimation in Routing Networks [16.228327606985257]
We propose a novel approach for modeling network routing, using Graph Neural Networks.
Our model maintains stable performance across different network sizes and configurations of routing networks, while also being able to extrapolate to unseen sizes, configurations, and user behavior.
We show that our model outperforms most conventional deep-learning-based models, in terms of prediction accuracy, computational resources, inference speed, as well as ability to generalize towards open-world input.
arXiv Detail & Related papers (2022-07-08T19:26:40Z)
- RouteNet-Erlang: A Graph Neural Network for Network Performance Evaluation [5.56275556529722]
We present RouteNet-Erlang, a pioneering GNN architecture designed to model computer networks.
RouteNet-Erlang supports complex traffic models, multi-queue scheduling policies and routing policies, and can provide accurate estimates.
We benchmark RouteNet-Erlang against a state-of-the-art QT model, and our results show that it outperforms QT in all network scenarios.
arXiv Detail & Related papers (2022-02-28T17:09:53Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
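Several of the models listed above (RouteNet-Fermi, RouteNet-Erlang) are benchmarked against Queueing Theory (QT) baselines. As a rough sketch of what such a baseline computes — here the textbook M/M/1 formula with Poisson arrivals and exponential service times, not the specific QT model used in those papers — per-path mean delay can be approximated as a sum of independent per-link queueing delays:

```python
def mm1_mean_delay(arrival_rate, service_rate):
    """Mean sojourn time W = 1 / (mu - lambda) of a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

def path_mean_delay(path_links, arrival_rate, service_rates):
    """Per-path delay approximated as a sum of independent per-link
    M/M/1 delays (the classic independence assumption of QT models)."""
    return sum(mm1_mean_delay(arrival_rate, service_rates[l])
               for l in path_links)
```

The Poisson-arrival and independence assumptions are precisely what break down under realistic traffic and multi-queue scheduling, which is the gap the GNN models above aim to close.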
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.