Transfer Learning Based Efficient Traffic Prediction with Limited
Training Data
- URL: http://arxiv.org/abs/2205.04344v1
- Date: Mon, 9 May 2022 14:44:39 GMT
- Authors: Sajal Saha, Anwar Haque, and Greg Sidebottom
- Abstract summary: Efficient prediction of internet traffic is an essential part of Self Organizing Network (SON) for ensuring proactive management.
Deep sequence model in network traffic prediction with limited training data has not been studied extensively in the current works.
We investigated and evaluated the performance of the deep transfer learning technique in traffic prediction with inadequate historical data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient prediction of internet traffic is an essential part of Self
Organizing Network (SON) for ensuring proactive management. There are many
existing solutions for internet traffic prediction with higher accuracy using
deep learning. But designing individual predictive models for each service
provider in the network is challenging due to data heterogeneity, scarcity, and
abnormality. Moreover, the performance of the deep sequence model in network
traffic prediction with limited training data has not been studied extensively
in the current works. In this paper, we investigated and evaluated the
performance of the deep transfer learning technique in traffic prediction with
inadequate historical data leveraging the knowledge of our pre-trained model.
First, we used a comparatively larger real-world traffic dataset for source
domain prediction based on five different deep sequence models: Recurrent
Neural Network (RNN), Long Short-Term Memory (LSTM), LSTM Encoder-Decoder
(LSTM_En_De), LSTM_En_De with Attention layer (LSTM_En_De_Atn), and Gated
Recurrent Unit (GRU). Then, the two best-performing models, LSTM_En_De and
LSTM_En_De_Atn, from the source domain, with accuracies of 96.06% and 96.05%,
are considered for the target domain prediction. Finally, four smaller traffic
datasets collected for four particular sources and destination pairs are used
in the target domain to compare the performance of standard learning and
transfer learning in terms of accuracy and execution time. According to our
experimental results, transfer learning reduces execution time in most cases,
while the model's accuracy under transfer learning improves with longer
training sessions.
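The source-to-target workflow described above (pre-train on a data-rich domain, then reuse the learned parameters where history is scarce) can be sketched with a deliberately tiny model. The synthetic series, the learning rate, and the choice to freeze the slope are illustrative assumptions, not the paper's actual LSTM setup or dataset:

```python
import random

random.seed(0)

def make_series(n, slope, bias, noise):
    # Normalized time x in [0, 1) with a linear trend: a stand-in for
    # real traffic telemetry, NOT the paper's dataset.
    return [(t / n, slope * (t / n) + bias + random.uniform(-noise, noise))
            for t in range(n)]

def train(data, w, b, lr=0.1, epochs=500, freeze_w=False):
    # One-feature linear predictor fit by SGD. freeze_w=True mimics
    # transfer learning: keep the pre-trained slope, adapt only the bias.
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            if not freeze_w:
                w -= lr * err * x
            b -= lr * err
    return w, b

def mse(data, w, b):
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

# Source domain: plenty of history.
source = make_series(200, slope=4.0, bias=1.0, noise=0.3)
w_src, b_src = train(source, 0.0, 0.0)

# Target domain: same trend, shifted level, only 8 observations.
target = [(t / 200, 4.0 * (t / 200) + 2.0) for t in range(8)]
w_scratch, b_scratch = train(target, 0.0, 0.0)           # standard learning
w_tl, b_tl = train(target, w_src, b_src, freeze_w=True)  # transfer learning

# Compare on unseen target-domain points beyond the 8 samples.
holdout = [(t / 200, 4.0 * (t / 200) + 2.0) for t in range(8, 40)]
print(mse(holdout, w_scratch, b_scratch) > mse(holdout, w_tl, b_tl))
```

With the slope frozen, only the intercept is re-estimated from the eight target points, which is the intuition behind both the reduced execution time and the accuracy gain the abstract reports for data-scarce targets.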
Related papers
- Multi-Scale Convolutional LSTM with Transfer Learning for Anomaly Detection in Cellular Networks [1.1432909951914676]
This study introduces a novel approach Multi-Scale Convolutional LSTM with Transfer Learning (TL) to detect anomalies in cellular networks.
The model is initially trained from scratch using a publicly available dataset to learn typical network behavior.
We compare the performance of the model trained from scratch with that of the fine-tuned model using TL.
arXiv Detail & Related papers (2024-09-30T17:51:54Z)
- Overcoming Data Limitations in Internet Traffic Forecasting: LSTM Models with Transfer Learning and Wavelet Augmentation [1.9662978733004601]
Effective internet traffic prediction in smaller ISP networks is challenged by limited data availability.
This paper explores this issue using transfer learning and data augmentation techniques with two LSTM-based models, LSTMSeq2Seq and LSTMSeq2SeqAtn.
The datasets represent real internet traffic telemetry, offering insights into diverse traffic patterns across different network domains.
arXiv Detail & Related papers (2024-09-20T03:18:20Z)
- TPLLM: A Traffic Prediction Framework Based on Pretrained Large Language Models [27.306180426294784]
We introduce TPLLM, a novel traffic prediction framework leveraging Large Language Models (LLMs).
In this framework, we construct a sequence embedding layer based on Convolutional Neural Networks (CNNs) and a graph embedding layer based on Graph Convolutional Networks (GCNs) to extract sequence features and spatial features.
Experiments on two real-world datasets demonstrate commendable performance in both full-sample and few-shot prediction scenarios.
arXiv Detail & Related papers (2024-03-04T17:08:57Z)
- Predicting Traffic Flow with Federated Learning and Graph Neural Networks with Asynchronous Computations [0.0]
We present a novel deep-learning method called Federated Learning and Asynchronous Graph Convolutional Networks (FLAGCN).
Our framework incorporates the principles of asynchronous graph convolutional networks with federated learning to enhance accuracy and efficiency of real-time traffic flow prediction.
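The federated ingredient of such a framework can be illustrated with a toy federated-averaging round. The scalar regressor and the three-client split below are invented for illustration and bear no relation to FLAGCN's actual graph model:

```python
def local_step(w, data, lr=0.1, epochs=50):
    # Each client fits its own copy of the weight on private traffic data.
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

def fed_avg(w_global, client_datasets):
    # Only model weights travel to the server; raw data never leaves a client.
    local = [local_step(w_global, d) for d in client_datasets]
    return sum(local) / len(local)

# Three hypothetical "regional ISP" clients with slightly different trends.
clients = [[(x / 10, s * x / 10) for x in range(1, 11)] for s in (1.8, 2.0, 2.2)]

w = 0.0
for _ in range(5):
    w = fed_avg(w, clients)
print(round(w, 2))  # prints 2.0: the average of the per-client trends
```

Each round, clients adapt the shared weight locally and the server averages the results, so the global model tracks the consensus trend without centralizing any telemetry.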
arXiv Detail & Related papers (2024-01-05T09:36:42Z)
- Optimal transfer protocol by incremental layer defrosting [66.76153955485584]
Transfer learning is a powerful tool enabling model training with limited amounts of data.
The simplest transfer learning protocol is based on "freezing" the feature-extractor layers of a network pre-trained on a data-rich source task.
We show that this protocol is often sub-optimal and the largest performance gain may be achieved when smaller portions of the pre-trained network are kept frozen.
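The freezing knob this protocol varies can be sketched in plain Python, using a chain of scalar affine "layers" as a hypothetical stand-in for a real network; the pre-trained weights and target data below are invented:

```python
def finetune(layers, data, n_frozen, lr=0.01, epochs=200):
    # Fine-tune a chain of 1-D affine layers y = w*x + b, keeping the first
    # n_frozen layers fixed -- the knob the defrosting protocol varies.
    layers = [list(layer) for layer in layers]  # copy; callers compare settings
    for _ in range(epochs):
        for x, t in data:
            acts = [x]                      # cache each layer's input
            for w, b in layers:
                acts.append(w * acts[-1] + b)
            grad = acts[-1] - t             # dLoss/dOutput for squared error
            for i in range(len(layers) - 1, -1, -1):
                w, b = layers[i]
                if i >= n_frozen:           # thaw only layers past the prefix
                    layers[i][0] -= lr * grad * acts[i]
                    layers[i][1] -= lr * grad
                grad *= w                   # propagate to the layer below
    return layers

pretrained = [[1.5, 0.2], [0.8, -0.1]]      # invented source-task weights
target = [(x / 10, 0.3 * (x / 10) + 1.0) for x in range(10)]

frozen_run = finetune(pretrained, target, n_frozen=1)
defrosted = finetune(pretrained, target, n_frozen=0)
print(frozen_run[0] == [1.5, 0.2])   # frozen feature layer untouched
print(defrosted[0] != [1.5, 0.2])    # defrosting lets it adapt as well
```

Sweeping `n_frozen` from all layers down to none is exactly the incremental-defrosting search the paper advocates over the all-frozen default.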
arXiv Detail & Related papers (2023-03-02T17:32:11Z)
- Online Evolutionary Neural Architecture Search for Multivariate
Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
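In the same online spirit, an evolutionary search loop can be sketched by evolving a single design choice of a forecaster. The sawtooth series and the moving-average "architecture" below are illustrative stand-ins for the RNNs ONE-NAS actually evolves:

```python
import random

random.seed(2)

# Synthetic daily-periodic "traffic" series (invented for illustration).
series = [10 + 5 * ((t % 24) / 24) for t in range(200)]

def forecast_error(order, series):
    # One-step-ahead MSE of a mean-of-last-`order`-values forecaster.
    errs = [(sum(series[t - order:t]) / order - series[t]) ** 2
            for t in range(order, len(series))]
    return sum(errs) / len(errs)

def mutate(order):
    # Random perturbation of the design parameter, kept >= 1.
    return max(1, order + random.choice([-2, -1, 1, 2]))

# (1+1) evolutionary loop: mutate the design, keep it only if it forecasts better.
best = 12
for _ in range(30):
    child = mutate(best)
    if forecast_error(child, series) < forecast_error(best, series):
        best = child
print(best, forecast_error(best, series))
```

Because candidates are accepted only when they beat the incumbent on the observed series, the evolved design is never worse than the starting one, which is the property that lets such searches run online.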
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Learning to Transfer for Traffic Forecasting via Multi-task Learning [3.1836399559127218]
Deep neural networks have demonstrated superior performance in short-term traffic forecasting.
Traffic4cast is the first competition of its kind dedicated to assessing the robustness of traffic forecasting models towards domain shifts in space and time.
We present a multi-task learning framework for temporal and spatio-temporal domain adaptation of traffic forecasting models.
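Hard parameter sharing, the simplest form of such multi-task adaptation, can be sketched as one shared trend with per-domain heads; the two "cities" and their data below are invented for illustration:

```python
tasks = {
    "city_a": [(x / 10, 2 * x / 10 + 1.0) for x in range(20)],  # data-rich
    "city_b": [(x / 10, 2 * x / 10 + 3.0) for x in range(3)],   # data-scarce
}

w = 0.0                                # slope shared across all tasks
heads = {name: 0.0 for name in tasks}  # one task-specific bias head each

for _ in range(500):
    for name, data in tasks.items():
        for x, y in data:
            err = w * x + heads[name] - y
            w -= 0.05 * err * x        # shared parameter sees every task
            heads[name] -= 0.05 * err  # head sees only its own task

# The scarce city borrows the trend learned mostly from the rich one.
print(round(w, 1), round(heads["city_b"], 1))
```

The shared slope is estimated almost entirely from the data-rich city, so the scarce city needs only enough samples to fit its own offset, which is the adaptation effect multi-task forecasting frameworks exploit.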
arXiv Detail & Related papers (2021-11-27T03:16:40Z)
- Self-Supervised Pre-Training for Transformer-Based Person
Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
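A first-order meta-learning update of the kind used to warm-start such predictors can be sketched with a Reptile-style loop; the scalar tasks below are invented and unrelated to the IEEE bus systems mentioned above:

```python
def sgd(w, data, lr=0.1, steps=20):
    # Inner-loop adaptation on one task's data.
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

def reptile(w_meta, tasks, meta_lr=0.5, rounds=20):
    # Reptile-style meta-update: nudge the shared initialization toward
    # each task's adapted weights, so new tasks need little fine-tuning.
    for _ in range(rounds):
        for data in tasks:
            w_task = sgd(w_meta, data)
            w_meta += meta_lr * (w_task - w_meta)
    return w_meta

# Three related tasks (e.g. different "topologies"): slopes 1.5, 2.0, 2.5.
tasks = [[(x / 10, s * x / 10) for x in range(1, 11)] for s in (1.5, 2.0, 2.5)]
w0 = reptile(0.0, tasks)
print(1.5 < w0 < 2.5)
```

The meta-trained initialization lands between the individual task solutions, so adapting to a new, related task (say, a reconfigured topology) starts much closer to its optimum than training from scratch would.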
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling
and Recommendation [111.44445634272235]
In this paper, we develop a parameter efficient transfer learning architecture, termed as PeterRec.
PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks.
We perform extensive experimental ablation to show the effectiveness of the learned user representation in five downstream tasks.
arXiv Detail & Related papers (2020-01-13T14:09:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.