Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures
- URL: http://arxiv.org/abs/2404.05304v1
- Date: Mon, 8 Apr 2024 08:47:46 GMT
- Title: Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures
- Authors: Omran Ayoub, Davide Andreoletti, Aleksandra Knapińska, Róża Goścień, Piotr Lechowicz, Tiziano Leidi, Silvia Giordano, Cristina Rottondi, Krzysztof Walkowiak
- Abstract summary: Adapting to concept drift is a challenging task in machine learning.
In communication networks, such an issue emerges when performing traffic forecasting following a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
- Score: 37.66676003679306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adapting to concept drift is a challenging task in machine learning, which is usually tackled using incremental learning techniques that periodically re-fit a learning model leveraging newly available data. A primary limitation of these techniques is their reliance on substantial amounts of data for retraining. The necessity of acquiring fresh data introduces temporal delays prior to retraining, potentially rendering the models inaccurate if a sudden concept drift occurs between two consecutive retrainings. In communication networks, such an issue emerges when performing traffic forecasting following a failure event: post-failure re-routing may induce a drastic shift in the distribution and pattern of traffic data, thus requiring a timely model adaptation. In this work, we address this challenge for the problem of traffic forecasting and propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining. Through extensive simulations of failure scenarios, we compare the predictive performance of our proposed approach to that of a reference method based on incremental learning. Experimental results show that our proposed approach outperforms incremental learning-based methods in situations where the shifts in traffic patterns are drastic.
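To make the comparison concrete, below is a minimal, self-contained sketch (an illustration under stated assumptions, not the authors' implementation): a sliding-window autoregressive predictor that is periodically re-fit, contrasted with a single liquid-time-constant-style unit whose internal state reacts to an abrupt load shift without any weight update. The synthetic link-load signal, window sizes, and cell parameters are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): contrasts periodic
# incremental re-fitting with a liquid-time-constant-style cell whose state
# reacts to input changes without any weight update. The synthetic traffic
# signal, window sizes, and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic link-load series with an abrupt shift (e.g., post-failure re-routing).
t = np.arange(2000)
load = 0.6 + 0.2 * np.sin(2 * np.pi * t / 96) + 0.02 * rng.standard_normal(t.size)
load[1000:] += 0.5          # drastic distribution shift at t = 1000

# --- Incremental learning baseline: re-fit a linear autoregressive model
# --- on a sliding window every `retrain_every` steps.
def make_ar_dataset(series, lags=8):
    X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    return X, y

lags, window, retrain_every = 8, 300, 100
weights = None
incremental_preds = []
for step in range(window, len(load) - 1):
    if weights is None or (step - window) % retrain_every == 0:
        X, y = make_ar_dataset(load[step - window:step], lags)
        weights, *_ = np.linalg.lstsq(X, y, rcond=None)   # periodic re-fit
    incremental_preds.append(load[step - lags + 1:step + 1] @ weights)

# --- Adaptive alternative: a single liquid-time-constant-style unit.
# --- Its state follows dx/dt = -x/tau + f(w*u + b) * (A - x), so the state
# --- itself adapts to abrupt input changes while the weights stay fixed.
def ltc_step(x, u, tau=5.0, w=2.0, b=-1.0, A=1.5, dt=1.0):
    gate = np.tanh(w * u + b)
    return x + dt * (-(x / tau) + gate * (A - x))

x, ltc_states = 0.0, []
for u in load:
    x = ltc_step(x, u)
    ltc_states.append(x)       # state trajectory tracks the regime shift

print("last incremental prediction:", incremental_preds[-1])
print("last LTC state:", ltc_states[-1])
```

The sketch only illustrates the mechanism the abstract contrasts: the incremental baseline must wait for enough post-failure samples before its next re-fit, while the continuous-time state responds to the shift immediately.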
Related papers
- Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset [98.52916361979503]
We introduce a novel learning approach that automatically models and adapts to non-stationarity.
We show empirically that our approach performs well in non-stationary supervised and off-policy reinforcement learning settings.
arXiv Detail & Related papers (2024-11-06T16:32:40Z) - Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z) - EANet: Expert Attention Network for Online Trajectory Prediction [5.600280639034753]
Expert Attention Network is a complete online learning framework for trajectory prediction.
We introduce expert attention, which adjusts the weights of different depths of network layers, preventing the model from updating too slowly due to gradient problems.
Furthermore, we propose a short-term motion trend kernel function which is sensitive to scenario change, allowing the model to respond quickly.
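As an illustration of the depth-wise attention idea summarized above (hypothetical, not the EANet code), the following weights the outputs taken at different layer depths with a softmax, so shallow and deep features can be re-balanced quickly; all shapes and parameter values are assumptions.

```python
# Hypothetical sketch of attention over layer depths: outputs from every
# depth are kept and fused with softmax weights before the prediction head.
import numpy as np

rng = np.random.default_rng(1)

def mlp_layer(x, w, b):
    return np.tanh(x @ w + b)

d, depths = 16, 3
layer_ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(depths)]
layer_bs = [np.zeros(d) for _ in range(depths)]
head = rng.standard_normal((d, 2)) * 0.1             # e.g. a 2-D displacement

def expert_attention_forward(x, attn_logits):
    """attn_logits: one logit per depth; softmax gives per-depth weights."""
    outputs, h = [], x
    for w, b in zip(layer_ws, layer_bs):
        h = mlp_layer(h, w, b)
        outputs.append(h)                             # keep every depth's output
    a = np.exp(attn_logits - attn_logits.max())
    a = a / a.sum()                                   # attention over depths
    fused = sum(ai * oi for ai, oi in zip(a, outputs))
    return fused @ head

x = rng.standard_normal(d)
print(expert_attention_forward(x, attn_logits=np.array([0.2, 0.5, 1.0])))
```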
arXiv Detail & Related papers (2023-09-11T07:09:40Z) - NodeTrans: A Graph Transfer Learning Approach for Traffic Prediction [33.299309349152146]
We propose a novel transfer learning approach to solve the traffic prediction with few data.
First, a spatial-temporal graph neural network is proposed, which can capture the node-specific spatial-temporal traffic patterns of different road networks.
arXiv Detail & Related papers (2022-07-04T10:06:20Z) - Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes and retrieving similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z) - Data-Driven Traffic Assignment: A Novel Approach for Learning Traffic Flow Patterns Using a Graph Convolutional Neural Network [1.3706331473063877]
We present a novel data-driven approach for learning the traffic flow patterns of a transportation network.
We develop a neural network-based framework known as Graph Convolutional Neural Network (GCNN) to solve it.
When the training of the model is complete, it can instantly determine the traffic flows of a large-scale network.
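A minimal sketch of the kind of graph-convolution layer such a framework builds on, i.e. normalized-adjacency propagation H' = ReLU(A_hat H W); the toy road graph, feature sizes, and names below are assumptions, not the paper's GCNN.

```python
# Illustrative single graph-convolution layer over a toy road network;
# the adjacency, shapes, and random weights are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(3)

# Toy road network: 4 nodes, symmetric adjacency with self-loops.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float) + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt                   # normalized adjacency

H = rng.standard_normal((4, 8))                       # per-node demand features
W = rng.standard_normal((8, 1)) * 0.1                 # learned in practice

out = np.maximum(A_hat @ H @ W, 0.0)                  # one GCN layer with ReLU
print(out.ravel())                                    # predicted flow feature per node
```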
arXiv Detail & Related papers (2022-02-21T19:45:15Z) - Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Learning to Transfer for Traffic Forecasting via Multi-task Learning [3.1836399559127218]
Deep neural networks have demonstrated superior performance in short-term traffic forecasting.
Traffic4cast is the first competition of its kind dedicated to assessing the robustness of traffic forecasting models towards domain shifts in space and time.
We present a multi-task learning framework for temporal and spatio-temporal domain adaptation of traffic forecasting models.
arXiv Detail & Related papers (2021-11-27T03:16:40Z) - On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves robustness to distributional shifts.
arXiv Detail & Related papers (2020-07-16T18:39:04Z) - New Perspectives on the Use of Online Learning for Congestion Level Prediction over Traffic Data [6.664111208927475]
This work focuses on classification over time series data.
When a time series is generated by non-stationary phenomena, the pattern relating the series with the class to be predicted may evolve over time.
Online learning methods incrementally learn from new data samples arriving over time, and accommodate eventual changes along the data stream.
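A hedged sketch of the test-then-train (prequential) protocol such online methods follow, using scikit-learn's partial_fit for per-sample incremental updates; the synthetic concept drift and all parameter choices are assumptions, not the paper's setup.

```python
# Illustrative online learning over a non-stationary stream: the classifier
# is updated sample-by-sample with partial_fit, so it can follow a drifting
# relationship between the features and the label.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
clf = SGDClassifier()
classes = np.array([0, 1])

n_steps, drift_at = 4000, 2000
correct = 0
for step in range(n_steps):
    x = rng.standard_normal((1, 4))
    # Concept drift: the decision rule flips halfway through the stream.
    label = int(x[0, 0] + x[0, 1] > 0) if step < drift_at else int(x[0, 0] - x[0, 1] > 0)
    y = np.array([label])
    if step > 0:
        correct += int(clf.predict(x)[0] == label)    # test-then-train protocol
    clf.partial_fit(x, y, classes=classes)            # incremental update
print("prequential accuracy:", correct / (n_steps - 1))
```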
arXiv Detail & Related papers (2020-03-27T09:44:57Z)