New Perspectives on the Use of Online Learning for Congestion Level
Prediction over Traffic Data
- URL: http://arxiv.org/abs/2003.14304v1
- Date: Fri, 27 Mar 2020 09:44:57 GMT
- Title: New Perspectives on the Use of Online Learning for Congestion Level
Prediction over Traffic Data
- Authors: Eric L. Manibardo, Ibai Laña, Jesus L. Lobo and Javier Del Ser
- Abstract summary: This work focuses on classification over time series data.
When a time series is generated by non-stationary phenomena, the pattern relating the series to the class to be predicted may evolve over time.
Online learning methods incrementally learn from new data samples as they arrive over time, and accommodate changes that may occur along the data stream.
- Score: 6.664111208927475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work focuses on classification over time series data. When a time series is generated by non-stationary phenomena, the pattern relating the series to the class to be predicted may evolve over time (concept drift). Consequently, predictive models aimed at learning this pattern may eventually become obsolete, failing to sustain performance levels of practical use. To overcome this model degradation, online learning methods incrementally learn from new data samples as they arrive over time, and accommodate changes along the data stream by implementing assorted concept drift adaptation strategies. In this manuscript we elaborate on the suitability of online learning methods for predicting the road congestion level from traffic speed time series data. We draw interesting insights on the performance degradation observed when the forecasting horizon is increased. In contrast to most of the literature, we provide evidence of the importance of assessing the distribution of classes over time before designing and tuning the learning model, as this preliminary exercise may hint at the predictability of the different congestion levels under target. Experimental results are discussed over real traffic speed data captured by inductive loops deployed in Seattle (USA). Several online learning methods are analyzed, from traditional incremental learning algorithms to more elaborate deep learning models. As the reported results show, when the prediction horizon increases, the performance of all models degrades severely due to the distribution of classes over time, which supports our claim about the importance of analyzing this distribution prior to designing the model.
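To make the workflow concrete, below is a minimal Python sketch of the two steps the abstract argues for: first inspecting how the class distribution evolves over time, then training an incremental classifier prequentially (test-then-train). The synthetic speed series, the three-level congestion labeling, the lag/horizon choices, and the use of SGDClassifier are illustrative assumptions, not the authors' exact experimental setup.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic traffic speeds (mph) with a slow drift, as a stand-in for the
# Seattle inductive-loop data; congestion classes from assumed thresholds.
T = 5000
speed = 60 - 0.004 * np.arange(T) + rng.normal(0, 5, T)
labels = np.digitize(speed, [35, 50])   # 0=congested, 1=slow, 2=free flow

# Step 1 (the paper's recommendation): inspect how the class distribution
# evolves over time before designing/tuning the model.
window = 1000
for start in range(0, T, window):
    counts = np.bincount(labels[start:start + window], minlength=3)
    print(f"t={start:4d}..{start + window}: congested/slow/free = {counts.tolist()}")

# Step 2: online learning. Build lagged-speed features for horizon h and
# update the model one sample at a time with partial_fit (test-then-train).
lags, h = 8, 4                          # 8 past readings, predict 4 steps ahead
classes = np.array([0, 1, 2])
model = SGDClassifier(loss="log_loss")

correct = total = 0
for t in range(lags, T - h):
    x = speed[t - lags:t].reshape(1, -1)
    y = labels[t + h:t + h + 1]         # congestion class h steps ahead
    if total > 0:                       # predict before updating
        correct += int(model.predict(x)[0] == y[0])
    model.partial_fit(x, y, classes=classes)
    total += 1

print(f"prequential accuracy at horizon {h}: {correct / (total - 1):.3f}")
```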
Related papers
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
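The entry above describes a policy learned with reinforcement learning that decides how to augment each series. A minimal REINFORCE-style sketch of that general idea follows; the set of augmentations, the policy architecture, and the reward definition are illustrative assumptions, not AutoTSAug itself.

```python
import torch
import torch.nn as nn

# Candidate augmentations for a length-64 series (assumed for illustration).
AUGS = [
    lambda x: x + 0.05 * torch.randn_like(x),     # jitter
    lambda x: x * (1 + 0.1 * torch.randn(1)),     # random scaling
    lambda x: torch.roll(x, shifts=3, dims=-1),   # time shift
]

policy = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, len(AUGS)))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(series, reward_fn):
    """Sample an augmentation, apply it, and reinforce by the reward."""
    logits = policy(series)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    augmented = AUGS[action.item()](series)
    reward = reward_fn(augmented)       # e.g. forecaster's validation gain
    loss = -dist.log_prob(action) * reward
    opt.zero_grad(); loss.backward(); opt.step()
    return augmented

# Dummy reward that favors keeping the series mean stable.
x = torch.randn(64)
reinforce_step(x, lambda aug: -(aug.mean() - x.mean()).abs().detach())
```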
- Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures [37.66676003679306]
Adapting to concept drift is a challenging task in machine learning.
In communication networks, such an issue emerges when performing traffic forecasting after a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
arXiv Detail & Related papers (2024-04-08T08:47:46Z)
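A liquid time-constant (LTC) cell is the building block behind liquid neural networks: the hidden state follows an ODE whose effective time constant depends on the current input, which is what gives the model its self-adaptation to abrupt pattern changes. The following is a simplified, Euler-integrated sketch of such a cell, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    def __init__(self, in_dim, hidden_dim, dt=0.1):
        super().__init__()
        self.gate = nn.Linear(in_dim + hidden_dim, hidden_dim)
        self.tau = nn.Parameter(torch.ones(hidden_dim))  # base time constants
        self.A = nn.Parameter(torch.zeros(hidden_dim))   # target/bias state
        self.dt = dt

    def forward(self, x, h):
        # Input-dependent conductance f modulates both decay and drive,
        # making the effective time constant depend on the data.
        f = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        dhdt = -(1.0 / self.tau.abs().clamp(min=1e-2) + f) * h + f * self.A
        return h + self.dt * dhdt                        # explicit Euler step

# Unroll over a (batch, time, features) link-load sequence.
cell = LTCCell(in_dim=4, hidden_dim=16)
seq = torch.randn(8, 50, 4)
h = torch.zeros(8, 16)
for t in range(seq.size(1)):
    h = cell(seq[:, t, :], h)
print(h.shape)  # torch.Size([8, 16])
```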
- Online Test-Time Adaptation of Spatial-Temporal Traffic Flow Forecasting [13.770733370640565]
This paper conducts the first study of online test-time adaptation techniques for spatial-temporal traffic flow forecasting problems.
We propose an Adaptive Double Correction by Series Decomposition (ADCSD) method, which first decomposes the output of the trained model into seasonal and trend-cyclical parts.
In the proposed ADCSD method, instead of fine-tuning the whole trained model during the testing phase, a lite network is attached after the trained model, and only this lite network is fine-tuned each time a new data entry is observed.
arXiv Detail & Related papers (2024-01-08T12:04:39Z)
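The ADCSD recipe above can be sketched compactly: decompose the frozen model's output with a moving average into trend-cyclical and seasonal parts, attach a small correction network for each part, and fine-tune only that network as test samples arrive. Everything below (decomposition kernel, corrector layout, optimizer) is an assumption for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def decompose(y, kernel=5):
    """Moving-average trend-cyclical part; the residual is the seasonal part."""
    pad = kernel // 2
    trend = F.avg_pool1d(F.pad(y.unsqueeze(1), (pad, pad), mode="replicate"),
                         kernel, stride=1).squeeze(1)
    return trend, y - trend

class LiteCorrector(nn.Module):
    def __init__(self, horizon):
        super().__init__()
        self.trend_fc = nn.Linear(horizon, horizon)
        self.season_fc = nn.Linear(horizon, horizon)

    def forward(self, y_hat):
        trend, season = decompose(y_hat)
        return self.trend_fc(trend) + self.season_fc(season)

horizon = 12
frozen_model = lambda x: x[..., -horizon:]   # stand-in for the trained model
corrector = LiteCorrector(horizon)
opt = torch.optim.SGD(corrector.parameters(), lr=1e-3)

# Online test phase: predict, observe ground truth, fine-tune the lite net only.
x, y_true = torch.randn(1, 48), torch.randn(1, horizon)
y_hat = frozen_model(x).detach()             # backbone stays frozen
loss = F.mse_loss(corrector(y_hat), y_true)
opt.zero_grad(); loss.backward(); opt.step()
```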
- STDA-Meta: A Meta-Learning Framework for Few-Shot Traffic Prediction [5.502177196766933]
We propose a novel spatial-temporal domain adaptation (STDA) method that learns transferable meta-knowledge from data-sufficient cities in an adversarial manner.
This learned meta-knowledge can improve prediction performance of data-scarce cities.
Specifically, we train the STDA model using a Model-Agnostic Meta-Learning (MAML) based episode learning process.
arXiv Detail & Related papers (2023-10-31T06:52:56Z)
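MAML-style episode learning, as referenced above, takes an inner gradient step on a support set and updates the meta-parameters from the loss of the adapted model on a query set. A compact PyTorch sketch follows; the linear predictor and the single inner step are placeholders, not the STDA-Meta model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 1)                 # placeholder traffic predictor
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

def maml_episode(support, query):
    (xs, ys), (xq, yq) = support, query
    # Inner step: adapt with one gradient step, keeping the graph so the
    # outer update can differentiate through the adaptation.
    inner_loss = F.mse_loss(model(xs), ys)
    grads = torch.autograd.grad(inner_loss, list(model.parameters()),
                                create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
    # Outer step: evaluate the adapted weights on the query set.
    w, b = adapted
    outer_loss = F.mse_loss(F.linear(xq, w, b), yq)
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
    return outer_loss.item()

support = (torch.randn(16, 10), torch.randn(16, 1))
query = (torch.randn(16, 10), torch.randn(16, 1))
print(maml_episode(support, query))
```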
- EANet: Expert Attention Network for Online Trajectory Prediction [5.600280639034753]
Expert Attention Network is a complete online learning framework for trajectory prediction.
We introduce expert attention, which adjusts the weights of network layers at different depths, preventing the model from updating slowly due to the gradient problem.
Furthermore, we propose a short-term motion trend kernel function which is sensitive to scenario change, allowing the model to respond quickly.
arXiv Detail & Related papers (2023-09-11T07:09:40Z)
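The expert-attention idea, weighting the contributions of layers at different depths, can be sketched as a learned softmax gate over per-depth features. The dimensions and the gating form below are illustrative assumptions, not EANet's exact design.

```python
import torch
import torch.nn as nn

class ExpertAttention(nn.Module):
    def __init__(self, dim, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.gate = nn.Linear(dim, n_layers)   # one attention score per depth

    def forward(self, x):
        feats, h = [], x
        for layer in self.layers:
            h = torch.relu(layer(h))
            feats.append(h)                    # keep every depth's output
        stack = torch.stack(feats, dim=1)           # (batch, n_layers, dim)
        attn = torch.softmax(self.gate(x), dim=-1)  # (batch, n_layers)
        return (attn.unsqueeze(-1) * stack).sum(dim=1)

out = ExpertAttention(dim=32, n_layers=4)(torch.randn(8, 32))
print(out.shape)  # torch.Size([8, 32])
```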
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectories, human motion, driving scenes, traffic flow, and weather forecasting.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling [69.60713300418467]
Learning to jump is a general recipe for generative modeling of various types of data.
We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better.
arXiv Detail & Related papers (2023-05-28T05:38:28Z)
- How Well Do Sparse ImageNet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z)
- Probabilistic prediction of the heave motions of a semi-submersible by a deep learning problem model [4.903969235471705]
We extend a deep learning (DL) model to predict the heave and surge motions of a floating semi-submersible 20 to 50 seconds ahead with good accuracy.
This study extends the understanding of the DL model for predicting the wave-excited motions of an offshore platform.
arXiv Detail & Related papers (2021-10-09T06:26:42Z)
- Churn Reduction via Distillation [54.5952282395487]
We show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn.
We then show that distillation performs strongly for low-churn training compared to a number of recent baselines.
arXiv Detail & Related papers (2021-06-04T18:03:31Z)
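The equivalence described above boils down to adding a distillation term that keeps the new model close to the deployed base model, which implicitly constrains predictive churn. A minimal sketch of such a low-churn loss follows; the mixing weight alpha and the temperature are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def low_churn_loss(student_logits, teacher_logits, targets,
                   alpha=0.5, temperature=2.0):
    ce = F.cross_entropy(student_logits, targets)          # task loss
    kd = F.kl_div(                                         # agreement with base
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - alpha) * ce + alpha * kd

s = torch.randn(16, 5, requires_grad=True)   # new model's logits
t = torch.randn(16, 5)                       # base (teacher) model's logits
print(low_churn_loss(s, t, torch.randint(0, 5, (16,))))
```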
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
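The extrapolation schemes unified by this last paper are extragradient-like: compute a gradient at a lookahead point and apply the update from the original iterate. A minimal sketch under assumed step sizes, not the paper's exact scheme:

```python
import torch

def extragradient_step(params, loss_fn, lr=0.1, lookahead=0.1):
    # Lookahead: probe the loss surface ahead of the current iterate.
    base = [p.detach().clone() for p in params]
    grads = torch.autograd.grad(loss_fn(params), params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lookahead * g
    # Update: gradient at the extrapolated point, applied from the base point.
    grads = torch.autograd.grad(loss_fn(params), params)
    with torch.no_grad():
        for p, b, g in zip(params, base, grads):
            p.copy_(b - lr * g)

w = torch.randn(3, requires_grad=True)
extragradient_step([w], lambda ps: (ps[0] ** 2).sum())
print(w)
```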