Learning to Transfer for Traffic Forecasting via Multi-task Learning
- URL: http://arxiv.org/abs/2111.15542v1
- Date: Sat, 27 Nov 2021 03:16:40 GMT
- Title: Learning to Transfer for Traffic Forecasting via Multi-task Learning
- Authors: Yichao Lu
- Abstract summary: Deep neural networks have demonstrated superior performance in short-term traffic forecasting.
Traffic4cast is the first of its kind dedicated to benchmarking the robustness of traffic forecasting models towards domain shifts in space and time.
We present a multi-task learning framework for temporal and spatio-temporal domain adaptation of traffic forecasting models.
- Score: 3.1836399559127218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have demonstrated superior performance in short-term
traffic forecasting. However, most existing traffic forecasting systems assume
that the training and testing data are drawn from the same underlying
distribution, which limits their practical applicability. The NeurIPS 2021
Traffic4cast challenge is the first of its kind dedicated to benchmarking the
robustness of traffic forecasting models towards domain shifts in space and
time. This technical report describes our solution to this challenge. In
particular, we present a multi-task learning framework for temporal and
spatio-temporal domain adaptation of traffic forecasting models. Experimental
results demonstrate that our multi-task learning approach achieves strong
empirical performance, outperforming a number of baseline domain adaptation
methods, while remaining highly efficient. The source code for this technical
report is available at https://github.com/YichaoLu/Traffic4cast2021.
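As a minimal sketch of the multi-task idea described in the abstract, a shared encoder can be trained jointly on source- and target-domain forecasting losses. All names, shapes, and the weighting scheme below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: traffic readings for a data-rich source city and a
# data-scarce target city (shapes are arbitrary for illustration).
X_src, y_src = rng.normal(size=(64, 8)), rng.normal(size=(64,))
X_tgt, y_tgt = rng.normal(size=(16, 8)), rng.normal(size=(16,))

# A shared linear encoder plus one prediction head per domain:
# the multi-task setup lets the encoder see both domains.
W_shared = rng.normal(scale=0.1, size=(8, 4))
head_src = rng.normal(scale=0.1, size=4)
head_tgt = rng.normal(scale=0.1, size=4)

def mse(X, y, W, head):
    """Mean squared error of a linear encoder followed by a linear head."""
    return float(np.mean((X @ W @ head - y) ** 2))

def multitask_loss(alpha=0.5):
    """Weighted sum of the per-domain losses, sharing W_shared."""
    return alpha * mse(X_src, y_src, W_shared, head_src) + \
           (1 - alpha) * mse(X_tgt, y_tgt, W_shared, head_tgt)

loss = multitask_loss()
```

Minimizing such a combined objective pushes the shared encoder towards representations that work in both domains, which is the intuition behind using multi-task learning for domain adaptation.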
Related papers
- Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures [37.66676003679306]
Adapting to concept drift is a challenging task in machine learning.
In communication networks, such an issue emerges when performing traffic forecasting following a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
arXiv Detail & Related papers (2024-04-08T08:47:46Z) - Optimal transfer protocol by incremental layer defrosting [66.76153955485584]
Transfer learning is a powerful tool enabling model training with limited amounts of data.
The simplest transfer learning protocol is based on "freezing" the feature-extractor layers of a network pre-trained on a data-rich source task.
We show that this protocol is often sub-optimal and the largest performance gain may be achieved when smaller portions of the pre-trained network are kept frozen.
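The freeze/defrost trade-off above can be sketched as follows; the layer names and parameter counts are made up for illustration and do not come from the paper:

```python
# A pre-trained network as an ordered list of (layer name, parameter count).
layers = [("conv1", 1_728), ("conv2", 36_864), ("conv3", 73_728), ("head", 5_130)]

def trainable_params(n_defrosted):
    """Freeze all pre-trained layers except the last n_defrosted ones,
    and return how many parameters remain trainable on the target task."""
    defrosted = layers[len(layers) - n_defrosted:]
    return sum(count for _, count in defrosted)

# Classic protocol: only the head is retrained on the target task.
head_only = trainable_params(1)
# Incremental defrosting: unfreezing more layers enlarges the trainable set,
# which the paper finds can yield larger performance gains.
partial = trainable_params(2)
```

The choice of how many layers to defrost trades adaptation capacity against the risk of overfitting the limited target data.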
arXiv Detail & Related papers (2023-03-02T17:32:11Z) - PDFormer: Propagation Delay-Aware Dynamic Long-Range Transformer for Traffic Flow Prediction [78.05103666987655]
Spatial-temporal Graph Neural Network (GNN) models have emerged as one of the most promising methods for traffic flow prediction.
We propose a novel propagation delay-aware dynamic long-range transFormer, namely PDFormer, for accurate traffic flow prediction.
Our method can not only achieve state-of-the-art performance but also exhibit competitive computational efficiency.
arXiv Detail & Related papers (2023-01-19T08:42:40Z) - Transfer Learning Based Efficient Traffic Prediction with Limited Training Data [3.689539481706835]
Efficient prediction of internet traffic is an essential part of Self Organizing Network (SON) for ensuring proactive management.
Deep sequence models for network traffic prediction with limited training data have not been studied extensively in prior work.
We investigated and evaluated the performance of the deep transfer learning technique in traffic prediction with inadequate historical data.
arXiv Detail & Related papers (2022-05-09T14:44:39Z) - Adaptive Trajectory Prediction via Transferable GNN [74.09424229172781]
We propose a novel Transferable Graph Neural Network (T-GNN) framework, which jointly conducts trajectory prediction as well as domain alignment in a unified framework.
Specifically, a domain-invariant GNN is proposed to explore structural motion knowledge while reducing domain-specific knowledge.
An attention-based adaptive knowledge learning module is further proposed to explore fine-grained individual-level feature representation for knowledge transfer.
arXiv Detail & Related papers (2022-03-09T21:08:47Z) - Domain Adversarial Spatial-Temporal Network: A Transferable Framework for Short-term Traffic Forecasting across Cities [9.891703123090528]
We propose a novel transferable traffic forecasting framework: the Domain Adversarial Spatial-Temporal Network (DASTNet).
DASTNet is pre-trained on multiple source networks and fine-tuned with the target network's traffic data.
It consistently outperforms all state-of-the-art baseline methods on three benchmark datasets.
arXiv Detail & Related papers (2022-02-08T03:58:39Z) - Omni-Training for Data-Efficient Deep Learning [80.28715182095975]
Recent advances reveal that a properly pre-trained model endows an important property: transferability.
Neither pre-training nor meta-training alone achieves both kinds of transferability.
This motivates the proposed Omni-Training framework, which tightly combines the two towards data-efficient deep learning.
arXiv Detail & Related papers (2021-10-14T16:30:36Z) - Regularizing Generative Adversarial Networks under Limited Data [88.57330330305535]
This work proposes a regularization approach for training robust GAN models on limited data.
We show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data.
arXiv Detail & Related papers (2021-04-07T17:59:06Z) - SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z) - Traffic congestion anomaly detection and prediction using deep learning [6.370406399003785]
Congestion prediction is a major priority for traffic management centres around the world to ensure timely incident response handling.
The increasing amounts of generated traffic data have been used to train machine learning predictors for traffic, but this is a challenging task due to inter-dependencies of traffic flow both in time and space.
We show that our deep learning models consistently outperform traditional methods, and we conduct a comparative analysis of the optimal time horizon of historical data required to predict traffic flow at different time points in the future.
arXiv Detail & Related papers (2020-06-23T08:49:46Z) - Deep Echo State Networks for Short-Term Traffic Forecasting: Performance Comparison and Statistical Assessment [8.586891288891263]
In short-term traffic forecasting, the goal is to accurately predict future values of a traffic parameter of interest.
Deep Echo State Networks achieve more accurate traffic forecasts than the other modeling approaches considered.
arXiv Detail & Related papers (2020-04-17T11:07:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.