Structural Knowledge Informed Continual Multivariate Time Series
Forecasting
- URL: http://arxiv.org/abs/2402.12722v1
- Date: Tue, 20 Feb 2024 05:11:20 GMT
- Title: Structural Knowledge Informed Continual Multivariate Time Series
Forecasting
- Authors: Zijie Pan, Yushan Jiang, Dongjin Song, Sahil Garg, Kashif Rasul,
Anderson Schneider, Yuriy Nevmyvaka
- Abstract summary: We propose a novel Structural Knowledge Informed Continual Learning (SKI-CL) framework to perform MTS forecasting within a continual learning paradigm.
Specifically, we develop a forecasting model based on graph structure learning, where a consistency regularization scheme is imposed between the learned variable dependencies and the structural knowledge.
We develop a representation-matching memory replay scheme that maximizes the temporal coverage of MTS data to efficiently preserve the underlying temporal dynamics and dependency structures of each regime.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies in multivariate time series (MTS) forecasting reveal that
explicitly modeling the hidden dependencies among different time series can
yield promising forecasting performance and reliable explanations. However,
modeling variable dependencies remains underexplored when MTS is continuously
accumulated under different regimes (stages). Due to the potential distribution
and dependency disparities, the underlying model may encounter the catastrophic
forgetting problem, i.e., it is challenging to memorize and infer different
types of variable dependencies across different regimes while maintaining
forecasting performance. To address this issue, we propose a novel Structural
Knowledge Informed Continual Learning (SKI-CL) framework to perform MTS
forecasting within a continual learning paradigm, which leverages structural
knowledge to steer the forecasting model toward identifying and adapting to
different regimes, and selects representative MTS samples from each regime for
memory replay. Specifically, we develop a forecasting model based on graph
structure learning, where a consistency regularization scheme is imposed
between the learned variable dependencies and the structural knowledge while
optimizing the forecasting objective over the MTS data. As such, MTS
representations learned in each regime are associated with distinct structural
knowledge, which helps the model memorize a variety of conceivable scenarios
and results in accurate forecasts in the continual learning context. Meanwhile,
we develop a representation-matching memory replay scheme that maximizes the
temporal coverage of MTS data to efficiently preserve the underlying temporal
dynamics and dependency structures of each regime. Thorough empirical studies
on synthetic and real-world benchmarks validate SKI-CL's efficacy and
advantages over the state-of-the-art for continual MTS forecasting tasks.
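The consistency regularization described above can be sketched as a combined objective: a forecasting loss plus a penalty on the disagreement between the learned dependency graph and the structural-knowledge prior. This is a minimal illustration, not the paper's exact formulation; the mean-squared penalty, the MSE forecasting loss, and the weight `lam` are all assumptions.

```python
import numpy as np

def consistency_regularizer(learned_adj, prior_adj):
    # Penalize disagreement between the learned variable-dependency
    # graph and the structural-knowledge prior (assumed: mean squared
    # difference between adjacency matrices).
    return float(np.mean((learned_adj - prior_adj) ** 2))

def total_loss(forecast, target, learned_adj, prior_adj, lam=0.1):
    # Forecasting objective (assumed: MSE) plus the weighted
    # consistency term; `lam` trades off the two.
    mse = float(np.mean((forecast - target) ** 2))
    return mse + lam * consistency_regularizer(learned_adj, prior_adj)
```

When the learned graph matches the prior exactly, the regularizer vanishes and the loss reduces to the plain forecasting objective.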
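The representation-matching replay scheme selects a small set of samples per regime that covers the regime's temporal dynamics. One plausible reading of "maximizing temporal coverage" is greedy farthest-point selection in representation space, sketched below; the actual selection criterion in SKI-CL may differ, and the function name and greedy strategy here are assumptions.

```python
import numpy as np

def select_replay_samples(reps, budget):
    # Greedy farthest-point selection: repeatedly add the sample whose
    # representation is farthest from the already-chosen memory set,
    # spreading the replay buffer over the regime's representation space.
    chosen = [0]  # arbitrary seed: start from the first sample
    while len(chosen) < min(budget, len(reps)):
        # Distance from every sample to its nearest chosen sample.
        dists = np.min(
            np.linalg.norm(reps[:, None, :] - reps[chosen][None, :, :], axis=-1),
            axis=1,
        )
        chosen.append(int(np.argmax(dists)))
    return chosen
```

For example, with representations clustered at two extremes, the selector picks one sample from each cluster before refining within clusters, which is the coverage behavior the abstract describes.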
Related papers
- Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning [11.19088022423885]
We propose a novel MoST learning framework via Self-Supervised Learning, namely MoSSL.
Results on two real-world MoST datasets verify the superiority of our approach compared with the state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-06T08:24:06Z)
- Causal Temporal Regime Structure Learning [49.77103348208835]
We introduce a new optimization-based (linear) method that concurrently learns the Directed Acyclic Graph (DAG) for each regime.
We conduct extensive experiments and show that our method consistently outperforms causal discovery models across various settings.
arXiv Detail & Related papers (2023-11-02T17:26:49Z)
- Exploring Progress in Multivariate Time Series Forecasting: Comprehensive Benchmarking and Heterogeneity Analysis [72.18987459587682]
We introduce BasicTS, a benchmark designed for fair comparisons in MTS forecasting.
We highlight the heterogeneity among MTS datasets and classify them based on temporal and spatial characteristics.
arXiv Detail & Related papers (2023-10-09T19:52:22Z)
- TimeTuner: Diagnosing Time Representations for Time-Series Forecasting with Counterfactual Explanations [3.8357850372472915]
This paper contributes a novel visual analytics framework, namely TimeTuner, to help analysts understand how model behaviors are associated with the locality, stationarity, and correlations of time-series representations.
We show that TimeTuner can help characterize time-series representations and guide the feature engineering processes.
arXiv Detail & Related papers (2023-07-19T11:40:15Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting [52.47493322446537]
We develop an adaptive, interpretable and scalable forecasting framework that seeks to individually model each component of the spatial-temporal patterns.
SCNN works with a pre-defined generative process of MTS, which arithmetically characterizes the latent structure of the spatial-temporal patterns.
Extensive experiments are conducted to demonstrate that SCNN can achieve superior performance over state-of-the-art models on three real-world datasets.
arXiv Detail & Related papers (2023-05-22T13:39:44Z)
- The Capacity and Robustness Trade-off: Revisiting the Channel Independent Strategy for Multivariate Time Series Forecasting [50.48888534815361]
We show that models trained with the Channel Independent (CI) strategy outperform those trained with the Channel Dependent (CD) strategy.
Our results indicate that the CD approach has higher capacity but often lacks the robustness to accurately predict distributionally drifted time series.
We propose a modified CD method called Predict Residuals with Regularization (PRReg) that can surpass the CI strategy.
arXiv Detail & Related papers (2023-04-11T13:15:33Z)
- Episodic Memory for Learning Subjective-Timescale Models [1.933681537640272]
In model-based learning, an agent's model is commonly defined over transitions between consecutive states of an environment.
In contrast, intelligent behaviour in biological organisms is characterised by the ability to plan over varying temporal scales depending on the context.
We devise a novel approach to learning a transition dynamics model, based on the sequences of episodic memories that define the agent's subjective timescale.
arXiv Detail & Related papers (2020-10-03T21:55:40Z)
- MTHetGNN: A Heterogeneous Graph Embedding Framework for Multivariate Time Series Forecasting [4.8274015390665195]
We propose a novel end-to-end deep learning model, termed Multivariate Time Series Forecasting via Heterogeneous Graph Neural Networks (MTHetGNN)
To characterize complex relations among variables, a relation embedding module is designed in MTHetGNN, where each variable is regarded as a graph node.
A temporal embedding module is introduced for time series feature extraction, which involves convolutional neural network (CNN) filters with different perception scales.
arXiv Detail & Related papers (2020-08-19T18:21:22Z)
- Meta-learning framework with applications to zero-shot time-series forecasting [82.61728230984099]
This work provides positive evidence using a broad meta-learning framework, in which residual connections act as a meta-learning adaptation mechanism.
We show that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining.
arXiv Detail & Related papers (2020-02-07T16:39:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.