Disentangling Long-Short Term State Under Unknown Interventions for Online Time Series Forecasting
- URL: http://arxiv.org/abs/2502.12603v1
- Date: Tue, 18 Feb 2025 07:31:04 GMT
- Title: Disentangling Long-Short Term State Under Unknown Interventions for Online Time Series Forecasting
- Authors: Ruichu Cai, Haiqin Huang, Zhifang Jiang, Zijian Li, Changze Zhou, Yuequn Liu, Yuming Liu, Zhifeng Hao
- Abstract summary: Current methods for time series forecasting struggle in the online scenario.
It is difficult to preserve long-term dependencies while adapting to short-term changes when data arrive sequentially.
We propose a general framework to disentangle long/short-term states for online time series forecasting.
- Score: 21.059809961310123
- Abstract: Current methods for time series forecasting struggle in the online scenario, since it is difficult to preserve long-term dependencies while adapting to short-term changes when data arrive sequentially. Although some recent methods address this problem by controlling the updates of latent states, they cannot disentangle the long/short-term states, leading to an inability to adapt effectively to non-stationarity. To tackle this challenge, we propose a general framework to disentangle long/short-term states for online time series forecasting. Our idea is inspired by the observation that short-term changes can be caused by unknown interventions, such as abrupt policy changes in the stock market. Based on this insight, we formalize a data generation process with unknown interventions on short-term states. Under mild assumptions, we further leverage the independence of short-term states induced by unknown interventions to establish an identification theory that achieves the disentanglement of long/short-term states. Built on this theory, we develop a long short-term disentanglement model (LSTD) that extracts the long/short-term states with long/short-term encoders, respectively. Furthermore, the LSTD model incorporates a smooth constraint to preserve the long-term dependencies and an interrupted dependency constraint to enforce the forgetting of short-term dependencies, together boosting the disentanglement of long/short-term states. Experimental results on several benchmark datasets show that our LSTD model outperforms existing methods for online time series forecasting, validating its efficacy in real-world applications.
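The abstract describes the architecture only at a high level; the sketch below illustrates one plausible realization of its three ingredients: separate long/short-term encoders, a smooth constraint on the long-term state, and an interrupted dependency constraint on the short-term state. All module names, shapes, and loss forms here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LSTDSketch(nn.Module):
    """Illustrative long/short-term disentanglement model.

    Two encoders produce a long-term state z_long and a short-term state
    z_short; a decoder forecasts the next step from their concatenation.
    All names and sizes are assumptions, not the authors' code.
    """

    def __init__(self, n_vars: int, hidden: int = 64):
        super().__init__()
        self.long_enc = nn.GRU(n_vars, hidden, batch_first=True)
        self.short_enc = nn.GRU(n_vars, hidden, batch_first=True)
        self.decoder = nn.Linear(2 * hidden, n_vars)

    def forward(self, x):                    # x: (batch, time, n_vars)
        z_long, _ = self.long_enc(x)         # slowly varying state
        z_short, _ = self.short_enc(x)       # intervention-driven state
        last = torch.cat([z_long[:, -1], z_short[:, -1]], dim=-1)
        return self.decoder(last), z_long, z_short


def disentangle_losses(z_long, z_short):
    """Hedged stand-ins for the paper's two constraints."""
    # Smooth constraint: penalize change in the long-term state over time.
    smooth = (z_long[:, 1:] - z_long[:, :-1]).pow(2).mean()
    # Interrupted dependency constraint: discourage the short-term state
    # from carrying dependencies across adjacent steps (a simple proxy).
    interrupt = (z_short[:, 1:] * z_short[:, :-1]).mean().abs()
    return smooth, interrupt
```

In the online setting, such a split would let the short-term encoder adapt to each newly arriving window while the smoothness penalty protects the long-term state.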
Related papers
- Breaking the Context Bottleneck on Long Time Series Forecasting [6.36010639533526]
Long-term time-series forecasting is essential for planning and decision-making in economics, energy, and transportation.
Recent advancements have enhanced the efficiency of these models, but the challenge of effectively leveraging longer sequences persists.
We propose the Logsparse Decomposable Multiscaling (LDM) framework for the efficient and effective processing of long sequences.
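The abstract gives no equations; one plausible reading of "logsparse" multiscale processing, assumed here purely for illustration, is subsampling the history at exponentially growing strides so that recent points stay dense while distant points become sparse:

```python
import numpy as np

def logsparse_indices(length: int) -> np.ndarray:
    """Indices counted back from the latest step at power-of-two offsets.

    An assumed reading of 'logsparse' sampling, not the LDM paper's exact
    scheme: recent history stays dense, distant history becomes sparse.
    """
    offsets, step = {0}, 1
    while step < length:
        offsets.add(step)
        step *= 2
    return np.array(sorted(length - 1 - o for o in offsets if o < length))

history = np.arange(1024.0)
keep = logsparse_indices(len(history))
print(history[keep])   # dense near t=1023, sparse toward t=0
```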
arXiv Detail & Related papers (2024-12-21T10:29:34Z)
- KEDformer: Knowledge Extraction Seasonal Trend Decomposition for Long-term Sequence Prediction [1.224954637705144]
Time series forecasting is a critical task in domains such as energy, finance, and meteorology.
We propose KEDformer, a knowledge extraction-driven framework that integrates seasonal-trend decomposition.
This decomposition enhances the model's ability to capture both short-term fluctuations and long-term patterns.
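Seasonal-trend decomposition of the kind mentioned here is commonly implemented with a moving-average trend estimate and a seasonal residual (as in Autoformer-style models); the sketch below shows that standard building block, not KEDformer's specific design:

```python
import torch

def seasonal_trend_decompose(x: torch.Tensor, kernel: int = 25):
    """Moving-average decomposition, a standard building block.

    x: (batch, time, n_vars); returns (seasonal, trend) with
    x = seasonal + trend. The kernel size is an arbitrary choice here,
    not taken from KEDformer.
    """
    pad = (kernel - 1) // 2
    # Replicate-pad both ends so the average is defined at the boundaries.
    front = x[:, :1].repeat(1, pad, 1)
    back = x[:, -1:].repeat(1, kernel - 1 - pad, 1)
    padded = torch.cat([front, x, back], dim=1)
    trend = padded.unfold(1, kernel, 1).mean(dim=-1)  # (batch, time, n_vars)
    return x - trend, trend
```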
arXiv Detail & Related papers (2024-12-06T21:07:11Z)
- TimeBridge: Non-Stationarity Matters for Long-term Time Series Forecasting [49.6208017412376]
TimeBridge is a novel framework designed to bridge the gap between non-stationarity and dependency modeling.
TimeBridge consistently achieves state-of-the-art performance in both short-term and long-term forecasting.
arXiv Detail & Related papers (2024-10-06T10:41:03Z)
- Multiscale Representation Enhanced Temporal Flow Fusion Model for Long-Term Workload Forecasting [19.426131129034115]
This paper proposes a novel framework leveraging self-supervised multiscale representation learning to capture both long-term and near-term workload patterns.
The long-term history is encoded through multiscale representations while the near-term observations are modeled via temporal flow fusion.
arXiv Detail & Related papers (2024-07-29T04:42:18Z)
- Non-autoregressive Sequence-to-Sequence Vision-Language Models [63.77614880533488]
We propose a parallel decoding sequence-to-sequence vision-language model that marginalizes over multiple inference paths in the decoder.
The model achieves performance on par with its state-of-the-art autoregressive counterpart, but is faster at inference time.
arXiv Detail & Related papers (2024-03-04T17:34:59Z)
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the Perspective of Partial Differential Equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z)
- Financial Time Series Forecasting using CNN and Transformer [16.57996422431636]
We propose to harness the power of CNNs and Transformers to model both short-term and long-term dependencies within a time series.
Our experiments demonstrate the success of the proposed method in comparison with commonly adopted statistical and deep learning baselines.
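The summary names the combination but not the wiring; one common arrangement, sketched here purely as an assumption, is a 1-D convolutional front end for local short-term patterns feeding a Transformer encoder for long-term dependencies:

```python
import torch
import torch.nn as nn

class ConvTransformerSketch(nn.Module):
    """Assumed CNN-then-Transformer wiring; not the paper's exact model."""

    def __init__(self, n_vars: int, d_model: int = 64, horizon: int = 24):
        super().__init__()
        # 1-D convolution captures short-term local patterns.
        self.conv = nn.Conv1d(n_vars, d_model, kernel_size=3, padding=1)
        # Self-attention captures long-term dependencies across the window.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_vars * horizon)
        self.horizon, self.n_vars = horizon, n_vars

    def forward(self, x):                    # x: (batch, time, n_vars)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, d_model)
        h = self.encoder(h)
        out = self.head(h[:, -1])            # forecast from the final token
        return out.view(-1, self.horizon, self.n_vars)
```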
arXiv Detail & Related papers (2023-04-11T00:56:57Z)
- Grouped self-attention mechanism for a memory-efficient Transformer [64.0125322353281]
Real-world tasks such as forecasting weather, electricity consumption, and stock market involve predicting data that vary over time.
Time-series data are generally recorded over a long period of observation with long sequences owing to their periodic characteristics and long-range dependencies over time.
We propose two novel modules, Grouped Self-Attention (GSA) and Compressed Cross-Attention (CCA)
The proposed model achieves reduced computational complexity with performance comparable to or better than existing methods.
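The GSA and CCA modules are not specified in this summary; the memory saving in grouped attention typically comes from restricting attention to local groups of the sequence, which the following sketch illustrates (the group size and shapes are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class GroupedSelfAttentionSketch(nn.Module):
    """Attention restricted to non-overlapping groups: O(T*g), not O(T^2).

    An illustrative reading of 'grouped self-attention', not the paper's
    exact GSA module.
    """

    def __init__(self, d_model: int = 64, n_heads: int = 4, group: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.group = group

    def forward(self, x):                    # x: (batch, time, d_model)
        b, t, d = x.shape
        g = self.group
        assert t % g == 0, "sketch assumes time divisible by group size"
        # Fold each group into the batch dim so attention stays within groups.
        xg = x.reshape(b * (t // g), g, d)
        out, _ = self.attn(xg, xg, xg)
        return out.reshape(b, t, d)
```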
arXiv Detail & Related papers (2022-10-02T06:58:49Z)
- A Novel Long-term Iterative Mining Scheme for Video Salient Object Detection [54.53335983750033]
Short-term methodologies conflict with the actual mechanism of the human visual system.
This paper proposes a novel VSOD approach, which performs VSOD in a complete long-term way.
The proposed approach outperforms almost all SOTA models on five widely used benchmark datasets.
arXiv Detail & Related papers (2022-06-20T04:27:47Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
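In a Hawkes-style model the learned object is a conditional intensity; THP parameterizes it from the self-attention hidden state at the most recent event. The sketch below shows a simplified single-event-type intensity head with a softplus link; the exact form and all names are assumptions in the spirit of THP, not a verified reproduction of its implementation:

```python
import torch
import torch.nn as nn

class THPIntensitySketch(nn.Module):
    """Simplified intensity head in the spirit of THP (single event type).

    lambda(t) = softplus(alpha * (t - t_j) / t_j + w^T h_j + b),
    where h_j is the attention hidden state at the last event time t_j.
    This is a hedged sketch, not the paper's exact parameterization.
    """

    def __init__(self, d_model: int = 64):
        super().__init__()
        self.w = nn.Linear(d_model, 1)
        self.alpha = nn.Parameter(torch.tensor(-0.1))

    def forward(self, h_j, t_j, t):
        # h_j: (batch, d_model); t_j, t: (batch,) with t >= t_j > 0.
        drift = self.alpha * (t - t_j) / t_j
        return torch.nn.functional.softplus(drift + self.w(h_j).squeeze(-1))
```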
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.