DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting
- URL: http://arxiv.org/abs/2203.07681v1
- Date: Tue, 15 Mar 2022 06:51:58 GMT
- Title: DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting
- Authors: Wei Fan, Shun Zheng, Xiaohan Yi, Wei Cao, Yanjie Fu, Jiang Bian,
Tie-Yan Liu
- Abstract summary: We introduce a deep expansion learning framework, DEPTS, for PTS forecasting.
DEPTS starts with a decoupled formulation by introducing the periodic state as a hidden variable.
Our two customized modules also have certain interpretable capabilities, such as attributing the forecasts to either local momenta or global periodicity.
- Score: 83.60876685008225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Periodic time series (PTS) forecasting plays a crucial role in a variety of
industries to foster critical tasks, such as early warning, pre-planning,
resource scheduling, etc. However, the complicated dependencies of the PTS
signal on its inherent periodicity as well as the sophisticated composition of
various periods hinder the performance of PTS forecasting. In this paper, we
introduce a deep expansion learning framework, DEPTS, for PTS forecasting.
DEPTS starts with a decoupled formulation by introducing the periodic state as
a hidden variable, which motivates us to develop two dedicated modules to tackle
the aforementioned two challenges. First, we develop an expansion module on top
of residual learning to perform a layer-by-layer expansion of those complicated
dependencies. Second, we introduce a periodicity module with a parameterized
periodic function that holds sufficient capacity to capture diversified
periods. Moreover, our two customized modules also have certain interpretable
capabilities, such as attributing the forecasts to either local momenta or
global periodicity and characterizing certain core periodic properties, e.g.,
amplitudes and frequencies. Extensive experiments on both synthetic data and
real-world data demonstrate the effectiveness of DEPTS on handling PTS. In most
cases, DEPTS achieves significant improvements over the best baseline.
Specifically, the error reduction can even reach up to 20% for a few cases.
Finally, all codes are publicly available.
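As a rough illustration of the two modules described above, the sketch below pairs a parameterized periodic function (a sum of cosines with explicit amplitudes and frequencies, which are learnable in the paper) with a layer-by-layer residual expansion in which each block explains part of the signal and passes the residual onward. Block internals and training are omitted, so this is a sketch of the idea, not the paper's actual architecture.

```python
import numpy as np

def periodicity_module(t, amplitudes, freqs, phases):
    """Parameterized periodic function: a sum of cosines whose
    amplitudes A_k and frequencies f_k capture diversified periods."""
    t = np.asarray(t, dtype=float)[:, None]                      # (T, 1)
    waves = amplitudes * np.cos(2 * np.pi * freqs * t + phases)  # (T, K)
    return waves.sum(axis=1)                                     # (T,)

def expansion_forecast(x, blocks):
    """Layer-by-layer residual expansion: each block fits part of its
    input (the 'backcast'), contributes a partial forecast, and hands
    the residual to the next block; partial forecasts are summed."""
    residual = np.asarray(x, dtype=float).copy()
    forecast = 0.0
    for block in blocks:
        backcast, partial = block(residual)
        residual = residual - backcast
        forecast = forecast + partial
    return forecast
```

Because forecasts accumulate block by block, each block's partial forecast can be inspected on its own, which is the kind of attribution (local momenta vs. global periodicity) the abstract refers to.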
Related papers
- FAITH: Frequency-domain Attention In Two Horizons for Time Series Forecasting [13.253624747448935]
Time Series Forecasting plays a crucial role in various fields such as industrial equipment maintenance, meteorology, energy consumption, traffic flow and financial investment.
Current deep learning-based predictive models often exhibit a significant deviation between their forecasting outcomes and the ground truth.
We propose a novel model, Frequency-domain Attention In Two Horizons (FAITH), which decomposes time series into trend and seasonal components.
arXiv Detail & Related papers (2024-05-22T02:37:02Z)
- Structural Knowledge Informed Continual Multivariate Time Series Forecasting [23.18105409644709]
We propose a novel Structural Knowledge Informed Continual Learning (SKI-CL) framework to perform MTS forecasting within a continual learning paradigm.
Specifically, we develop a forecasting model based on graph structure learning, where a consistency regularization scheme is imposed between the learned variable dependencies and the structural knowledge.
We develop a representation-matching memory replay scheme that maximizes the temporal coverage of MTS data to efficiently preserve the underlying temporal dynamics and dependency structures of each regime.
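The consistency regularization between learned variable dependencies and structural knowledge can be sketched as a simple penalty term; the Frobenius-norm form below is an illustrative assumption, not necessarily the loss SKI-CL uses.

```python
import numpy as np

def consistency_regularizer(learned_adj, prior_adj, weight=1.0):
    """Penalize disagreement between the dependency matrix learned by
    graph structure learning and the structural-knowledge prior.
    Squared Frobenius norm here; the paper's exact form may differ."""
    diff = np.asarray(learned_adj) - np.asarray(prior_adj)
    return weight * np.sum(diff ** 2)
```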
arXiv Detail & Related papers (2024-02-20T05:11:20Z)
- Cumulative Distribution Function based General Temporal Point Processes [49.758080415846884]
The CuFun model represents a novel approach to TPPs that revolves around the Cumulative Distribution Function (CDF).
Our approach addresses several critical issues inherent in traditional TPP modeling.
Our contributions encompass the introduction of a pioneering CDF-based TPP model and the development of a methodology for incorporating past event information into future event prediction.
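The core of a CDF-based view of TPPs, recovering the density and hazard (intensity) of the next event from a modeled F(t), can be illustrated numerically; `intensity_from_cdf` is a hypothetical helper for exposition, not part of the CuFun code.

```python
import numpy as np

def intensity_from_cdf(cdf_vals, t_grid):
    """Given F(t), the probability that the next event arrives by t,
    recover the density f(t) = dF/dt and the hazard (conditional
    intensity) lambda(t) = f(t) / (1 - F(t)) on a grid."""
    pdf = np.gradient(cdf_vals, t_grid)
    hazard = pdf / np.clip(1.0 - cdf_vals, 1e-12, None)
    return pdf, hazard
```

For an exponential inter-event distribution the hazard is constant, which gives a quick sanity check.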
arXiv Detail & Related papers (2024-02-01T07:21:30Z)
- FCDNet: Frequency-Guided Complementary Dependency Modeling for Multivariate Time-Series Forecasting [9.083469629116784]
We propose FCDNet, a concise yet effective framework for time-series forecasting.
It helps extract long- and short-term dependency information adaptively from multi-level frequency patterns.
Experiments show that FCDNet significantly exceeds strong baselines.
arXiv Detail & Related papers (2023-12-27T07:29:52Z)
- Frequency-domain MLPs are More Effective Learners in Time Series Forecasting [67.60443290781988]
Time series forecasting plays a key role in different industrial domains, including finance, traffic, energy, and healthcare.
Most MLP-based forecasting methods suffer from point-wise mappings and an information bottleneck.
We propose FreTS, a simple yet effective architecture built upon Frequency-domains for Time Series forecasting.
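A frequency-domain learner of this kind can be sketched as: transform the series to the frequency domain, apply a learnable complex-valued map, and transform back. The layer below illustrates that idea only; it is not FreTS's exact architecture, and `w`, `b` are stand-in complex parameters.

```python
import numpy as np

def frequency_domain_layer(x, w, b):
    """Pointwise complex-valued layer applied in the frequency domain.
    For a real input of length n, rfft yields n//2 + 1 complex bins."""
    spec = np.fft.rfft(x)                # time -> frequency
    spec = w * spec + b                  # learnable complex map (pointwise)
    return np.fft.irfft(spec, n=len(x))  # frequency -> time
```

With unit weights and zero bias the layer reduces to the identity, which makes correctness easy to check.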
arXiv Detail & Related papers (2023-11-10T17:05:13Z)
- MTS-Mixers: Multivariate Time Series Forecasting via Factorized Temporal and Channel Mixing [18.058617044421293]
This paper investigates the contributions and deficiencies of attention mechanisms on the performance of time series forecasting.
We propose MTS-Mixers, which use two factorized modules to capture temporal and channel dependencies.
Experimental results on several real-world datasets show that MTS-Mixers outperform existing Transformer-based models with higher efficiency.
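Factorized temporal and channel mixing can be sketched as two separate linear maps, one across time steps and one across channels, in the MLP-Mixer spirit; the plain weight matrices below stand in for the paper's learned modules.

```python
import numpy as np

def factorized_mixing(x, temporal_w, channel_w):
    """x has shape (time, channels). One matrix mixes information
    across time steps, the other across channels; factorizing the two
    avoids a full joint (time x channel) attention map."""
    x = temporal_w @ x   # temporal mixing: (T, T) @ (T, C)
    x = x @ channel_w    # channel mixing:  (T, C) @ (C, C)
    return x
```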
arXiv Detail & Related papers (2023-02-09T08:52:49Z)
- Grouped self-attention mechanism for a memory-efficient Transformer [64.0125322353281]
Real-world tasks such as forecasting weather, electricity consumption, and the stock market involve predicting data that vary over time.
Time-series data are generally recorded over a long period of observation with long sequences owing to their periodic characteristics and long-range dependencies over time.
We propose two novel modules, Grouped Self-Attention (GSA) and Compressed Cross-Attention (CCA)
Our proposed model exhibits reduced computational complexity and performance comparable to or better than existing methods.
arXiv Detail & Related papers (2022-10-02T06:58:49Z)
- Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion [6.154427471704388]
We propose a unified framework, namely Bilinear Temporal-Spectral Fusion (BTSF).
Specifically, we utilize the instance-level augmentation with a simple dropout on the entire time series for maximally capturing long-term dependencies.
We devise a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.
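In its simplest bilinear form, the affinity of time-frequency pairs can be encoded as the outer product of a temporal feature and a spectral feature; BTSF's iterative fusion is more elaborate than this one-step sketch.

```python
import numpy as np

def bilinear_fusion(temporal_feat, spectral_feat):
    """Outer product: entry (i, j) is the affinity of temporal feature
    i with spectral feature j, i.e. one time-frequency pair."""
    return np.outer(temporal_feat, spectral_feat)
```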
arXiv Detail & Related papers (2022-02-08T14:04:08Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
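For context, the classical Hawkes-process intensity with an exponential excitation kernel is sketched below; THP's contribution is to replace this hand-crafted kernel with self-attention over the event history, so this shows the process being modeled rather than THP itself. The parameter defaults are arbitrary.

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Classical Hawkes intensity:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each past event excites the process, with influence decaying
    exponentially over time."""
    past = np.asarray([ti for ti in event_times if ti < t])
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))
```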
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.