Deep Frequency Derivative Learning for Non-stationary Time Series Forecasting
- URL: http://arxiv.org/abs/2407.00502v1
- Date: Sat, 29 Jun 2024 17:56:59 GMT
- Title: Deep Frequency Derivative Learning for Non-stationary Time Series Forecasting
- Authors: Wei Fan, Kun Yi, Hangting Ye, Zhiyuan Ning, Qi Zhang, Ning An,
- Abstract summary: We present a deep frequency derivative learning framework, DERITS, for non-stationary time series forecasting.
Specifically, DERITS is built upon a novel reversible transformation, the Frequency Derivative Transformation (FDT).
- Score: 12.989064148254936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Because most time series are non-stationary, forecasting models inevitably face distribution shift. Existing solutions manipulate statistical measures (usually the mean and standard deviation) to adjust the time series distribution. However, these operations can be viewed theoretically as transformations of only the zero-frequency component of the spectrum; they cannot reveal the full distribution information and create an information-utilization bottleneck in normalization, hindering forecasting performance. To address this problem, we propose to exploit the whole frequency spectrum when transforming time series, making full use of the data distribution from the frequency perspective. We present DERITS, a deep frequency derivative learning framework for non-stationary time series forecasting. Specifically, DERITS is built upon a novel reversible transformation, the Frequency Derivative Transformation (FDT), which differentiates signals in the frequency domain to obtain more stationary frequency representations. We then propose the Order-adaptive Fourier Convolution Network for adaptive frequency filtering and learning, and organize DERITS as a parallel-stacked architecture that derives and fuses multiple orders of derivatives for forecasting. Finally, extensive experiments on several datasets show consistent superiority in both forecasting accuracy and shift alleviation.
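The abstract does not spell out FDT's exact form, but one natural reading is differentiation applied in the frequency domain, made reversible by storing the DC term that differentiation destroys. The NumPy sketch below illustrates such a round-trip under that assumption; the function names and the DC-storage convention are illustrative, not the paper's implementation.

```python
import numpy as np

def fdt(x):
    """One plausible reading of Frequency Derivative Transformation (FDT):
    differentiate the signal by multiplying its spectrum by i*omega.
    The DC component is stored separately so the map stays reversible.
    Sketch based only on the abstract, not the paper's exact FDT."""
    n = len(x)
    X = np.fft.rfft(x)
    omega = 2j * np.pi * np.fft.rfftfreq(n)    # i*omega per rFFT bin
    dc = X[0]                                  # mean info lost by differentiation
    return X * omega, dc, n

def inverse_fdt(D, dc, n):
    """Invert by dividing by i*omega and restoring the stored DC term."""
    omega = 2j * np.pi * np.fft.rfftfreq(n)
    X = np.zeros_like(D)
    X[1:] = D[1:] / omega[1:]                  # omega[0] == 0, handled via dc
    X[0] = dc
    return np.fft.irfft(X, n=n)

t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * t        # non-stationary: sinusoid + trend
D, dc, n = fdt(x)
assert np.allclose(inverse_fdt(D, dc, n), x, atol=1e-10)
```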
Related papers
- FLEXtime: Filterbank learning for explaining time series [10.706092195673257]
We propose a new method for time series explainability called FLEXtime.
It uses a filterbank to split the time series into frequency bands and learns the optimal combinations of these bands.
Our evaluation shows that FLEXtime on average outperforms state-of-the-art explainability methods across a range of datasets.
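As a rough illustration of the filterbank idea (not FLEXtime's actual learned filters), the sketch below splits a series into frequency bands with rectangular rFFT masks; the real method learns smoother filters and the optimal combination of the bands.

```python
import numpy as np

def band_split(x, n_bands=4):
    """Split a series into frequency bands with rectangular rFFT masks --
    a minimal stand-in for a learned filterbank. The bands partition the
    spectrum, so they sum back to the original signal."""
    n = len(x)
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xb = np.zeros_like(X)
        Xb[lo:hi] = X[lo:hi]                 # keep only this band's bins
        bands.append(np.fft.irfft(Xb, n=n))
    return np.stack(bands)                   # shape (n_bands, n)

x = np.random.randn(128)
bands = band_split(x)
assert np.allclose(bands.sum(axis=0), x)
# An explanation would then learn per-band weights w and score w @ bands.
```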
arXiv Detail & Related papers (2024-11-06T15:06:42Z)
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Not All Frequencies Are Created Equal: Towards a Dynamic Fusion of Frequencies in Time-Series Forecasting [9.615808695919647]
Time series forecasting methods should be flexible when applied to different scenarios.
We propose Frequency Dynamic Fusion (FreDF), which individually predicts each Fourier component, and dynamically fuses the output of different frequencies.
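A minimal sketch of that idea follows, with scalar complex per-frequency predictors and softmax fusion weights standing in for the trained model; all parameter names here are hypothetical.

```python
import numpy as np

def fredf_step(history, coef, fuse_logits, horizon):
    """Sketch of the FreDF idea: forecast each rFFT component with its own
    (here: scalar complex) predictor, then fuse the per-frequency
    reconstructions with learned weights. coef and fuse_logits stand in
    for parameters the real model would train."""
    n = len(history)
    X = np.fft.rfft(history)
    per_freq = []
    for k in range(len(X)):
        Xk = np.zeros_like(X)
        Xk[k] = coef[k] * X[k]               # independent prediction per bin
        per_freq.append(np.fft.irfft(Xk, n=n)[:horizon])
    w = np.exp(fuse_logits) / np.exp(fuse_logits).sum()   # dynamic fusion
    return (w[:, None] * np.stack(per_freq)).sum(axis=0)

history = np.sin(np.linspace(0, 8 * np.pi, 96))
n_bins = 96 // 2 + 1
forecast = fredf_step(history, coef=np.ones(n_bins, dtype=complex),
                      fuse_logits=np.zeros(n_bins), horizon=24)
```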
arXiv Detail & Related papers (2024-07-17T08:54:41Z)
- ATFNet: Adaptive Time-Frequency Ensembled Network for Long-term Time Series Forecasting [7.694820760102176]
ATFNet is an innovative framework that combines a time domain module and a frequency domain module.
We introduce Dominant Harmonic Series Energy Weighting, a novel mechanism for adjusting the weights between the two modules.
Our Complex-valued Spectrum Attention mechanism offers a novel approach to discern the intricate relationships between different frequency combinations.
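The summary does not define the weighting precisely; one plausible reading, sketched below, sums spectral energy over the dominant bin and its integer harmonics and uses the fraction of total energy to weight the frequency module. This is an assumption about the mechanism, not ATFNet's actual formula.

```python
import numpy as np

def harmonic_energy_weight(x, n_harmonics=5):
    """Sketch of a 'dominant harmonic series energy' style weighting:
    find the strongest non-DC bin, sum energy at it and its integer
    multiples, and use the fraction of total energy as the frequency-module
    weight (the time module gets the remainder)."""
    X = np.abs(np.fft.rfft(x)) ** 2
    X[0] = 0.0                                   # ignore DC
    k0 = int(np.argmax(X))                       # fundamental bin
    harmonics = [k0 * m for m in range(1, n_harmonics + 1) if k0 * m < len(X)]
    w_freq = X[harmonics].sum() / X.sum()
    return w_freq, 1.0 - w_freq                  # (frequency, time) weights

t = np.arange(256)
print(harmonic_energy_weight(np.sin(2 * np.pi * t / 16)))  # ~1: strongly periodic
print(harmonic_energy_weight(np.random.randn(256)))        # small: energy spread out
```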
arXiv Detail & Related papers (2024-04-08T04:41:39Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a Masked Encoder-based Universal Time Series Forecasting Transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- WFTNet: Exploiting Global and Local Periodicity in Long-term Time Series Forecasting [61.64303388738395]
We propose a Wavelet-Fourier Transform Network (WFTNet) for long-term time series forecasting.
Tests on various time series datasets show WFTNet consistently outperforms other state-of-the-art baselines.
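As a toy illustration of why combining the two transforms helps (this is not WFTNet's architecture), a Fourier branch can recover a global period while a hand-rolled one-level Haar wavelet localizes short bursts in time.

```python
import numpy as np

def global_period(x):
    """Global periodicity via the dominant rFFT bin (Fourier branch)."""
    amp = np.abs(np.fft.rfft(x))
    amp[0] = 0.0
    k = int(np.argmax(amp))
    return len(x) / k if k else None

def haar_level(x):
    """One Haar wavelet level: local average + local detail (wavelet branch)."""
    pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
    approx = pairs.mean(axis=1)
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return approx, detail

t = np.arange(512)
x = np.sin(2 * np.pi * t / 64)      # global cycle of length 64
x[101:111] += 1.0                   # short local burst
print(global_period(x))             # ~64, from the Fourier branch
_, detail = haar_level(x)
print(int(np.argmax(np.abs(detail))) * 2)   # ~100: detail spikes at the burst edge
```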
arXiv Detail & Related papers (2023-09-20T13:44:18Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over short periods, which leaves a large gap between what deep models need and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
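A minimal sketch of that blueprint: transform once on the way in, apply all layers on the spectrum, and transform once on the way out, rather than paying an FFT/iFFT per layer. The pointwise complex weights and magnitude-gating nonlinearity below are stand-ins for T1's actual frequency-domain layers, not its real parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def t1_forward(x, layers):
    """'Transform once' sketch: one rFFT in, several frequency-domain layers
    (pointwise complex weights plus a phase-preserving magnitude gate),
    one irFFT out. Only the amortization of transforms is illustrated."""
    n = len(x)
    z = np.fft.rfft(x)
    for w in layers:                 # all mixing happens in frequency domain
        z = w * z
        mag = np.maximum(np.abs(z), 1e-12)
        z = z * np.tanh(mag) / mag   # soft gate on magnitude, phase kept
    return np.fft.irfft(z, n=n)

x = rng.standard_normal(128)
layers = [rng.standard_normal(65) + 1j * rng.standard_normal(65)
          for _ in range(3)]         # 128 // 2 + 1 = 65 rFFT bins per layer
y = t1_forward(x, layers)
```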
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
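A tiny sine-activated (SIREN-style) INR forward pass is sketched below with random weights; HyperTime's contribution is the hypernetwork that generates such weights, which is not shown here, and the shapes and initialization are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def inr_forward(t, params, w0=30.0):
    """Sine-activated implicit neural representation: maps a time coordinate
    in [0, 1] to a value. Because it is a function of continuous time, it can
    be evaluated at any resolution."""
    h = t.reshape(-1, 1)
    for W, b in params[:-1]:
        h = np.sin(w0 * (h @ W + b))     # sine activations on hidden layers
    W, b = params[-1]
    return (h @ W + b).ravel()           # linear output layer

dims = [1, 32, 32, 1]
params = [(rng.uniform(-1, 1, (i, o)) / i, np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]
t = np.linspace(0, 1, 200)
y = inr_forward(t, params)               # evaluate the representation anywhere
```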
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting [23.199388386249215]
We propose to combine the Transformer with a seasonal-trend decomposition method, in which the decomposition captures the global profile of the time series.
We exploit the fact that most time series tend to have a sparse representation in a well-known basis such as the Fourier basis.
Besides being more effective, the proposed method, termed Frequency Enhanced Decomposed Transformer (FEDformer), is more efficient than the standard Transformer.
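The two ingredients named above can be sketched in a few lines. This is illustrative only: FEDformer learns attention over the retained Fourier modes rather than simply selecting and reconstructing from them.

```python
import numpy as np

def decompose(x, kernel=25):
    """Seasonal-trend decomposition via moving average: the trend is a
    running mean, the seasonal part is the residual."""
    pad = kernel // 2
    xp = np.pad(x, pad, mode="edge")
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    return x - trend, trend                       # (seasonal, trend)

def sparse_modes(x, n_modes=8, rng=np.random.default_rng(0)):
    """Keep a random subset of Fourier modes -- the sparse frequency
    representation the summary refers to."""
    X = np.fft.rfft(x)
    keep = rng.choice(len(X), size=min(n_modes, len(X)), replace=False)
    Xs = np.zeros_like(X)
    Xs[keep] = X[keep]
    return np.fft.irfft(Xs, n=len(x))

x = np.sin(np.linspace(0, 12 * np.pi, 192)) + np.linspace(0, 2, 192)
seasonal, trend = decompose(x)
approx = sparse_modes(seasonal)   # a few modes capture a periodic seasonal part
```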
arXiv Detail & Related papers (2022-01-30T06:24:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.