FreDF: Learning to Forecast in Frequency Domain
- URL: http://arxiv.org/abs/2402.02399v1
- Date: Sun, 4 Feb 2024 08:23:41 GMT
- Title: FreDF: Learning to Forecast in Frequency Domain
- Authors: Hao Wang, Licheng Pan, Zhichao Chen, Degui Yang, Sen Zhang, Yifei
Yang, Xinggao Liu, Haoxuan Li, Dacheng Tao
- Abstract summary: Time series modeling is uniquely challenged by the presence of autocorrelation in both historical and label sequences.
We introduce the Frequency-enhanced Direct Forecast (FreDF), which bypasses the complexity of label autocorrelation by learning to forecast in the frequency domain.
- Score: 56.24773675942897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series modeling is uniquely challenged by the presence of
autocorrelation in both historical and label sequences. Current research
predominantly focuses on handling autocorrelation within the historical
sequence but often neglects its presence in the label sequence. Specifically,
emerging forecast models mainly conform to the direct forecast (DF) paradigm,
generating multi-step forecasts under the assumption of conditional
independence within the label sequence. This assumption disregards the inherent
autocorrelation in the label sequence, thereby limiting the performance of
DF-based models. In response to this gap, we introduce the Frequency-enhanced
Direct Forecast (FreDF), which bypasses the complexity of label autocorrelation
by learning to forecast in the frequency domain. Our experiments demonstrate
that FreDF substantially outperforms existing state-of-the-art methods,
including iTransformer, and is compatible with a variety of forecast models.
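
To make the abstract's idea concrete, below is a minimal sketch of a frequency-domain forecast loss in PyTorch. It is an illustration under assumptions, not the authors' implementation: the function name `fredf_loss`, the `alpha` weighting between the time- and frequency-domain terms, and the L1 norm on the complex FFT coefficients are choices made here for exposition, since the abstract does not specify the loss form.

```python
import torch

def fredf_loss(pred: torch.Tensor, label: torch.Tensor,
               alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical frequency-enhanced direct-forecast loss.

    pred, label: (batch, horizon, channels) forecast and label sequences.
    alpha: assumed weight trading off the frequency-domain term
           against the usual time-domain term.
    """
    # Standard direct-forecast objective: per-step error in the time
    # domain, which implicitly treats label steps as conditionally
    # independent.
    loss_time = torch.mean(torch.abs(pred - label))

    # Transform both sequences along the horizon axis; in the frequency
    # domain, step-wise autocorrelation in the label sequence is
    # decoupled across frequency components.
    pred_freq = torch.fft.rfft(pred, dim=1)
    label_freq = torch.fft.rfft(label, dim=1)
    loss_freq = torch.mean(torch.abs(pred_freq - label_freq))

    return (1 - alpha) * loss_time + alpha * loss_freq

# Usage with any direct-forecast model (hypothetical `model`, `history`):
#   pred = model(history)           # (batch, horizon, channels)
#   loss = fredf_loss(pred, label)  # backpropagate as usual
```

Because such a loss only wraps the model's output, it can be attached to any DF-style architecture, which is consistent with the compatibility claim above.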
Related papers
- FlexTSF: A Universal Forecasting Model for Time Series with Variable Regularities [17.164913785452367]
We propose FlexTSF, a universal time series forecasting model that generalizes better and supports both regular and irregular time series.
Experiments on 12 datasets show that FlexTSF outperforms state-of-the-art forecasting models designed for regular and irregular time series, respectively.
arXiv Detail & Related papers (2024-10-30T16:14:09Z)
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, which uses a single input/output projection layer while delegating the modeling of diverse time series patterns to a sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- DAM: Towards A Foundation Model for Time Series Forecasting [0.8231118867997028]
We propose a neural model that takes randomly sampled histories and outputs an adjustable basis composition as a continuous function of time.
It involves three key components: (1) a flexible approach for using randomly sampled histories from a long-tail distribution; (2) a transformer backbone trained on these actively sampled histories to produce, as representational output, (3) the basis coefficients of a continuous function of time.
arXiv Detail & Related papers (2024-07-25T08:48:07Z)
- Not All Frequencies Are Created Equal: Towards a Dynamic Fusion of Frequencies in Time-Series Forecasting [9.615808695919647]
Time series forecasting methods should be flexible when applied to different scenarios.
We propose Frequency Dynamic Fusion (FreDF), which individually predicts each Fourier component and dynamically fuses the outputs of different frequencies.
arXiv Detail & Related papers (2024-07-17T08:54:41Z)
- Non-autoregressive Conditional Diffusion Models for Time Series Prediction [3.9722979176564763]
TimeDiff is a non-autoregressive diffusion model that achieves high-quality time series prediction.
We show that TimeDiff consistently outperforms existing time series diffusion models.
arXiv Detail & Related papers (2023-06-08T08:53:59Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over short time periods, leaving a large gap between the data requirements of deep models and the limited, noisy series available.
We address the time series forecasting problem with generative modeling, proposing a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- First De-Trend then Attend: Rethinking Attention for Time-Series Forecasting [17.89566168289471]
We seek to understand the relationships between attention models in different time and frequency domains.
We propose a new method: TDformer (Trend Decomposition Transformer), that first applies seasonal-trend decomposition.
Experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance against existing attention-based models.
arXiv Detail & Related papers (2022-12-15T21:34:19Z)
- Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision [75.1860418333995]
Programmatic Weak Supervision (PWS) has emerged as a widespread paradigm to synthesize training labels efficiently.
The core component of PWS is the label model, which infers true labels by aggregating the outputs of multiple noisy supervision sources as labeling functions.
Existing statistical label models typically rely only on the outputs of LFs, ignoring instance features when modeling the underlying generative process.
arXiv Detail & Related papers (2022-10-06T07:28:53Z)
- Complex Event Forecasting with Prediction Suffix Trees: Extended Technical Report [70.7321040534471]
Complex Event Recognition (CER) systems have become popular in the past two decades due to their ability to "instantly" detect patterns on real-time streams of events.
However, there is a lack of methods for forecasting when a pattern might occur before the CER engine actually detects it.
We present a formal framework that attempts to address the issue of Complex Event Forecasting.
arXiv Detail & Related papers (2021-09-01T09:52:31Z)
- Model-Attentive Ensemble Learning for Sequence Modeling [86.4785354333566]
We present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts which leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions.
We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
arXiv Detail & Related papers (2021-02-23T05:23:35Z)