Cross-Frequency Time Series Meta-Forecasting
- URL: http://arxiv.org/abs/2302.02077v1
- Date: Sat, 4 Feb 2023 03:22:16 GMT
- Title: Cross-Frequency Time Series Meta-Forecasting
- Authors: Mike Van Ness, Huibin Shen, Hao Wang, Xiaoyong Jin, Danielle C.
Maddix, Karthick Gopalswamy
- Abstract summary: We introduce the Continuous Frequency Adapter (CFA), specifically designed to learn frequency-invariant representations.
CFA greatly improves performance when generalizing to unseen frequencies, providing a first step towards forecasting over larger multi-frequency datasets.
- Score: 7.809667883159047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-forecasting is a newly emerging field which combines meta-learning and
time series forecasting. The goal of meta-forecasting is to train over a
collection of source time series and generalize to new time series
one-at-a-time. Previous approaches in meta-forecasting achieve competitive
performance, but with the restriction of training a separate model for each
sampling frequency. In this work, we investigate meta-forecasting over
different sampling frequencies, and introduce a new model, the Continuous
Frequency Adapter (CFA), specifically designed to learn frequency-invariant
representations. We find that CFA greatly improves performance when
generalizing to unseen frequencies, providing a first step towards forecasting
over larger multi-frequency datasets.
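As a rough, non-authoritative illustration of the core idea, conditioning a single shared forecasting backbone on a continuous sampling-frequency value so that its hidden representations transfer across frequencies, a minimal PyTorch sketch follows. Everything in it (the FrequencyAdapter module, the FiLM-style modulation, the GRU backbone, the parameter names) is an assumption for exposition, not the authors' actual CFA architecture.

```python
# Illustrative sketch only: NOT the authors' CFA implementation.
# A shared encoder's hidden states are modulated by an embedding of the
# continuous sampling frequency, so series observed at unseen frequencies
# can reuse the same backbone.
import torch
import torch.nn as nn


class FrequencyAdapter(nn.Module):
    """Hypothetical adapter that rescales and shifts hidden states
    conditioned on a continuous sampling-frequency value (FiLM-style)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Map log-frequency to a per-channel scale and shift.
        self.film = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * hidden_dim),
        )

    def forward(self, h: torch.Tensor, freq: torch.Tensor) -> torch.Tensor:
        # h: (batch, time, hidden_dim); freq: (batch,), e.g. samples per day.
        gamma, beta = self.film(torch.log(freq).unsqueeze(-1)).chunk(2, dim=-1)
        return h * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)


class MetaForecaster(nn.Module):
    """Shared backbone + frequency adapter + forecasting head."""

    def __init__(self, hidden_dim: int = 64, horizon: int = 24):
        super().__init__()
        self.encoder = nn.GRU(1, hidden_dim, batch_first=True)
        self.adapter = FrequencyAdapter(hidden_dim)
        self.head = nn.Linear(hidden_dim, horizon)

    def forward(self, y: torch.Tensor, freq: torch.Tensor) -> torch.Tensor:
        h, _ = self.encoder(y.unsqueeze(-1))   # (batch, time, hidden_dim)
        h = self.adapter(h, freq)              # frequency-conditioned states
        return self.head(h[:, -1])             # (batch, horizon)


if __name__ == "__main__":
    model = MetaForecaster()
    context = torch.randn(8, 96)                # 8 series, 96 past observations
    freq = torch.full((8,), 24.0)               # e.g. hourly data (24 per day)
    print(model(context, freq).shape)           # torch.Size([8, 24])
```

In this sketch the same backbone is trained over series of many frequencies and the adapter merely rescales hidden states per frequency; the actual CFA may condition at different layers or in a different way entirely.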
Related papers
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, using a single input/output projection layer while delegating the modeling of diverse time series patterns to the sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Not All Frequencies Are Created Equal: Towards a Dynamic Fusion of Frequencies in Time-Series Forecasting [9.615808695919647]
Time series forecasting methods should be flexible when applied to different scenarios.
We propose Frequency Dynamic Fusion (FreDF), which individually predicts each Fourier component and dynamically fuses the outputs of different frequencies (a generic sketch of this frequency-domain idea appears after this list).
arXiv Detail & Related papers (2024-07-17T08:54:41Z)
- Deep Frequency Derivative Learning for Non-stationary Time Series Forecasting [12.989064148254936]
We present a deep frequency derivative learning framework, DERITS, for non-stationary time series forecasting.
Specifically, DERITS is built upon a novel reversible transformation, namely the Frequency Derivative Transformation (FDT).
arXiv Detail & Related papers (2024-06-29T17:56:59Z)
- MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention mechanism-based predictor and a conditional discriminator to optimize prediction results at the coarse-grained level.
arXiv Detail & Related papers (2024-05-30T03:32:44Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a masked encoder-based universal time series forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- FreDF: Learning to Forecast in Frequency Domain [56.24773675942897]
Time series modeling is uniquely challenged by the presence of autocorrelation in both historical and label sequences.
We introduce the Frequency-enhanced Direct Forecast (FreDF) which bypasses the complexity of label autocorrelation by learning to forecast in the frequency domain.
arXiv Detail & Related papers (2024-02-04T08:23:41Z)
- Meta-Forecasting by combining Global Deep Representations with Local Adaptation [12.747008878068314]
We introduce a novel forecasting method called Meta Global-Local Auto-Regression (Meta-GLAR).
It adapts to each time series by learning in closed-form the mapping from the representations produced by a recurrent neural network (RNN) to one-step-ahead forecasts.
Our method is competitive with the state-of-the-art in out-of-sample forecasting accuracy reported in earlier work.
arXiv Detail & Related papers (2021-11-05T11:45:02Z)
- Deep Autoregressive Models with Spectral Attention [74.08846528440024]
We propose a forecasting architecture that combines deep autoregressive models with a Spectral Attention (SA) module.
By characterizing the embedding of the time series in the spectral domain as occurrences of a random process, our method can identify global trends and seasonality patterns.
Two spectral attention models, one global and one local to the time series, integrate this information into the forecast and perform spectral filtering to remove noise from the time series.
arXiv Detail & Related papers (2021-07-13T11:08:47Z)
- Improving the Accuracy of Global Forecasting Models using Time Series Data Augmentation [7.38079566297881]
Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFM), have shown promising results in forecasting competitions and real-world applications.
We propose a novel, data augmentation based forecasting framework that is capable of improving the baseline accuracy of GFM models in less data-abundant settings.
arXiv Detail & Related papers (2020-08-06T13:52:20Z)
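Several of the entries above (the two FreDF papers, DERITS, and the spectral-attention model) share the idea of forecasting in, or reasoning about, the frequency domain. The following PyTorch snippet is a generic, hedged sketch of that shared idea and not the method of any specific paper listed here: it transforms the context window with an FFT, maps past spectral coefficients to future ones with a single complex-valued linear layer, gates each frequency bin with a learnable weight, and inverts back to the time domain. All names and shapes are assumptions.

```python
# Generic frequency-domain forecasting sketch; not any listed paper's method.
import torch
import torch.nn as nn


class FrequencyDomainForecaster(nn.Module):
    def __init__(self, context_len: int = 96, horizon: int = 24):
        super().__init__()
        self.horizon = horizon
        in_bins = context_len // 2 + 1    # rfft bins of the context window
        out_bins = horizon // 2 + 1       # rfft bins of the forecast window
        # One complex-valued linear map, parameterized by real/imaginary parts.
        self.w_real = nn.Linear(in_bins, out_bins)
        self.w_imag = nn.Linear(in_bins, out_bins)
        # Learnable per-bin gates, a static stand-in for the input-dependent
        # "dynamic fusion of frequencies" described in the FreDF entry above.
        self.fusion_logits = nn.Parameter(torch.zeros(out_bins))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, context_len) real-valued series.
        spec = torch.fft.rfft(y, dim=-1)                     # complex spectrum
        # (W_r + i W_i)(x_r + i x_i) expanded into real arithmetic:
        pred_real = self.w_real(spec.real) - self.w_imag(spec.imag)
        pred_imag = self.w_real(spec.imag) + self.w_imag(spec.real)
        pred_spec = torch.complex(pred_real, pred_imag)      # future spectrum
        gates = torch.sigmoid(self.fusion_logits)            # weight each bin
        return torch.fft.irfft(pred_spec * gates, n=self.horizon, dim=-1)


if __name__ == "__main__":
    model = FrequencyDomainForecaster()
    history = torch.randn(4, 96)        # 4 series, 96 past observations
    print(model(history).shape)         # torch.Size([4, 24])
```

A real system would learn these maps from data and would likely make the per-bin weighting input-dependent rather than a fixed parameter, closer to what the dynamic-fusion summary above describes.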
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.