FrAug: Frequency Domain Augmentation for Time Series Forecasting
- URL: http://arxiv.org/abs/2302.09292v1
- Date: Sat, 18 Feb 2023 11:25:42 GMT
- Title: FrAug: Frequency Domain Augmentation for Time Series Forecasting
- Authors: Muxi Chen, Zhijian Xu, Ailing Zeng, Qiang Xu
- Abstract summary: Data augmentation (DA) has become a de facto solution to expand training data size for deep learning.
This paper proposes simple yet effective frequency domain augmentation techniques that ensure the semantic consistency of augmented data-label pairs in forecasting.
Our results show that FrAug can boost the forecasting accuracy of TSF models in most cases.
- Score: 6.508992154478217
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data augmentation (DA) has become a de facto solution to expand training data
size for deep learning. With the proliferation of deep models for time series
analysis, various time series DA techniques are proposed in the literature,
e.g., cropping-, warping-, flipping-, and mixup-based methods. However, these
augmentation methods mainly apply to time series classification and anomaly
detection tasks. In time series forecasting (TSF), we need to model the
fine-grained temporal relationship within time series segments to generate
accurate forecasting results given data in a look-back window. Existing DA
solutions in the time domain would break such a relationship, leading to poor
forecasting accuracy. To tackle this problem, this paper proposes simple yet
effective frequency domain augmentation techniques that ensure the semantic
consistency of augmented data-label pairs in forecasting, named FrAug. We
conduct extensive experiments on eight widely-used benchmarks with several
state-of-the-art TSF deep models. Our results show that FrAug can boost the
forecasting accuracy of TSF models in most cases. Moreover, we show that FrAug
enables models trained with 1% of the original training data to achieve
similar performance to the ones trained on full training data, which is
particularly attractive for cold-start forecasting. Finally, we show that
applying test-time training with FrAug greatly improves forecasting accuracy
for time series with significant distribution shifts, which often occur in
real-life TSF applications. Our code is available at
https://anonymous.4open.science/r/Fraug-more-results-1785.
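The frequency-domain augmentation idea described in the abstract can be sketched roughly as follows (a minimal NumPy illustration; the function names, the `rate` parameter, and the exact masking scheme are assumptions for exposition, not the paper's implementation):

```python
import numpy as np

def freq_mask(window, rate=0.1, rng=None):
    """Zero out a random subset of frequency components of a time
    series window, then transform back to the time domain.

    To keep data-label pairs semantically consistent for forecasting,
    `window` would typically be the look-back segment concatenated
    with its forecasting horizon, augmented jointly.
    """
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.rfft(window)                # real FFT of the series
    mask = rng.random(spec.shape) < rate      # components to drop
    spec[mask] = 0.0                          # zero selected frequencies
    return np.fft.irfft(spec, n=len(window))  # back to the time domain

def freq_mix(x1, x2, rate=0.1, rng=None):
    """Replace a random subset of x1's frequency components with the
    corresponding components of another window x2."""
    rng = np.random.default_rng() if rng is None else rng
    s1, s2 = np.fft.rfft(x1), np.fft.rfft(x2)
    mask = rng.random(s1.shape) < rate        # components to swap in
    s1[mask] = s2[mask]
    return np.fft.irfft(s1, n=len(x1))
```

Because both operations perturb isolated frequency components rather than cropping or warping the time axis, the fine-grained temporal relationship between the look-back window and the forecasting horizon is preserved.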
Related papers
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- DeformTime: Capturing Variable Dependencies with Deformable Attention for Time Series Forecasting [0.34530027457862006]
We present DeformTime, a neural network architecture that attempts to capture correlated temporal patterns from the input space.
We conduct extensive experiments on 6 MTS data sets, using previously established benchmarks as well as challenging infectious disease modelling tasks.
Results demonstrate that DeformTime improves accuracy against previous competitive methods across the vast majority of MTS forecasting tasks.
arXiv Detail & Related papers (2024-06-11T16:45:48Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show it is a strong zero-shot baseline and benefits from further scaling, both in model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning [73.75282761503581]
We propose DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data.
Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13%.
arXiv Detail & Related papers (2023-08-11T09:36:31Z)
- Mitigating Cold-start Forecasting using Cold Causal Demand Forecasting Model [10.132124789018262]
We introduce the Cold Causal Demand Forecasting (CDF-cold) framework that integrates causal inference with deep learning-based models.
Our experiments demonstrate that the CDF-cold framework outperforms state-of-the-art forecasting models in predicting future values of multivariate time series data.
arXiv Detail & Related papers (2023-06-15T16:36:34Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
It is common that real-world time series data are recorded over a short time period, which leaves a large gap between deep models and the limited, noisy series available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Split Time Series into Patches: Rethinking Long-term Series Forecasting with Dateformer [17.454822366228335]
Time is one of the most significant characteristics of time-series, yet has received insufficient attention.
We propose Dateformer, which turns attention to modeling time instead of following the above practice.
Dateformer yields state-of-the-art accuracy with a remarkable 40% relative improvement, and broadens the maximum credible forecasting range to a half-yearly level.
arXiv Detail & Related papers (2022-07-12T08:58:44Z)
- Meta-Forecasting by combining Global Deep Representations with Local Adaptation [12.747008878068314]
We introduce a novel forecasting method called Meta Global-Local Auto-Regression (Meta-GLAR).
It adapts to each time series by learning in closed-form the mapping from the representations produced by a recurrent neural network (RNN) to one-step-ahead forecasts.
Our method is competitive with the state-of-the-art in out-of-sample forecasting accuracy reported in earlier work.
arXiv Detail & Related papers (2021-11-05T11:45:02Z)
- Improving the Accuracy of Global Forecasting Models using Time Series Data Augmentation [7.38079566297881]
Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFM), have shown promising results in forecasting competitions and real-world applications.
We propose a novel, data augmentation based forecasting framework that is capable of improving the baseline accuracy of GFM models in less data-abundant settings.
arXiv Detail & Related papers (2020-08-06T13:52:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.