Towards Diverse and Coherent Augmentation for Time-Series Forecasting
- URL: http://arxiv.org/abs/2303.14254v1
- Date: Fri, 24 Mar 2023 19:40:34 GMT
- Title: Towards Diverse and Coherent Augmentation for Time-Series Forecasting
- Authors: Xiyuan Zhang, Ranak Roy Chowdhury, Jingbo Shang, Rajesh Gupta, Dezhi Hong
- Abstract summary: Time-series data augmentation mitigates the issue of insufficient training data for deep learning models.
We propose STAug, which combines Spectral and Time Augmentation to generate more diverse and coherent samples.
Experiments on five real-world time-series datasets demonstrate that STAug outperforms the base models without data augmentation.
- Score: 22.213927377926804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-series data augmentation mitigates the issue of insufficient training
data for deep learning models. Yet, existing augmentation methods are mainly
designed for classification, where class labels can be preserved even if
augmentation alters the temporal dynamics. We note that augmentation designed
for forecasting requires diversity as well as coherence with the original
temporal dynamics. As time-series data generated by real-life physical
processes exhibit characteristics in both the time and frequency domains, we
propose to combine Spectral and Time Augmentation (STAug) for generating more
diverse and coherent samples. Specifically, in the frequency domain, we use the
Empirical Mode Decomposition to decompose a time series and reassemble the
subcomponents with random weights. This way, we generate diverse samples while
being coherent with the original temporal relationships as they contain the
same set of base components. In the time domain, we adapt a mix-up strategy
that generates diverse as well as linearly in-between coherent samples.
Experiments on five real-world time-series datasets demonstrate that STAug
outperforms the base models without data augmentation as well as
state-of-the-art augmentation methods.
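The two-step procedure described in the abstract is concrete enough to sketch in code. The following is a minimal illustration, not the authors' implementation: it assumes the PyEMD package (pip name EMD-signal) for Empirical Mode Decomposition, and the weight range, Beta-distributed mix-up coefficient, and function names are hypothetical choices made for illustration.

```python
# Sketch of an STAug-style augmentation, following the abstract's description.
# Assumptions (not from the paper's code): PyEMD for EMD, uniform random
# reassembly weights, and a Beta-distributed mix-up coefficient.
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def spectral_augment(x, rng):
    """Frequency domain: decompose x into IMFs via EMD, then reassemble
    the subcomponents with random weights (weight range is an assumption)."""
    imfs = EMD()(x)                                  # shape: (n_imfs, len(x))
    weights = rng.uniform(0.0, 2.0, size=len(imfs))  # one weight per IMF
    return (weights[:, None] * imfs).sum(axis=0)

def time_augment(x1, x2, rng, alpha=0.5):
    """Time domain: mix-up, a convex combination of two series,
    yielding a linearly in-between sample."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 256)
x1 = np.sin(t) + 0.1 * rng.standard_normal(t.size)
x2 = np.cos(t) + 0.1 * rng.standard_normal(t.size)
augmented = time_augment(spectral_augment(x1, rng), spectral_augment(x2, rng), rng)
```

In a forecasting setting the same mixing coefficient would presumably be applied to both the input window and its target horizon, so that input/output pairs remain consistent.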
Related papers
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, using a single input/output projection layer while delegating the modeling of diverse time series patterns to the sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- TimeDiT: General-purpose Diffusion Transformers for Time Series Foundation Model [11.281386703572842]
A family of models has been developed that uses a temporal auto-regressive generative Transformer architecture.
TimeDiT is a general foundation model for time series that employs a denoising diffusion paradigm instead of temporal auto-regressive generation.
Extensive experiments on a variety of tasks, such as forecasting, imputation, and anomaly detection, demonstrate the effectiveness of TimeDiT.
arXiv Detail & Related papers (2024-09-03T22:31:57Z)
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting [24.834846119163885]
We propose a novel framework, TEMPO, that can effectively learn time series representations.
TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains.
arXiv Detail & Related papers (2023-10-08T00:02:25Z)
- Learning Gaussian Mixture Representations for Tensor Time Series Forecasting [8.31607451942671]
We develop a novel TTS forecasting framework that individually models each heterogeneity component implied by the time, location, and source variables.
Experiment results on two real-world TTS datasets verify the superiority of our approach compared with the state-of-the-art baselines.
arXiv Detail & Related papers (2023-06-01T06:50:47Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over short periods, leaving a large gap between deep models and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Pay Attention to Evolution: Time Series Forecasting with Deep Graph-Evolution Learning [33.79957892029931]
This work presents a novel neural network architecture for time-series forecasting.
We named our method Recurrent Graph Evolution Neural Network (ReGENN).
An extensive set of experiments was conducted comparing ReGENN with dozens of ensemble methods and classical statistical ones.
arXiv Detail & Related papers (2020-08-28T20:10:07Z)