A Novel Method Combines Moving Fronts, Data Decomposition and Deep
Learning to Forecast Intricate Time Series
- URL: http://arxiv.org/abs/2303.06394v1
- Date: Sat, 11 Mar 2023 12:07:26 GMT
- Title: A Novel Method Combines Moving Fronts, Data Decomposition and Deep
Learning to Forecast Intricate Time Series
- Authors: Debdarsan Niyogi
- Abstract summary: Indian Summer Monsoon Rainfall (ISMR) is a very complex time series.
The conventional one-time decomposition technique suffers from a leak of information from the future.
A Moving Front (MF) method is proposed to prevent this data leakage.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A univariate time series with high variability can pose a challenge even to a
Deep Neural Network (DNN). To overcome this, a univariate time series is
decomposed into simpler constituent series, whose sum equals the original
series. As demonstrated in this article, the conventional one-time
decomposition technique suffers from a leak of information from the future,
referred to as a data leak. In this work, a novel Moving Front (MF) method is
proposed to prevent data leakage, so that the decomposed series can be treated
like other time series. Indian Summer Monsoon Rainfall (ISMR) is a very complex
time series that poses a challenge to DNNs and is therefore selected as an
example. From the many signal processing tools available, Empirical Wavelet
Transform (EWT) was chosen for decomposing the ISMR into simpler constituent
series, as it was found to be more effective than the other popular algorithm,
Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN).
The proposed MF method was used to generate the constituent leakage-free time
series. Predictions and forecasts were made with a state-of-the-art Long
Short-Term Memory (LSTM) network architecture, which is especially suitable for
modeling sequential patterns. The constituent MF series were divided into
training, testing, and forecasting segments. The model developed here
(EWT-MF-LSTM) made exceptionally good train and test predictions, as well as
Walk-Forward Validation (WFV) forecasts, with Performance Parameter ($PP$)
values of 0.99, 0.86, and 0.95, respectively, where $PP$ = 1.0 signifies
perfect reproduction of the data.
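The moving-front idea can be illustrated with a short sketch: at every step the series is re-decomposed using only the data observed so far, so no future values leak into the constituent series. The causal moving-average split below is a deliberately simple stand-in for EWT, and the function name, window sizes, and two-component split are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def moving_front_decompose(series, window=12, min_history=24):
    """Leakage-free decomposition sketch: at each step t, decompose only
    the data observed up to t (the "moving front"). A causal moving
    average stands in for the paper's EWT decomposition."""
    trends, residuals = [], []
    for t in range(min_history, len(series) + 1):
        history = series[:t]                      # moving front: no future data
        trend = np.convolve(history, np.ones(window) / window, mode="valid")
        trends.append(trend[-1])                  # keep only the newest value
        residuals.append(history[-1] - trend[-1])
    return np.array(trends), np.array(residuals)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))               # synthetic random-walk series
trend, resid = moving_front_decompose(x)
# At every step the constituent values sum back to the observed value,
# mirroring the requirement that the decomposed series sum to the original.
assert np.allclose(trend + resid, x[23:])
```

A conventional one-time decomposition would instead run the transform once over the full series, so early decomposed values would implicitly depend on later observations; re-decomposing inside the loop is what removes that leak.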
Related papers
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, using a single input/output projection layer while delegating the modeling of diverse time series patterns to the sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z) - TimeCMA: Towards LLM-Empowered Time Series Forecasting via Cross-Modality Alignment [21.690191536424567]
TimeCMA is a framework for time series forecasting with cross-modality alignment.
Extensive experiments on real data offer insight into the accuracy and efficiency of the proposed framework.
arXiv Detail & Related papers (2024-06-03T00:27:29Z) - Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a Masked Encoder-based Universal Time Series Forecasting Transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z) - A novel decomposed-ensemble time series forecasting framework: capturing
underlying volatility information [6.590038231008498]
We propose a novel time series forecasting paradigm that integrates decomposition with the capability to capture the underlying fluctuation information of the series.
Both the numerical data and the volatility information for each sub-mode are harnessed to train a neural network.
This network is adept at predicting the information of the sub-modes, and we aggregate the predictions of all sub-modes to generate the final output.
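The decompose-then-ensemble recipe described above can be sketched in a few lines: fit one small model per sub-mode and sum their one-step forecasts. The AR fit below is a deliberately simple stand-in for the per-sub-mode neural network the paper trains, and all names and parameters are illustrative assumptions:

```python
import numpy as np

def fit_ar(component, p=3):
    """Fit a small AR(p) model to one sub-mode (a stand-in for the
    per-sub-mode neural network described in the paper)."""
    X = np.column_stack([component[i:len(component) - p + i] for i in range(p)])
    y = component[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_decomposed(components, p=3):
    """Predict each sub-mode separately, then sum the per-mode
    predictions to form the final forecast."""
    preds = []
    for c in components:
        coef = fit_ar(c, p)
        preds.append(c[-p:] @ coef)               # one-step-ahead forecast
    return sum(preds)                              # aggregate across sub-modes

t = np.linspace(0, 8 * np.pi, 400)
components = [np.sin(t), 0.3 * np.sin(5 * t)]      # pretend decomposed sub-modes
print(forecast_decomposed(components))             # one-step forecast of their sum
```

Forecasting each simpler sub-mode and aggregating is the common thread between this paper and the EWT-MF-LSTM pipeline above; only the per-mode model differs.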
arXiv Detail & Related papers (2023-10-13T01:50:43Z) - CARD: Channel Aligned Robust Blend Transformer for Time Series
Forecasting [50.23240107430597]
We design a special Transformer, i.e., Channel Aligned Robust Blend Transformer (CARD for short), that addresses key shortcomings of CI type Transformer in time series forecasting.
First, CARD introduces a channel-aligned attention structure that allows it to capture temporal correlations among signals.
Second, in order to efficiently utilize the multi-scale knowledge, we design a token blend module to generate tokens with different resolutions.
Third, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue.
arXiv Detail & Related papers (2023-05-20T05:16:31Z) - Explainable Parallel RCNN with Novel Feature Representation for Time
Series Forecasting [0.0]
Time series forecasting is a fundamental challenge in data science.
We develop a parallel deep learning framework composed of RNN and CNN.
Extensive experiments on three datasets reveal the effectiveness of our method.
arXiv Detail & Related papers (2023-05-08T17:20:13Z) - Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
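The random-masking step can be illustrated with a minimal sketch. Assuming the series is first cut into fixed-length patches, a random fraction is hidden and only the remainder would be fed to the encoder, with the decoder trained to reconstruct the hidden points; the helper below is an illustrative assumption, not Ti-MAE's actual embedding pipeline:

```python
import numpy as np

def random_mask(patches, mask_ratio=0.75, seed=0):
    """Randomly hide a fraction of time-series patches, as in masked
    autoencoding: the encoder sees only the kept patches and the
    decoder reconstructs the masked ones at the point level."""
    rng = np.random.default_rng(seed)
    n = len(patches)
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])              # indices shown to the encoder
    mask_idx = np.sort(perm[n_keep:])              # indices to be reconstructed
    return patches[keep_idx], keep_idx, mask_idx

series = np.sin(np.linspace(0, 4 * np.pi, 64))
patches = series.reshape(16, 4)                    # 16 patches of length 4
visible, keep_idx, mask_idx = random_mask(patches)
print(len(visible), len(mask_idx))                 # 4 visible, 12 masked at ratio 0.75
```

The reconstruction loss would then be computed only on the masked indices, which is what pushes the encoder to learn representations of the raw series rather than copy its input.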
arXiv Detail & Related papers (2023-01-21T03:20:23Z) - Generative Time Series Forecasting with Diffusion, Denoise, and
Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
It is common that real-world time series data are recorded in a short time period, which results in a big gap between the deep model and the limited and noisy time series.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Time Series Forecasting via Learning Convolutionally Low-Rank Models [18.61160269442917]
Recently, Liu et al. (2019) studied the rather challenging problem of time series forecasting from the perspective of compressed sensing.
They proposed a no-learning method, named Convolution Nuclear Norm Minimization (CNNM), and proved that CNNM can exactly recover the future part of a series from its observed part.
This paper tries to approach the issues by integrating a learnable, orthonormal transformation into CNNM.
We prove that the resulting model, termed Learning-Based CNNM (LbCNNM), strictly succeeds in identifying the future part of a series.
arXiv Detail & Related papers (2021-04-23T09:53:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.