Leveraging Non-Decimated Wavelet Packet Features and Transformer Models for Time Series Forecasting
- URL: http://arxiv.org/abs/2403.08630v1
- Date: Wed, 13 Mar 2024 15:45:29 GMT
- Title: Leveraging Non-Decimated Wavelet Packet Features and Transformer Models for Time Series Forecasting
- Authors: Guy P. Nason and James L. Wei
- Abstract summary: We consider the use of Daubechies wavelets with different numbers of vanishing moments as input features to both non-temporal and temporal forecasting methods.
We evaluate the use of these wavelet features on a significantly wider set of forecasting methods than previous studies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article combines wavelet analysis techniques with machine learning
methods for univariate time series forecasting, focusing on three main
contributions. Firstly, we consider the use of Daubechies wavelets with
different numbers of vanishing moments as input features to both non-temporal
and temporal forecasting methods, by selecting these numbers during the
cross-validation phase. Secondly, we compare the use of both the non-decimated
wavelet transform and the non-decimated wavelet packet transform for computing
these features, the latter providing a much larger set of potentially useful
coefficient vectors. The wavelet coefficients are computed using a shifted
version of the typical pyramidal algorithm to ensure no leakage of future
information into these inputs. Thirdly, we evaluate the use of these wavelet
features on a significantly wider set of forecasting methods than previous
studies, including both temporal and non-temporal models, and both statistical
and deep learning-based methods. The latter include state-of-the-art
transformer-based neural network architectures. Our experiments suggest
significant benefit in replacing higher-order lagged features with wavelet
features across all examined non-temporal methods for one-step-forward
forecasting, and modest benefit when used as inputs for temporal deep
learning-based models for long-horizon forecasting.
Related papers
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- Wave-Mask/Mix: Exploring Wavelet-Based Augmentations for Time Series Forecasting [0.0]
This research introduces two augmentation approaches using the discrete wavelet transform (DWT) to adjust frequency elements while preserving temporal dependencies in time series data.
To the best of our knowledge, this is the first study to conduct extensive experiments on multivariate time series using DWT. (A generic sketch of DWT-based augmentation follows this entry.)
arXiv Detail & Related papers (2024-08-20T15:42:10Z)
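The Wave-Mask/Mix entry above invites a generic illustration of DWT-based augmentation: decompose, perturb the detail coefficients, and reconstruct. This sketch shows the general idea only, not the paper's exact Wave-Mask or Wave-Mix procedures; the Bernoulli masking rule and keep_prob are assumptions.

```python
import numpy as np
import pywt  # pip install PyWavelets

def dwt_mask_augment(x, wavelet="db4", level=3, keep_prob=0.8, rng=None):
    """Generic DWT augmentation sketch: randomly zero detail coefficients
    at each scale, then reconstruct, perturbing frequency content while
    keeping the coarse trend (the approximation coefficients) intact."""
    rng = np.random.default_rng(rng)
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=level)
    cA, details = coeffs[0], coeffs[1:]
    masked = [d * rng.binomial(1, keep_prob, size=d.shape) for d in details]
    out = pywt.waverec([cA] + masked, wavelet)
    return out[: len(x)]  # waverec may pad by one sample for odd lengths
```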
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- Stecformer: Spatio-temporal Encoding Cascaded Transformer for Multivariate Long-term Time Series Forecasting [11.021398675773055]
We propose a complete solution addressing both feature extraction and target prediction.
For extraction, we design an efficient spatio-temporal encoding extractor, including a semi-adaptive graph, to acquire sufficient spatio-temporal information.
For prediction, we propose a Cascaded Decoding Predictor (CDP) to strengthen the correlation between different intervals.
arXiv Detail & Related papers (2023-05-25T13:00:46Z)
- CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting [50.23240107430597]
We design a special Transformer, i.e., Channel Aligned Robust Blend Transformer (CARD for short), that addresses key shortcomings of CI type Transformer in time series forecasting.
First, CARD introduces a channel-aligned attention structure that allows it to capture both temporal correlations among signals and dynamical dependence among multiple variables over time.
Second, in order to efficiently utilize the multi-scale knowledge, we design a token blend module to generate tokens with different resolutions.
Third, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue.
arXiv Detail & Related papers (2023-05-20T05:16:31Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over a short period, leaving a large gap between the capacity of deep models and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Conformal Prediction Bands for Two-Dimensional Functional Time Series [0.0]
Time-evolving surfaces can be modeled as two-dimensional functional time series, exploiting the tools of functional data analysis.
The main focus revolves around Conformal Prediction, a versatile non-parametric paradigm used to quantify uncertainty in prediction problems.
A probabilistic forecasting scheme for two-dimensional functional time series is presented, extending Functional Autoregressive Processes of order one to this setting. (A sketch of the basic split-conformal construction follows this entry.)
arXiv Detail & Related papers (2022-07-27T17:23:14Z)
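For readers new to the paradigm named in the entry above, the basic one-dimensional split-conformal band is easy to state. The sketch below is that textbook construction under an exchangeability assumption, not the paper's two-dimensional functional extension, and the helper name is illustrative.

```python
import numpy as np

def split_conformal_band(y_cal, y_cal_pred, y_new_pred, alpha=0.1):
    """Textbook split-conformal interval: the finite-sample-corrected
    (1 - alpha) quantile of absolute calibration residuals gives a
    symmetric band around new predictions, with marginal coverage
    at least 1 - alpha under exchangeability."""
    residuals = np.abs(np.asarray(y_cal) - np.asarray(y_cal_pred))
    n = len(residuals)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, q_level)
    return y_new_pred - q, y_new_pred + q
```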
- A Differential Attention Fusion Model Based on Transformer for Time Series Forecasting [4.666618110838523]
Time series forecasting is widely used in equipment life-cycle forecasting, weather forecasting, traffic flow forecasting, and other fields.
Some scholars have tried to apply Transformer to time series forecasting because of its powerful parallel training ability.
The existing Transformer methods do not pay enough attention to the small time segments that play a decisive role in prediction.
arXiv Detail & Related papers (2022-02-23T10:33:12Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Benchmarking Deep Learning Interpretability in Time Series Predictions [41.13847656750174]
Saliency methods are used extensively to highlight the importance of input features in model predictions.
We set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures.
arXiv Detail & Related papers (2020-10-26T22:07:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.