An Attention Free Long Short-Term Memory for Time Series Forecasting
- URL: http://arxiv.org/abs/2209.09548v1
- Date: Tue, 20 Sep 2022 08:23:49 GMT
- Title: An Attention Free Long Short-Term Memory for Time Series Forecasting
- Authors: Hugo Inzirillo and Ludovic De Villelongue
- Abstract summary: We focus on time series forecasting with an attention-free mechanism, a more efficient framework, and propose a new architecture for time series prediction.
We propose an architecture built from attention-free LSTM layers that outperforms linear models for conditional variance prediction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning is playing an increasingly important role in time series analysis. We focus on time series forecasting with an attention-free mechanism, a more efficient framework, and propose a new architecture for time series prediction in settings where linear models seem unable to capture the time dependence. We propose an architecture built from attention-free LSTM layers that outperforms linear models for conditional variance prediction. Our findings confirm the validity of our model, which also improves the prediction capacity of an LSTM while making the learning task more efficient.
Related papers
- Local Attention Mechanism: Boosting the Transformer Architecture for Long-Sequence Time Series Forecasting [8.841114905151152]
Local Attention Mechanism (LAM) is an efficient attention mechanism tailored for time series analysis.
LAM exploits the continuity properties of time series to reduce the number of attention scores computed.
We present an algorithm for implementing LAM in tensor algebra that runs in O(n log n) time and memory.
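For intuition, a toy PyTorch sketch of windowed local attention follows; it masks scores outside a fixed window rather than using LAM's tensor-algebra formulation, so names and details here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def local_attention(q, k, v, window=16):
    """Toy local attention: each query attends only to keys within a fixed
    window of its own position. This version still materializes the full
    score matrix before masking; a banded implementation (as LAM targets)
    would avoid computing the out-of-window scores at all."""
    n = q.size(1)
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5   # (batch, n, n)
    idx = torch.arange(n)
    outside = (idx[None, :] - idx[:, None]).abs() > window
    scores = scores.masked_fill(outside, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```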
arXiv Detail & Related papers (2024-10-04T11:32:02Z)
- Low-Rank Adaptation of Time Series Foundational Models for Out-of-Domain Modality Forecasting [5.354055742467354]
Low-Rank Adaptation (LoRA) is a technique for fine-tuning large pre-trained or foundational models across different modalities and tasks.
This paper examines the impact of LoRA on contemporary time series foundational models: Lag-Llama, MOIRAI, and Chronos.
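Below is a minimal sketch of the standard LoRA reparameterization (frozen base weights plus a trainable low-rank update). The paper applies LoRA to the listed foundational models; this generic wrapper is only an assumed illustration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-Rank Adaptation of a frozen linear layer:
    y = W x + (B A) x * scale, where only A and B (rank r) are trained."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False             # freeze pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```

Zero-initializing B makes the adapted model start out identical to the pre-trained one, which is the usual LoRA design choice.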
arXiv Detail & Related papers (2024-05-16T16:05:33Z)
- Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By combining the losses from multiple tasks, we learn effective representations for the downstream forecasting task.
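The momentum-update ingredient can be sketched as a simple exponential moving average over encoder parameters; the helper below is a generic illustration of momentum contrastive learning, not DE-TSMCL's exact procedure.

```python
import copy
import torch

@torch.no_grad()
def momentum_update(online, target, m=0.99):
    """EMA update used in momentum contrastive learning: the target encoder
    slowly tracks the online encoder instead of receiving gradients."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_(p_o, alpha=1 - m)

# usage sketch: the target starts as a detached copy of the online encoder,
#   target = copy.deepcopy(online)
# and momentum_update(online, target) runs after each optimizer step.
```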
arXiv Detail & Related papers (2024-01-31T12:52:10Z)
- Embedded feature selection in LSTM networks with multi-objective evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
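As a simplified stand-in for the embedded-selection idea (the paper instead evolves the selection with multi-objective evolutionary ensemble learning), a learnable per-feature gate with a sparsity penalty might look like this:

```python
import torch
import torch.nn as nn

class MaskedLSTMForecaster(nn.Module):
    """Simplified stand-in for embedded feature selection: a learnable gate
    per input feature, pushed toward sparsity by an L1-style penalty.
    Illustrates 'selection inside the network', not the paper's method."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_features))
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time, n_features)
        h, _ = self.lstm(x * torch.sigmoid(self.gate))
        return self.head(h[:, -1])              # forecast from last state

    def sparsity_penalty(self):
        return torch.sigmoid(self.gate).sum()   # add to the training loss
```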
arXiv Detail & Related papers (2023-12-29T08:42:10Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectories, human motion, driving scenes, traffic flow, and weather forecasting.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Bootstrap Motion Forecasting With Self-Consistent Constraints [52.88100002373369]
We present a novel framework to bootstrap motion forecasting with self-consistent constraints.
The motion forecasting task aims at predicting future trajectories of vehicles by incorporating spatial and temporal information from the past.
We show that our proposed scheme consistently improves the prediction performance of several existing methods.
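One plausible reading of a self-consistency constraint, sketched below under assumed transform/inverse augmentations, is that predictions made from a transformed view of the history should map back onto the predictions made from the original view:

```python
import torch
import torch.nn.functional as F

def self_consistency_loss(model, history, transform, inverse):
    """Sketch of a self-consistency constraint for trajectory forecasting.
    `transform` / `inverse` are assumed to be a pose augmentation (e.g. a
    rigid rotation of the scene) and its inverse; both names are
    hypothetical, not the paper's API."""
    pred = model(history)                       # (batch, horizon, 2) trajectories
    pred_aug = model(transform(history))        # forecast from the altered view
    return F.mse_loss(inverse(pred_aug), pred)  # the two should agree
```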
arXiv Detail & Related papers (2022-04-12T14:59:48Z)
- Time Series Forecasting with Ensembled Stochastic Differential Equations Driven by Lévy Noise [2.3076895420652965]
We use a collection of SDEs equipped with neural networks to predict the long-term trend of noisy time series.
Our first contribution is to use the phase-space reconstruction method to extract the intrinsic dimension of the time series data.
Second, we explore SDEs driven by α-stable Lévy motion to model the time series data and solve the problem through neural network approximation.
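A minimal NumPy/SciPy sketch of an Euler-type scheme driven by α-stable increments follows; the drift/diffusion functions and step sizes are illustrative assumptions, not the paper's ensembled neural-network SDEs.

```python
import numpy as np
from scipy.stats import levy_stable

def euler_levy_path(drift, diffusion, x0, alpha=1.7, dt=1e-2, n_steps=500, seed=0):
    """Euler-type simulation of dX_t = drift(X_t) dt + diffusion(X_t) dL_t,
    where L_t is an alpha-stable Levy motion. Increments of such a process
    over a step dt scale like dt**(1/alpha), unlike Brownian sqrt(dt)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    jumps = levy_stable.rvs(alpha, beta=0.0, size=n_steps, random_state=rng)
    for i in range(n_steps):
        x[i + 1] = (x[i] + drift(x[i]) * dt
                    + diffusion(x[i]) * dt ** (1 / alpha) * jumps[i])
    return x

# usage sketch: mean-reverting dynamics driven by heavy-tailed noise
# path = euler_levy_path(lambda x: -0.5 * x, lambda x: 0.3, x0=1.0)
```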
arXiv Detail & Related papers (2021-11-25T16:49:01Z)
- Deep Autoregressive Models with Spectral Attention [74.08846528440024]
We propose a forecasting architecture that combines deep autoregressive models with a Spectral Attention (SA) module.
By characterizing, in the spectral domain, the embedding of the time series as occurrences of a random process, our method can identify global trends and seasonality patterns.
Two spectral attention models, global and local to the time series, integrate this information into the forecast and perform spectral filtering to remove noise from the time series.
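A toy version of spectral filtering with learned frequency weights might look as follows; it is a sketch of the general idea, not the paper's SA module.

```python
import torch
import torch.nn as nn

class SpectralFilter(nn.Module):
    """Toy spectral-attention-style filter: move the series to the frequency
    domain, re-weight components with learned softmax attention, transform
    back. Acts as a learned global filter over trend/seasonality bands."""
    def __init__(self, seq_len):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(seq_len // 2 + 1))

    def forward(self, x):                       # x: (batch, seq_len)
        spec = torch.fft.rfft(x, dim=-1)        # complex one-sided spectrum
        weights = torch.softmax(self.logits, dim=-1)
        return torch.fft.irfft(spec * weights, n=x.size(-1), dim=-1)
```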
arXiv Detail & Related papers (2021-07-13T11:08:47Z)
- Model-Attentive Ensemble Learning for Sequence Modeling [86.4785354333566]
We present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts which leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions.
We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
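A sketch of attention-gated expert mixing, with assumed expert and summary-vector interfaces, could look like this:

```python
import torch
import torch.nn as nn

class AttentiveMixture(nn.Module):
    """Sketch of a model-attentive ensemble: an attention-style gate scores
    each expert against a summary of the input sequence and mixes the
    expert predictions with the resulting softmax weights."""
    def __init__(self, experts, dim):
        super().__init__()
        self.experts = nn.ModuleList(experts)   # each maps (B, T, F) -> (B, 1)
        self.keys = nn.Parameter(torch.randn(len(experts), dim))
        self.query = nn.Linear(dim, dim)

    def forward(self, x, summary):              # summary: (batch, dim) context
        scores = self.query(summary) @ self.keys.T           # (batch, n_experts)
        gates = torch.softmax(scores / self.keys.size(1) ** 0.5, dim=-1)
        preds = torch.stack([e(x) for e in self.experts], dim=1)  # (B, n, 1)
        return (gates.unsqueeze(-1) * preds).sum(dim=1)      # weighted mixture
```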
arXiv Detail & Related papers (2021-02-23T05:23:35Z)
- Difference Attention Based Error Correction LSTM Model for Time Series Prediction [3.7990471017645855]
We propose a novel model for time series prediction in which a difference-attention LSTM model and an error-correction LSTM model are employed and combined in a cascade.
With additional difference features and a new principled learning framework, our model improves prediction accuracy on time series.
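A rough sketch of the cascade idea, with illustrative names and a simplified residual-correction stage, follows:

```python
import torch
import torch.nn as nn

class CascadeForecaster(nn.Module):
    """Sketch of the cascade idea: a first LSTM forecasts from value plus
    first-difference features (emphasizing local changes), and a second LSTM
    predicts the first model's residual errors; the final forecast adds the
    correction. Names and details are illustrative, not the paper's design."""
    def __init__(self, hidden=32):
        super().__init__()
        self.main = nn.LSTM(2, hidden, batch_first=True)     # value + diff
        self.corr = nn.LSTM(1, hidden, batch_first=True)     # past errors
        self.main_head = nn.Linear(hidden, 1)
        self.corr_head = nn.Linear(hidden, 1)

    def forward(self, x, past_errors):          # x, past_errors: (B, T, 1)
        diffs = torch.diff(x, dim=1, prepend=x[:, :1])       # difference feature
        h, _ = self.main(torch.cat([x, diffs], dim=-1))
        e, _ = self.corr(past_errors)
        return self.main_head(h[:, -1]) + self.corr_head(e[:, -1])
```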
arXiv Detail & Related papers (2020-03-30T16:48:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.