Synthetic Time Series Forecasting with Transformer Architectures: Extensive Simulation Benchmarks
- URL: http://arxiv.org/abs/2505.20048v1
- Date: Mon, 26 May 2025 14:34:05 GMT
- Title: Synthetic Time Series Forecasting with Transformer Architectures: Extensive Simulation Benchmarks
- Authors: Ali Forootani, Mohammad Khosravi
- Abstract summary: Time series forecasting plays a critical role in domains such as energy, finance, and healthcare. Three prominent Transformer forecasting architectures, Autoformer, Informer, and PatchTST, are each evaluated through three architectural variants. A Koopman-enhanced Transformer framework, Deep Koopformer, integrates operator-theoretic latent state modeling.
- Score: 1.03590082373586
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Time series forecasting plays a critical role in domains such as energy, finance, and healthcare, where accurate predictions inform decision-making under uncertainty. Although Transformer-based models have demonstrated success in sequential modeling, their adoption for time series remains limited by challenges such as noise sensitivity, long-range dependencies, and a lack of inductive bias for temporal structure. In this work, we present a unified and principled framework for benchmarking three prominent Transformer forecasting architectures (Autoformer, Informer, and PatchTST), each evaluated through three architectural variants: Minimal, Standard, and Full, representing increasing levels of complexity and modeling capacity. We conduct over 1,500 controlled experiments on a suite of ten synthetic signals, spanning five patch lengths and five forecast horizons under both clean and noisy conditions. Our analysis reveals consistent patterns across model families. To advance this landscape further, we introduce the Koopman-enhanced Transformer framework, Deep Koopformer, which integrates operator-theoretic latent state modeling to improve stability and interpretability. We demonstrate its efficacy on nonlinear and chaotic dynamical systems. Our results highlight the Koopman-based Transformer as a promising hybrid approach for robust, interpretable, and theoretically grounded time series forecasting in noisy and complex real-world conditions.
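To make the Koopman-enhanced design concrete, the sketch below shows one plausible reading of the idea in PyTorch: a window of history is encoded into a latent state, the state is advanced step by step by a learned linear Koopman operator, and each advanced state is decoded into a forecast value. All names, layer sizes, and the univariate setup are our own assumptions for illustration, not the authors' implementation.
```python
import torch
import torch.nn as nn

class KoopmanForecaster(nn.Module):
    """Minimal encoder -> linear Koopman operator -> decoder sketch.

    Hypothetical simplification of a Koopman-enhanced Transformer:
    temporal dynamics are advanced by a single linear map in latent
    space rather than by stacked attention layers.
    """

    def __init__(self, window: int, horizon: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 64), nn.GELU(),
                                     nn.Linear(64, latent_dim))
        # Linear Koopman operator K acting on the latent state.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)
        self.decoder = nn.Linear(latent_dim, 1)
        self.horizon = horizon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window) univariate history
        z = self.encoder(x)
        outputs = []
        for _ in range(self.horizon):
            z = self.K(z)                  # advance latent dynamics linearly
            outputs.append(self.decoder(z))
        return torch.cat(outputs, dim=-1)  # (batch, horizon)

model = KoopmanForecaster(window=96, horizon=24)
y_hat = model(torch.randn(8, 96))          # -> shape (8, 24)
```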
Related papers
- DeepKoopFormer: A Koopman Enhanced Transformer Based Architecture for Time Series Forecasting [0.9217021281095907]
Time series forecasting plays a vital role across scientific, industrial, and environmental domains. DeepKoopFormer is a principled forecasting framework that combines the representational power of Transformers with the theoretical rigor of Koopman operator theory. Our model features a modular encoder-propagator-decoder structure, where temporal dynamics are learned via a spectrally constrained, linear Koopman operator in a latent space.
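A "spectrally constrained" propagator can be approximated, for instance, by rescaling the Koopman matrix so its largest singular value stays at or below one, which keeps repeated application of the linear map from diverging. The helper below is a hypothetical sketch of that recipe, not the DeepKoopFormer code:
```python
import torch

def constrain_spectral_norm(K: torch.Tensor, max_norm: float = 1.0) -> torch.Tensor:
    """Rescale K so that ||K||_2 <= max_norm (largest singular value).

    Keeping the spectral norm at or below 1 makes the iterated map
    z_{t+1} = K z_t non-expansive, a common recipe for stable linear
    latent dynamics.
    """
    sigma_max = torch.linalg.matrix_norm(K, ord=2)
    scale = torch.clamp(max_norm / (sigma_max + 1e-8), max=1.0)
    return K * scale

K = torch.randn(32, 32)
K_stable = constrain_spectral_norm(K)
assert torch.linalg.matrix_norm(K_stable, ord=2) <= 1.0 + 1e-5
```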
arXiv Detail & Related papers (2025-08-04T17:05:55Z)
- Triple Attention Transformer Architecture for Time-Dependent Concrete Creep Prediction [0.0]
This paper presents a novel Triple Attention Transformer Architecture for predicting time-dependent concrete creep. By transforming concrete creep prediction into an autoregressive sequence modeling task similar to language processing, our architecture leverages the transformer's self-attention mechanisms. The architecture achieves exceptional performance with mean absolute percentage error of 1.63% and R² values of 0.999 across all datasets.
arXiv Detail & Related papers (2025-05-28T22:30:35Z)
- Powerformer: A Transformer with Weighted Causal Attention for Time-series Forecasting [50.298817606660826]
We introduce Powerformer, a novel Transformer variant that replaces noncausal attention weights with causal weights that are reweighted according to a smooth heavy-tailed decay. Our empirical results demonstrate that Powerformer achieves state-of-the-art accuracy on public time-series benchmarks. Our analyses show that the model's locality bias is amplified during training, demonstrating an interplay between time-series data and power-law-based attention.
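A minimal sketch of weighted causal attention under our own reading of this summary: each softmax weight is multiplied by a smooth power-law factor (1 + lag)^(-alpha), and future positions are masked out. The function name and the exact decay form are assumptions, not Powerformer's implementation:
```python
import math
import torch

def power_law_causal_attention(q, k, v, alpha: float = 1.0):
    """Causal attention with a smooth heavy-tailed (power-law) decay.

    Adds a bias of -alpha * log(1 + lag) to each score, equivalent to
    multiplying the softmax weight by (1 + lag)^(-alpha), lag = i - j;
    positions with j > i (the future) are masked out entirely.
    """
    L, d = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)      # (..., L, L)
    i = torch.arange(L).unsqueeze(1)                     # query index
    j = torch.arange(L).unsqueeze(0)                     # key index
    lag = (i - j).clamp(min=0).float()
    scores = scores - alpha * torch.log1p(lag)           # power-law decay
    scores = scores.masked_fill(j > i, float("-inf"))    # causal mask
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 16, 8)                        # (batch, L, d)
out = power_law_causal_attention(q, k, v, alpha=0.5)     # (2, 16, 8)
```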
arXiv Detail & Related papers (2025-02-10T04:42:11Z) - Ister: Inverted Seasonal-Trend Decomposition Transformer for Explainable Multivariate Time Series Forecasting [10.32586981170693]
We propose the Inverted Seasonal-Trend Decomposition Transformer (Ister) and introduce a novel Dot-attention mechanism that improves interpretability, computational efficiency, and predictive accuracy. Ister enables intuitive visualization of component contributions, shedding light on the model's decision process and enhancing transparency in prediction results.
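For context, the seasonal-trend split that such decomposition Transformers build on can be as simple as a moving-average trend plus a residual seasonal component; the sketch below is that textbook operation, not Ister's actual module:
```python
import torch
import torch.nn.functional as F

def seasonal_trend_decompose(x: torch.Tensor, kernel: int = 25):
    """Split a series into trend (moving average) and seasonal residual.

    Generic decomposition in the spirit of seasonal-trend forecasting
    models. x: (batch, length); kernel should be odd.
    """
    pad = kernel // 2
    xp = F.pad(x.unsqueeze(1), (pad, pad), mode="replicate")
    trend = F.avg_pool1d(xp, kernel_size=kernel, stride=1).squeeze(1)
    seasonal = x - trend
    return seasonal, trend

x = torch.randn(4, 96)
seasonal, trend = seasonal_trend_decompose(x)
assert torch.allclose(seasonal + trend, x)   # exact by construction
```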
arXiv Detail & Related papers (2024-12-25T06:37:19Z) - A Comparative Study of Pruning Methods in Transformer-based Time Series Forecasting [0.07916635054977067]
Pruning is an established approach to reducing neural network parameter count and saving compute. We study the effects of these pruning strategies on model predictive performance and on computational aspects such as model size, operations, and inference time. We demonstrate that even with corresponding hardware and software support, structured pruning is unable to provide significant time savings.
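The distinction the paper probes can be reproduced with PyTorch's built-in pruning utilities: unstructured pruning zeroes individual weights (a sparse mask that dense hardware rarely exploits), while structured pruning removes whole rows or channels. A minimal sketch on a toy layer:
```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a Transformer feed-forward layer.
layer = nn.Linear(512, 2048)

# Unstructured: zero the 30% smallest-magnitude weights. The result is
# a sparse mask, which usually needs dedicated sparse kernels to yield
# real speedups.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Structured: remove 25% of entire output rows (neurons), a pattern
# dense hardware can exploit -- though, as the paper reports, the
# wall-clock savings may still be disappointing.
layer2 = nn.Linear(512, 2048)
prune.ln_structured(layer2, name="weight", amount=0.25, n=2, dim=0)

# Fold the masks into the weights to make pruning permanent.
prune.remove(layer, "weight")
prune.remove(layer2, "weight")
```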
arXiv Detail & Related papers (2024-12-17T13:07:31Z) - Two Steps Forward and One Behind: Rethinking Time Series Forecasting
with Deep Learning [7.967995669387532]
The Transformer is a highly successful deep learning model that has revolutionised the world of artificial neural networks.
We investigate the effectiveness of Transformer-based models applied to the domain of time series forecasting.
We propose a set of alternative models that are better performing and significantly less complex.
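The summary does not spell out the simpler alternatives, but a single linear map from input window to forecast horizon illustrates the scale of model such claims usually contemplate; this baseline is our illustrative stand-in, not the paper's architecture:
```python
import torch
import torch.nn as nn

class LinearForecaster(nn.Module):
    """Deliberately minimal baseline: one linear map from the input
    window straight to the forecast horizon."""

    def __init__(self, window: int, horizon: int):
        super().__init__()
        self.proj = nn.Linear(window, horizon)

    def forward(self, x):           # x: (batch, window)
        return self.proj(x)         # (batch, horizon)

model = LinearForecaster(window=96, horizon=24)
print(model(torch.randn(8, 96)).shape)   # torch.Size([8, 24])
```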
arXiv Detail & Related papers (2023-04-10T12:47:42Z)
- Continuous Spatiotemporal Transformers [2.485182034310304]
We present the Continuous Spatiotemporal Transformer (CST), a new transformer architecture designed for modeling continuous systems.
This new framework guarantees a continuous representation and output via optimization in Sobolev space.
We benchmark CST against traditional transformers as well as other smooth spatiotemporal dynamics modeling methods and achieve superior performance in a number of tasks on synthetic and real systems.
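One possible reading of "optimization in Sobolev space" is a training loss that matches derivatives as well as values; the finite-difference penalty below is our own guess at such an objective, not the CST loss:
```python
import torch

def sobolev_like_loss(pred: torch.Tensor, target: torch.Tensor, lam: float = 0.1):
    """MSE on values plus MSE on first differences (a crude H^1-style penalty).

    Matching finite-difference derivatives encourages smooth outputs,
    loosely mimicking optimization in a first-order Sobolev norm.
    """
    value_term = torch.mean((pred - target) ** 2)
    dpred = pred[..., 1:] - pred[..., :-1]
    dtarget = target[..., 1:] - target[..., :-1]
    deriv_term = torch.mean((dpred - dtarget) ** 2)
    return value_term + lam * deriv_term

loss = sobolev_like_loss(torch.randn(4, 50), torch.randn(4, 50))
```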
arXiv Detail & Related papers (2023-01-31T00:06:56Z)
- Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity, though their self-attention mechanism is computationally expensive.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z)
- Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting [68.86835407617778]
Autoformer is a novel decomposition architecture with an Auto-Correlation mechanism.
In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a relative improvement on six benchmarks.
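The Auto-Correlation mechanism replaces point-wise attention with dependencies discovered at the series' dominant lags. A simplified flavor of the lag-selection step, using the FFT-based Wiener-Khinchin identity (the real layer also aggregates rolled sub-series, which is omitted here):
```python
import math
import torch

def autocorrelation_delays(x: torch.Tensor, top_k: int = 3):
    """Find the top-k lags by autocorrelation, computed via FFT.

    Wiener-Khinchin: the circular autocorrelation equals the inverse
    FFT of the power spectrum. Returns indices of the strongest lags.
    """
    L = x.shape[-1]
    spec = torch.fft.rfft(x, dim=-1)
    acf = torch.fft.irfft(spec * torch.conj(spec), n=L, dim=-1)
    acf[..., 0] = float("-inf")          # ignore the trivial zero lag
    return torch.topk(acf, top_k, dim=-1).indices

x = torch.sin(torch.arange(96).float() * 2 * math.pi / 24)  # period 24
print(autocorrelation_delays(x))          # lags at multiples of 24
```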
arXiv Detail & Related papers (2021-06-24T13:43:43Z)
- Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
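At its core, this family of methods fits a linear operator to snapshot pairs; a minimal sketch of that building block (plain dynamic mode decomposition on a toy rotation, with hypothetical names):
```python
import numpy as np

def fit_linear_dynamics(X: np.ndarray):
    """Least-squares fit of x_{t+1} ~= A x_t (the DMD building block).

    X: (state_dim, T) snapshot matrix. Returns A and its eigenvalues,
    whose magnitudes and phases expose growth rates and oscillation
    frequencies -- the interpretability the summary alludes to.
    """
    X0, X1 = X[:, :-1], X[:, 1:]
    A = X1 @ np.linalg.pinv(X0)          # minimizes ||X1 - A X0||_F
    return A, np.linalg.eigvals(A)

t = np.linspace(0, 10, 200)
X = np.vstack([np.sin(t), np.cos(t)])    # a pure rotation
A, eig = fit_linear_dynamics(X)
print(np.abs(eig))                        # ~1.0: neutrally stable modes
```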
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the phase space properties.
Our approach is either as competitive as or better than most state-of-the-art strategies.
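The explicit counterpart of what the network learns implicitly is the classical time-delay (Takens) embedding; a small sketch, with parameter values chosen arbitrarily:
```python
import numpy as np

def delay_embed(x: np.ndarray, dim: int = 3, tau: int = 5) -> np.ndarray:
    """Takens-style time-delay embedding of a scalar series.

    Returns an array of shape (N - (dim-1)*tau, dim) whose rows are
    [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}] -- the classical explicit
    phase-space reconstruction that the cited network learns implicitly.
    """
    N = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + N] for i in range(dim)], axis=1)

x = np.sin(np.linspace(0, 20 * np.pi, 1000))
points = delay_embed(x, dim=2, tau=25)    # traces out a closed loop
print(points.shape)                        # (975, 2)
```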
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
- Consistency Guided Scene Flow Estimation [159.24395181068218]
CGSF is a self-supervised framework for the joint reconstruction of 3D scene structure and motion from stereo video.
We show that the proposed model can reliably predict disparity and scene flow in challenging imagery.
It achieves better generalization than the state-of-the-art, and adapts quickly and robustly to unseen domains.
arXiv Detail & Related papers (2020-06-19T17:28:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.