Generative Models for Long Time Series: Approximately Equivariant Recurrent Network Structures for an Adjusted Training Scheme
- URL: http://arxiv.org/abs/2505.05020v1
- Date: Thu, 08 May 2025 07:52:37 GMT
- Title: Generative Models for Long Time Series: Approximately Equivariant Recurrent Network Structures for an Adjusted Training Scheme
- Authors: Ruwen Fulek, Markus Lange-Hegermann
- Abstract summary: We present a simple yet effective generative model for time series data based on a Variational Autoencoder (VAE) with recurrent layers. Our method introduces an adapted training scheme that progressively increases the sequence length. By leveraging the recurrent architecture, the model maintains a constant number of parameters regardless of sequence length.
- Score: 4.327763441385371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a simple yet effective generative model for time series data based on a Variational Autoencoder (VAE) with recurrent layers, referred to as the Recurrent Variational Autoencoder with Subsequent Training (RVAE-ST). Our method introduces an adapted training scheme that progressively increases the sequence length, addressing the challenge recurrent layers typically face when modeling long sequences. By leveraging the recurrent architecture, the model maintains a constant number of parameters regardless of sequence length. This design encourages approximate time-shift equivariance and enables efficient modeling of long-range temporal dependencies. Rather than introducing a fundamentally new architecture, we show that a carefully composed combination of known components can match or outperform state-of-the-art generative models on several benchmark datasets. Our model performs particularly well on time series that exhibit quasi-periodic structure, while remaining competitive on datasets with more irregular or partially non-stationary behavior. We evaluate its performance using ELBO, Fréchet Distance, discriminative scores, and visualizations of the learned embeddings.
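The listing carries no code, so here is a minimal, hypothetical PyTorch sketch of the subsequent-training idea: a recurrent VAE whose parameter count is fixed by its layer sizes, trained on progressively longer crops of the same series. All class names, stage lengths, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RecurrentVAE(nn.Module):
    # Parameter count depends only on feature/hidden/latent sizes,
    # never on the length of the sequences fed in.
    def __init__(self, n_features=1, hidden=64, latent=16):
        super().__init__()
        self.enc = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.dec = nn.LSTM(latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, T, n_features)
        h, _ = self.enc(x)                       # per-step hidden states
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        d, _ = self.dec(z)
        return self.out(d), mu, logvar

def neg_elbo(x, x_hat, mu, logvar):
    rec = ((x - x_hat) ** 2).sum(dim=(1, 2)).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=(1, 2)).mean()
    return rec + kl

model = RecurrentVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(32, 1024, 1)                  # toy stand-in for real series

# Subsequent training: reuse the same weights on ever longer windows.
for stage_len in (64, 128, 256, 512, 1024):      # illustrative schedule
    for _ in range(10):                          # steps per stage (illustrative)
        x = data[:, :stage_len]                  # crop to the current stage length
        x_hat, mu, logvar = model(x)
        loss = neg_elbo(x, x_hat, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because only the crop length changes between stages, the same weights are reused throughout, which is what keeps the parameter count constant regardless of sequence length.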
Related papers
- Tailored Architectures for Time Series Forecasting: Evaluating Deep Learning Models on Gaussian Process-Generated Data [0.5573267589690007]
This research aims to uncover clear connections between time series characteristics and particular models. We present TimeFlex, a new model that incorporates a modular architecture tailored to handle diverse temporal dynamics. This model is compared to current state-of-the-art models, offering a deeper understanding of how models perform under varied time series conditions.
arXiv Detail & Related papers (2025-06-10T16:46:02Z)
- Time Tracker: Mixture-of-Experts-Enhanced Foundation Time Series Forecasting Model with Decoupled Training Pipelines [5.543238821368548]
Time series often exhibit significant diversity in their temporal patterns across different time spans and domains. Time Tracker achieves state-of-the-art performance in prediction accuracy, model generalization, and adaptability.
arXiv Detail & Related papers (2025-05-21T06:18:41Z)
- AverageTime: Enhance Long-Term Time Series Forecasting with Simple Averaging [6.125620036017928]
Long-term time series forecasting focuses on leveraging historical data to predict future trends. The core challenge lies in effectively modeling dependencies both within sequences and across channels. Our research proposes a new approach for capturing sequence and channel dependencies: AverageTime.
arXiv Detail & Related papers (2024-12-30T05:56:25Z)
- A Temporal Linear Network for Time Series Forecasting [0.0]
We introduce the Temporal Linear Net (TLN), which extends the capabilities of linear models while maintaining interpretability and computational efficiency.
Our approach is a variant of TSMixer that maintains strict linearity throughout its architecture.
A key innovation of TLN is its ability to compute an equivalent linear model, offering a level of interpretability not found in more complex architectures such as TSMixer.
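The collapse of a strictly linear stack into a single equivalent map can be verified directly; the sketch below is an assumed illustration of that property, not the TLN code itself.

```python
import torch
import torch.nn as nn

# A strictly linear stack (no activations) composes to one matrix,
# so the fitted model can be read off directly.
net = nn.Sequential(
    nn.Linear(96, 64, bias=False),   # 96-step history -> hidden
    nn.Linear(64, 24, bias=False),   # hidden -> 24-step forecast
)

with torch.no_grad():
    W_equiv = net[1].weight @ net[0].weight   # (24, 96) equivalent linear model

x = torch.randn(8, 96)
assert torch.allclose(net(x), x @ W_equiv.T, atol=1e-5)
```

Each row of W_equiv shows exactly how every historical step is weighted for one forecast horizon, which is the interpretability argument made above.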
arXiv Detail & Related papers (2024-10-28T18:51:19Z)
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose UniTST, a transformer-based model with a unified attention mechanism over flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
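A hedged sketch of what unified attention over flattened patch tokens could look like: each variate is cut into patches, the patches of all variates are flattened into one token sequence, and a single attention layer can then relate any patch to any other, capturing inter- and intra-series dependencies at once. All sizes here are assumptions.

```python
import torch
import torch.nn as nn

B, C, T, P, D = 8, 7, 96, 16, 64             # batch, variates, length, patch size, model dim
x = torch.randn(B, C, T)

patches = x.unfold(2, P, P)                  # (B, C, T//P, P): non-overlapping patches
tokens = nn.Linear(P, D)(patches)            # embed each patch
tokens = tokens.reshape(B, C * (T // P), D)  # flatten variates x patches into one sequence

attn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
out, _ = attn(tokens, tokens, tokens)        # one attention over all patch tokens
```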
arXiv Detail & Related papers (2024-06-07T14:39:28Z)
- Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting [46.63798583414426]
Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis.
Our study demonstrates, through both analytical and empirical evidence, that decomposition is key to containing excessive model inflation.
Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks.
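The blurb does not specify the decomposition, so the following assumes the common moving-average trend/seasonal split used by many LTSF models, with a small linear head per component; this is a generic illustration, not the paper's model.

```python
import torch
import torch.nn.functional as F

def decompose(x, kernel=25):
    # Moving-average trend (replicate-padded to keep length); residual = seasonal part.
    pad = kernel // 2
    xp = F.pad(x, (pad, pad), mode='replicate')
    trend = xp.unfold(-1, kernel, 1).mean(-1)
    return trend, x - trend

x = torch.randn(4, 1, 336)                   # (batch, channel, length), sizes illustrative
trend, seasonal = decompose(x)
assert torch.allclose(trend + seasonal, x, atol=1e-6)

# One lightweight head per component keeps the model small ("parsimony"):
head_t = torch.nn.Linear(336, 96)
head_s = torch.nn.Linear(336, 96)
forecast = head_t(trend) + head_s(seasonal)  # (4, 1, 96)
```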
arXiv Detail & Related papers (2024-01-22T13:15:40Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model that improves classification capacity for multivariate time series.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- Gated Recurrent Neural Networks with Weighted Time-Delay Feedback [59.125047512495456]
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism.
We show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures.
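A minimal sketch of the stated mechanism: a GRU cell whose recurrent input also receives a weighted copy of the hidden state from tau steps back. The exact gating of the actual $\tau$-GRU is not given in the abstract, so this wiring is an assumption.

```python
import torch
import torch.nn as nn

class DelayFeedbackGRU(nn.Module):
    def __init__(self, n_in, hidden, tau=5):
        super().__init__()
        self.cell = nn.GRUCell(n_in, hidden)
        self.tau = tau
        self.w = nn.Parameter(torch.tensor(0.1))   # learnable delay weight

    def forward(self, x):                          # x: (batch, T, n_in)
        B, T, _ = x.shape
        h = x.new_zeros(B, self.cell.hidden_size)
        history, outs = [], []
        for t in range(T):
            delayed = history[t - self.tau] if t >= self.tau else torch.zeros_like(h)
            h = self.cell(x[:, t], h + self.w * delayed)  # weighted time-delay feedback
            history.append(h)
            outs.append(h)
        return torch.stack(outs, dim=1)            # (batch, T, hidden)

y = DelayFeedbackGRU(n_in=3, hidden=32)(torch.randn(8, 20, 3))
```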
arXiv Detail & Related papers (2022-12-01T02:26:34Z)
- DCSF: Deep Convolutional Set Functions for Classification of Asynchronous Time Series [5.339109578928972]
An asynchronous time series is one whose channels are observed asynchronously and independently of one another.
This paper proposes a novel framework for the asynchronous time series classification task that is highly scalable and memory-efficient.
We explore convolutional neural networks, which are well researched for the closely related problem of classifying regularly sampled and fully observed time series.
arXiv Detail & Related papers (2022-08-24T08:47:36Z)
- Deep Explicit Duration Switching Models for Time Series [84.33678003781908]
We propose a flexible model that is capable of identifying both state- and time-dependent switching dynamics.
State-dependent switching is enabled by a recurrent state-to-switch connection.
An explicit duration count variable is used to improve the time-dependent switching behavior.
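The two mechanisms can be made concrete with a toy generative loop: the switch distribution depends on a recurrent state (the state-to-switch connection), and an explicit duration counter blocks switching until a sampled dwell time expires. Everything below is a hypothetical illustration, not the paper's model.

```python
import torch

K, T, H = 3, 200, 8                       # regimes, series length, state dim
W = torch.randn(K, H) * 0.5               # hypothetical state-to-switch weights
means = torch.tensor([-2.0, 0.0, 2.0])    # per-regime emission means

h = torch.zeros(H)                        # recurrent state
z, remaining = 0, 0                       # current regime, remaining dwell time
xs = []
for t in range(T):
    if remaining == 0:                    # switching allowed only when the count hits zero
        logits = W @ h                    # state-dependent switch distribution
        z = torch.distributions.Categorical(logits=logits).sample().item()
        remaining = int(torch.randint(5, 20, ()))   # explicit duration count
    remaining -= 1
    x = means[z] + 0.3 * torch.randn(())  # emit from the current regime
    h = torch.tanh(0.9 * h + x)           # toy recurrent state update
    xs.append(x)
series = torch.stack(xs)                  # (T,) piecewise-regime sample
```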
arXiv Detail & Related papers (2021-10-26T17:35:21Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes the mean and variance for each timestamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
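A sketch of the per-timestamp parameterization: a recurrent network emits a mean and a variance for every step, and a penalty on consecutive means stands in for the smoothness-inducing prior; the exact penalty form is an assumption.

```python
import torch
import torch.nn as nn

class PerStepGaussian(nn.Module):
    # Emits a mean and a (log-)variance for every timestamp.
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, n_features)
        self.logvar = nn.Linear(hidden, n_features)

    def forward(self, x):                 # x: (batch, T, n_features)
        h, _ = self.rnn(x)
        return self.mu(h), self.logvar(h)

net = PerStepGaussian()
x = torch.randn(16, 100, 1)
mu, logvar = net(x)

nll = 0.5 * (logvar + (x - mu) ** 2 / logvar.exp()).sum(dim=(1, 2)).mean()
smooth = ((mu[:, 1:] - mu[:, :-1]) ** 2).sum(dim=(1, 2)).mean()   # assumed smoothness term
loss = nll + 0.1 * smooth

# Per-step anomaly score: low likelihood under the per-timestamp Gaussian.
score = 0.5 * (logvar + (x - mu) ** 2 / logvar.exp()).squeeze(-1)  # (16, 100)
```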
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in video sequences.
This is accomplished through a novel tensor-train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance across a wide range of applications and datasets.
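The abstract only names the mechanism, so here is a bare illustration of a tensor-train-style contraction: small cores fold a window of per-step features into one low-rank summary instead of materializing a full joint weight tensor over all lags. Sizes and wiring are assumptions.

```python
import torch

K, d, r = 3, 16, 4                           # window of past steps, feature dim, TT rank
cores = [torch.randn(r, d, r) * 0.1 for _ in range(K)]
feats = [torch.randn(d) for _ in range(K)]   # stand-ins for conv features at t-1..t-K

state = torch.ones(r)                        # rank-r carrier threaded through the chain
for G, x in zip(cores, feats):
    # Contract one core with one step's features:
    # state_a <- sum_{b,i} state_b * G[b,i,a] * x[i]
    state = torch.einsum('b,bia,i->a', state, G, x)
fused = state                                # (r,) joint temporal feature, cost O(K*d*r^2)
```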
arXiv Detail & Related papers (2020-02-21T05:00:01Z)