TimeMar: Multi-Scale Autoregressive Modeling for Unconditional Time Series Generation
- URL: http://arxiv.org/abs/2601.11184v1
- Date: Fri, 16 Jan 2026 11:00:05 GMT
- Title: TimeMar: Multi-Scale Autoregressive Modeling for Unconditional Time Series Generation
- Authors: Xiangyu Xu, Qingsong Zhong, Jilin Hu
- Abstract summary: We propose a structure-disentangled multiscale generation framework for time series. Our approach encodes sequences into discrete tokens at multiple temporal resolutions. We show that our approach produces higher-quality time series than existing methods.
- Score: 11.455232661227313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative modeling offers a promising solution to data scarcity and privacy challenges in time series analysis. However, the structural complexity of time series, characterized by multi-scale temporal patterns and heterogeneous components, remains insufficiently addressed. In this work, we propose a structure-disentangled multiscale generation framework for time series. Our approach encodes sequences into discrete tokens at multiple temporal resolutions and performs autoregressive generation in a coarse-to-fine manner, thereby preserving hierarchical dependencies. To tackle structural heterogeneity, we introduce a dual-path VQ-VAE that disentangles trend and seasonal components, enabling the learning of semantically consistent latent representations. Additionally, we present a guidance-based reconstruction strategy, where coarse seasonal signals are utilized as priors to guide the reconstruction of fine-grained seasonal patterns. Experiments on six datasets show that our approach produces higher-quality time series than existing methods. Notably, our model achieves strong performance with a significantly reduced parameter count and exhibits superior capability in generating high-quality long-term sequences. Our implementation is available at https://anonymous.4open.science/r/TimeMAR-BC5B.
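The abstract describes a dual-path VQ-VAE that disentangles trend and seasonal components before quantization. As a rough illustration of that idea only (not the authors' implementation; the decomposition window, codebook sizes, and 1-D quantization below are all illustrative assumptions), the sketch splits a series into a moving-average trend and a seasonal residual and quantizes each path against its own small codebook:

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(x, window=8):
    """Split a series into a trend (moving average) and a seasonal residual."""
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")
    return trend, x - trend

def quantize(z, codebook):
    """Map each value to its nearest codebook entry (toy 1-D vector quantization)."""
    idx = np.abs(z[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook[idx], idx

# Toy series: slow quadratic trend + sinusoidal seasonality + noise.
t = np.linspace(0, 4 * np.pi, 256)
x = 0.01 * t**2 + np.sin(t) + 0.1 * rng.standard_normal(t.size)

trend, seasonal = decompose(x)
trend_cb = np.linspace(trend.min(), trend.max(), 16)     # separate codebook per path
seas_cb = np.linspace(seasonal.min(), seasonal.max(), 16)
trend_q, trend_tokens = quantize(trend, trend_cb)
seas_q, seas_tokens = quantize(seasonal, seas_cb)

# Each path yields its own discrete token stream; summing the two
# quantized paths approximately reconstructs the original series.
recon = trend_q + seas_q
print("reconstruction MAE:", float(np.abs(recon - x).mean()))
```

Because trend and residual sum exactly to the input, the only reconstruction error here is quantization error on each path; the paper's coarse-to-fine autoregressive generation then operates over such token streams at multiple resolutions.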
Related papers
- Synthetic Time Series Generation via Complex Networks [39.146761527401424]
We present a framework for generating synthetic time series by leveraging complex network mappings. We investigate whether time series transformed into Quantile Graphs (QG) -- and then reconstructed via inverse mapping -- can produce synthetic data. Results indicate that our quantile graph-based methodology offers a competitive and interpretable alternative for synthetic time series generation.
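As a minimal sketch of the quantile-graph mechanics just described (the standard construction, not this paper's code; the bin count and random-walk inverse mapping are assumptions): values are binned into quantiles, transitions between consecutive points define a row-stochastic graph, and a random walk over that graph with per-bin uniform sampling serves as an inverse mapping back to a series:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantile_graph(x, q=10):
    """Map a series to a q-node transition matrix over its quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, q + 1))
    labels = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, q - 1)
    W = np.zeros((q, q))
    for a, b in zip(labels[:-1], labels[1:]):
        W[a, b] += 1.0
    row = W.sum(axis=1, keepdims=True)
    # Row-normalize; bins with no outgoing transitions fall back to uniform.
    W = np.where(row > 0, W / np.maximum(row, 1e-12), 1.0 / q)
    return W, edges, labels

def synthesize(W, edges, n, start=0):
    """Inverse mapping: random-walk the graph, emitting one value per visited bin."""
    state, out = start, []
    for _ in range(n):
        state = rng.choice(W.shape[0], p=W[state])
        out.append(rng.uniform(edges[state], edges[state + 1]))
    return np.array(out)

x = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
W, edges, _ = quantile_graph(x)
synthetic = synthesize(W, edges, 500)
print("synthetic range:", float(synthetic.min()), float(synthetic.max()))
```

The synthetic series inherits the original's marginal distribution (via the quantile bins) and its short-range transition statistics (via the graph), which is the interpretability argument the abstract alludes to.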
arXiv Detail & Related papers (2026-01-30T12:01:50Z) - TSGDiff: Rethinking Synthetic Time Series Generation from a Pure Graph Perspective [6.771711398105306]
Diffusion models have shown great promise in data generation, yet generating time series data remains challenging. We present TSGDiff, a novel framework that rethinks time series generation from a graph-based perspective. A graph neural network-based encoder-decoder architecture is employed to construct a latent space.
arXiv Detail & Related papers (2025-11-15T11:58:25Z) - TimeFlow: Towards Stochastic-Aware and Efficient Time Series Generation via Flow Matching Modeling [2.74279932215302]
Time series data has emerged as a critical research topic due to its broad utility in supporting downstream time series mining tasks. We propose TimeFlow, a novel flow matching framework that integrates an encoder-only architecture. Our model consistently outperforms strong baselines in generation quality, diversity, and efficiency.
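For readers unfamiliar with flow matching, here is a generic illustration of its training target (the standard straight-line formulation, not TimeFlow's specific design, whose details are not given in the summary): sample an interpolation between noise and data at a random time and regress the constant velocity `x1 - x0`:

```python
import numpy as np

rng = np.random.default_rng(3)

def flow_matching_batch(data, batch=32):
    """Build one flow-matching training batch: interpolate noise -> data
    along a straight line and expose the velocity field to regress."""
    x1 = data[rng.integers(0, len(data), batch)]   # real series endpoints
    x0 = rng.standard_normal(x1.shape)             # noise endpoints
    t = rng.uniform(size=(batch, 1))               # random times in [0, 1]
    xt = (1 - t) * x0 + t * x1                     # point on the straight path
    target_v = x1 - x0                             # velocity the model should predict
    return xt, t, target_v

# Toy dataset: 100 copies of a short sine series.
data = np.sin(np.linspace(0, 6, 64))[None, :].repeat(100, axis=0)
xt, t, v = flow_matching_batch(data)
print(xt.shape, t.shape, v.shape)
```

A model trained to predict `v` from `(xt, t)` can then generate new series by integrating the learned velocity field from noise at `t = 0` to data at `t = 1`.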
arXiv Detail & Related papers (2025-11-11T08:28:26Z) - Uniform Discrete Diffusion with Metric Path for Video Generation [103.86033350602908]
Continuous-space video generation has advanced rapidly, while discrete approaches lag behind due to error accumulation and long-duration inconsistency. We present Uniform pAth (URSA), a powerful framework that bridges the gap with continuous approaches for scalable video generation. URSA consistently outperforms existing discrete methods and achieves performance comparable to state-of-the-art continuous diffusion methods.
arXiv Detail & Related papers (2025-10-28T17:59:57Z) - Kairos: Towards Adaptive and Generalizable Time Series Foundation Models [27.076542021368056]
Time series foundation models (TSFMs) have emerged as a powerful paradigm for time series analysis. We propose Kairos, a flexible TSFM framework that integrates a dynamic patching tokenizer and an instance-adaptive positional embedding. Kairos achieves superior performance with far fewer parameters on two common zero-shot benchmarks.
arXiv Detail & Related papers (2025-09-30T06:02:26Z) - Time Series Generation Under Data Scarcity: A Unified Generative Modeling Approach [7.631288333466648]
We conduct the first large-scale study evaluating leading generative models in data-scarce settings. We propose a unified diffusion-based generative framework that can synthesize high-fidelity time series using just a few examples.
arXiv Detail & Related papers (2025-05-26T18:39:04Z) - Generative Models for Long Time Series: Approximately Equivariant Recurrent Network Structures for an Adjusted Training Scheme [4.327763441385371]
We present a simple yet effective generative model for time series data based on a Variational Autoencoder (VAE) with recurrent layers. Our method introduces an adapted training scheme that progressively increases the sequence length. By leveraging the recurrent architecture, the model maintains a constant number of parameters regardless of sequence length.
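The constant-parameter property mentioned above is easy to see with a toy recurrent cell (a hand-rolled sketch, not the paper's VAE; the hidden size and curriculum lengths are illustrative assumptions): the same weight matrices are reused at every time step, so lengthening the training sequences leaves the parameter count unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

# A single recurrent cell: its parameter count is fixed regardless of sequence length.
W_h = rng.standard_normal((8, 8)) * 0.1   # hidden-to-hidden weights
W_x = rng.standard_normal((8, 1)) * 0.1   # input-to-hidden weights

def encode(seq):
    """Run the recurrent cell over a sequence of any length with the same weights."""
    h = np.zeros(8)
    for x in seq:
        h = np.tanh(W_h @ h + W_x @ np.array([x]))
    return h

# Progressive curriculum: the model sees short windows first, then longer ones,
# while the parameter count stays constant throughout.
full = np.sin(np.linspace(0, 12, 512))
for length in (32, 64, 128, 256, 512):
    h = encode(full[:length])
    print(f"length={length:4d}  params={W_h.size + W_x.size}  |h|={np.linalg.norm(h):.3f}")
```

In an actual training run each curriculum stage would optimize the same weights on progressively longer windows; the sketch only demonstrates that the forward pass and parameter budget are length-independent.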
arXiv Detail & Related papers (2025-05-08T07:52:37Z) - MFRS: A Multi-Frequency Reference Series Approach to Scalable and Accurate Time-Series Forecasting [51.94256702463408]
Time series predictability is derived from periodic characteristics at different frequencies. We propose a novel time series forecasting method based on multi-frequency reference series correlation analysis. Experiments on major open and synthetic datasets show state-of-the-art performance.
arXiv Detail & Related papers (2025-03-11T11:40:14Z) - Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a causal Transformer for unified time series forecasting. Based on large-scale pre-training, Timer-XL achieves state-of-the-art zero-shot performance.
arXiv Detail & Related papers (2024-10-07T07:27:39Z) - Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective [63.60312929416228]
Attraos incorporates chaos theory into long-term time series forecasting.
We show that Attraos outperforms various LTSF methods on mainstream datasets and chaotic datasets with only one-twelfth of the parameters compared to PatchTST.
arXiv Detail & Related papers (2024-02-18T05:35:01Z) - Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting [46.63798583414426]
Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis.
Our study demonstrates, through both analytical and empirical evidence, that decomposition is key to containing excessive model inflation.
Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks.
arXiv Detail & Related papers (2024-01-22T13:15:40Z) - FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z) - Towards Generating Real-World Time Series Data [52.51620668470388]
We propose a novel generative framework for time series data generation - RTSGAN.
RTSGAN learns an encoder-decoder module which provides a mapping between a time series instance and a fixed-dimension latent vector.
To generate time series with missing values, we further equip RTSGAN with an observation embedding layer and a decide-and-generate decoder.
arXiv Detail & Related papers (2021-11-16T11:31:37Z) - Synergetic Learning of Heterogeneous Temporal Sequences for Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder is devised that combines advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte-Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.