DiM-TS: Bridge the Gap between Selective State Space Models and Time Series for Generative Modeling
- URL: http://arxiv.org/abs/2511.18312v1
- Date: Sun, 23 Nov 2025 06:48:03 GMT
- Title: DiM-TS: Bridge the Gap between Selective State Space Models and Time Series for Generative Modeling
- Authors: Zihao Yao, Jiankai Zuo, Yaying Zhang
- Abstract summary: Time series data plays a pivotal role in a wide variety of fields but faces challenges related to privacy concerns. We propose Lag Fusion Mamba and Permutation Scanning Mamba, which enhance the model's ability to discern significant patterns during the denoising process. We also introduce Diffusion Mamba for Time Series (DiM-TS), a high-quality time series generation model that better preserves temporal periodicity and inter-channel correlations.
- Score: 11.836475971106125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series data plays a pivotal role in a wide variety of fields but faces challenges related to privacy concerns. Recently, synthesizing data via diffusion models has been viewed as a promising solution. However, existing methods still struggle to capture long-range temporal dependencies and complex channel interrelations. In this research, we aim to utilize the sequence modeling capability of a State Space Model called Mamba and extend its applicability to time series data generation. We first analyze the core limitations of State Space Models, namely the lack of consideration for correlated temporal lags and channel permutation. Building upon this insight, we propose Lag Fusion Mamba and Permutation Scanning Mamba, which enhance the model's ability to discern significant patterns during the denoising process. Theoretical analysis reveals that both variants share a unified matrix multiplication framework with the original Mamba, offering a deeper understanding of our method. Finally, we integrate the two variants and introduce Diffusion Mamba for Time Series (DiM-TS), a high-quality time series generation model that better preserves temporal periodicity and inter-channel correlations. Comprehensive experiments on public datasets demonstrate the superiority of DiM-TS in generating realistic time series while preserving diverse properties of the data.
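The selective state-space recurrence that underlies Mamba, which the abstract builds on, can be illustrated with a minimal scan. The shapes, the diagonal-A parameterization, and the per-step (input-dependent) B, C, and delta below are illustrative assumptions for a single channel, not the paper's exact formulation:

```python
# Minimal sketch of a selective SSM scan (hypothetical parameterization):
#   h_t = exp(delta_t * A) * h_{t-1} + delta_t * B_t * x_t
#   y_t = C_t . h_t
import numpy as np

def selective_scan(x, A, B, C, delta):
    """Scan a length-T scalar input x with N hidden states.

    A: (N,) diagonal state matrix; B, C: (T, N) input-dependent
    projections; delta: (T,) input-dependent step sizes.
    """
    T, N = delta.shape[0], A.shape[0]
    h = np.zeros(N)
    ys = []
    for t in range(T):
        dA = np.exp(delta[t] * A)              # discretized transition
        h = dA * h + delta[t] * B[t] * x[t]    # selective state update
        ys.append(C[t] @ h)                    # input-dependent readout
    return np.array(ys)

# Toy usage: T=8 time steps, N=4 hidden states.
rng = np.random.default_rng(0)
T, N = 8, 4
x = rng.standard_normal(T)
A = -np.abs(rng.standard_normal(N))            # negative diagonal -> stable
B = rng.standard_normal((T, N))
C = rng.standard_normal((T, N))
delta = 0.1 * np.abs(rng.standard_normal(T))
y = selective_scan(x, A, B, C, delta)
print(y.shape)  # (8,)
```

Because B, C, and delta vary with the time step, the recurrence can gate which inputs are written into the state, which is the "selective" behavior the paper's lag- and permutation-aware variants extend.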
Related papers
- DiTS: Multimodal Diffusion Transformers Are Time Series Forecasters [50.43534351968113]
Existing generative time series models do not address the multi-dimensional properties of time series data well. Inspired by Multimodal Diffusion Transformers that integrate textual guidance into video generation, we propose Diffusion Transformers for Time Series (DiTS).
arXiv Detail & Related papers (2026-02-06T10:48:13Z) - TSkel-Mamba: Temporal Dynamic Modeling via State Space Model for Human Skeleton-based Action Recognition [59.99922360648663]
TSkel-Mamba is a hybrid Transformer-Mamba framework that effectively captures both spatial and temporal dynamics. The MTI module employs multi-scale Cycle operators to capture cross-channel temporal interactions, a critical factor in action recognition.
arXiv Detail & Related papers (2025-12-12T11:55:16Z) - Time-Correlated Video Bridge Matching [49.94768097995648]
Time-Correlated Video Bridge Matching (TCVBM) is a framework that extends Bridge Matching (BM) to time-correlated data sequences in the video domain. TCVBM achieves superior performance across multiple quantitative metrics, demonstrating enhanced generation quality and reconstruction fidelity.
arXiv Detail & Related papers (2025-10-14T12:35:30Z) - CHIME: Conditional Hallucination and Integrated Multi-scale Enhancement for Time Series Diffusion Model [6.893910847034251]
CHIME is a conditional hallucination and integrated multi-scale enhancement framework for time series diffusion models. It captures decomposed features of time series, achieving in-domain distribution alignment between generated and original samples. In addition, we introduce a feature hallucination module in the conditional denoising process, enabling temporal feature transfer across long time scales.
arXiv Detail & Related papers (2025-06-04T02:34:09Z) - Time Tracker: Mixture-of-Experts-Enhanced Foundation Time Series Forecasting Model with Decoupled Training Pipelines [5.543238821368548]
Time series often exhibit significant diversity in their temporal patterns across different time spans and domains. Time Tracker achieves state-of-the-art performance in prediction accuracy, model generalization, and adaptability.
arXiv Detail & Related papers (2025-05-21T06:18:41Z) - General Time-series Model for Universal Knowledge Representation of Multivariate Time-Series data [61.163542597764796]
We show that time series with different time granularities (or corresponding frequency resolutions) exhibit distinct joint distributions in the frequency domain. A novel Fourier knowledge attention mechanism is proposed to enable learning time-aware representations from both the temporal and frequency domains. An autoregressive blank infilling pre-training framework is incorporated into time series analysis for the first time, leading to a generative, task-agnostic pre-training strategy.
arXiv Detail & Related papers (2025-02-05T15:20:04Z) - Retrieval-Augmented Diffusion Models for Time Series Forecasting [19.251274915003265]
We propose a Retrieval-Augmented Time series Diffusion model (RATD).
RATD consists of two parts: an embedding-based retrieval process and a reference-guided diffusion model.
Our approach allows leveraging meaningful samples within the database to aid in sampling, thus maximizing the utilization of datasets.
arXiv Detail & Related papers (2024-10-24T13:14:39Z) - Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, using a single input/output projection layer while delegating the modeling of diverse time series patterns to the sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z) - TIMBA: Time series Imputation with Bi-directional Mamba Blocks and Diffusion models [0.0]
We propose replacing time-oriented Transformers with State-Space Models (SSMs).
We develop a model that integrates SSM, Graph Neural Networks, and node-oriented Transformers to achieve enhanced representations.
arXiv Detail & Related papers (2024-10-08T11:10:06Z) - TimeLDM: Latent Diffusion Model for Unconditional Time Series Generation [2.4454605633840143]
Time series generation is a crucial research topic in the area of decision-making systems.
Recent approaches focus on learning in the data space to model time series information.
We propose TimeLDM, a novel latent diffusion model for high-quality time series generation.
arXiv Detail & Related papers (2024-07-05T01:47:20Z) - PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the Perspective of Partial Differential Equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z) - Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series data are commonly recorded over short time periods, which leaves a large gap between deep models and the limited, noisy time series available.
We propose to address the time series forecasting problem with generative modeling, using a bidirectional variational auto-encoder equipped with diffusion, denoising, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.