TF-CoDiT: Conditional Time Series Synthesis with Diffusion Transformers for Treasury Futures
- URL: http://arxiv.org/abs/2601.11880v1
- Date: Sat, 17 Jan 2026 02:27:56 GMT
- Title: TF-CoDiT: Conditional Time Series Synthesis with Diffusion Transformers for Treasury Futures
- Authors: Yingxiao Zhang, Jiaxin Duan, Junfu Zhang, Ke Feng
- Abstract summary: Diffusion Transformers (DiT) have achieved milestones in synthesizing financial time-series data, such as stock prices and order flows. This work emphasizes the characteristics of treasury futures data, including its low volume, market dependencies, and the grouped correlations among multiple variables. We propose TF-CoDiT, the first DiT framework for language-controlled treasury futures synthesis.
- Score: 9.869634509510016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion Transformers (DiT) have achieved milestones in synthesizing financial time-series data, such as stock prices and order flows. However, their performance in synthesizing treasury futures data is still underexplored. This work emphasizes the characteristics of treasury futures data, including its low volume, market dependencies, and the grouped correlations among multiple variables. To overcome these challenges, we propose TF-CoDiT, the first DiT framework for language-controlled treasury futures synthesis. To facilitate low-data learning, TF-CoDiT adapts the standard DiT by transforming multi-channel 1-D time series into Discrete Wavelet Transform (DWT) coefficient matrices. A U-shaped VAE is proposed to encode cross-channel dependencies hierarchically into a latent variable and to bridge the latent and DWT spaces through decoding, thereby enabling latent diffusion generation. To derive prompts that cover essential conditions, we introduce the Financial Market Attribute Protocol (FinMAP), a multi-level description system that standardizes daily/periodical market dynamics by recognizing 17/23 economic indicators from 7/8 perspectives. In our experiments, we gather four types of treasury futures data covering the period from 2015 to 2025, and define data synthesis tasks with durations ranging from one week to four months. Extensive evaluations demonstrate that TF-CoDiT produces highly authentic data, with errors of at most 0.433 (MSE) and 0.453 (MAE) relative to the ground truth. Further studies demonstrate the robustness of TF-CoDiT across contracts and temporal horizons.
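The abstract's first preprocessing step, turning multi-channel 1-D series into DWT coefficient matrices, can be illustrated with a minimal sketch. The paper does not specify the wavelet family or decomposition depth, so a single-level Haar transform is assumed here; `haar_dwt_matrix` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def haar_dwt_matrix(series: np.ndarray) -> np.ndarray:
    """Single-level Haar DWT applied per channel.

    series: array of shape (channels, length), length even.
    Returns a (channels, length) coefficient matrix: the left half holds
    approximation (low-pass) coefficients, the right half detail (high-pass).
    """
    even, odd = series[:, ::2], series[:, 1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return np.concatenate([approx, detail], axis=1)

x = np.random.default_rng(0).standard_normal((4, 64))  # 4 channels, 64 steps
print(haar_dwt_matrix(x).shape)  # (4, 64)
```

The transform is invertible, so a decoder (here, the proposed U-shaped VAE) can map generated coefficients back to the time domain without information loss.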
Related papers
- Temporal Fusion Transformer for Multi-Horizon Probabilistic Forecasting of Weekly Retail Sales [5.023398151088689]
We present a novel study of weekly Walmart sales using a Temporal Fusion Transformer (TFT). The pipeline produces 1-5-week-ahead probabilistic forecasts via Quantile Loss. On a fixed 2012 hold-out dataset, TFT achieves an RMSE of $57.9k USD per store-week and an R² of 0.9875.
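The Quantile Loss this entry refers to is the standard pinball loss; a generic sketch (not the paper's exact implementation) shows how it penalizes over- and under-prediction asymmetrically per quantile level:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for quantile level q in (0, 1)."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.maximum(q * err, (q - 1) * err)))

# Over-predicting is expensive for low quantiles, cheap for high ones.
print(round(pinball_loss([10.0], [12.0], 0.1), 6))  # 1.8
print(round(pinball_loss([10.0], [12.0], 0.9), 6))  # 0.2
```

Training one model head per quantile level with this loss is what yields the probabilistic 1-5-week forecast bands.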
arXiv Detail & Related papers (2025-11-01T13:34:29Z) - Test time training enhances in-context learning of nonlinear functions [51.56484100374058]
Test-time training (TTT) enhances model performance by explicitly updating designated parameters prior to each prediction. We investigate the combination of TTT with in-context learning (ICL), where the model is given a few examples from the target distribution at inference time.
arXiv Detail & Related papers (2025-09-30T03:56:44Z) - Adaptive Temporal Fusion Transformers for Cryptocurrency Price Prediction [0.0]
This paper introduces an adaptive TFT modeling approach leveraging dynamic subseries lengths and pattern-based categorization to enhance short-term forecasting. Our results on ETH-USDT 10-minute data over a two-month test period demonstrate that our approach significantly outperforms baseline fixed-length TFT and LSTM models in prediction accuracy and simulated trading profitability.
arXiv Detail & Related papers (2025-09-06T20:04:46Z) - S²Q-VDiT: Accurate Quantized Video Diffusion Transformer with Salient Data and Sparse Token Distillation [55.35880044416441]
We propose S²Q-VDiT, a post-training quantization framework for video diffusion models (V-DMs). Under W4A6 quantization, S²Q-VDiT achieves lossless performance while delivering 3.9× model compression and 1.3× inference acceleration.
arXiv Detail & Related papers (2025-08-06T02:12:29Z) - CTBench: Cryptocurrency Time Series Generation Benchmark [11.576635693346486]
We introduce CTBench, the first comprehensive TSG benchmark tailored for the cryptocurrency domain. CTBench curates an open-source dataset from 452 tokens and evaluates TSG models across 13 metrics spanning 5 key dimensions. We benchmark eight representative models from five methodological families over four distinct market regimes, uncovering trade-offs between statistical fidelity and real-world profitability.
arXiv Detail & Related papers (2025-08-03T17:07:08Z) - Time Series Foundation Models for Multivariate Financial Time Series Forecasting [0.0]
Time Series Foundation Models (TSFMs) offer a promising solution through pretraining on diverse time series corpora. This study evaluates two TSFMs across three financial forecasting tasks: US 10-year Treasury yield changes, EUR/USD volatility, and equity spread prediction.
arXiv Detail & Related papers (2025-07-09T21:43:06Z) - Cross-Modal Temporal Fusion for Financial Market Forecasting [3.0756278306759635]
We introduce a transformer-based deep learning framework, Cross-Modal Temporal Fusion (CMTF), that fuses structured and unstructured financial data for improved market prediction. Experimental results using FTSE 100 stock data demonstrate that CMTF achieves superior performance in price direction classification compared to classical and deep learning baselines.
arXiv Detail & Related papers (2025-04-18T07:20:18Z) - Towards Temporal-Aware Multi-Modal Retrieval Augmented Generation in Finance [79.78247299859656]
FinTMMBench is the first comprehensive benchmark for evaluating temporal-aware multi-modal Retrieval-Augmented Generation systems in finance. Built from heterogeneous data of NASDAQ 100 companies, FinTMMBench offers three significant advantages.
arXiv Detail & Related papers (2025-03-07T07:13:59Z) - FinTSB: A Comprehensive and Practical Benchmark for Financial Time Series Forecasting [58.70072722290475]
Financial time series (FinTS) record the behavior of human-brain-augmented decision-making. FinTSB is a comprehensive and practical benchmark for financial time series forecasting.
arXiv Detail & Related papers (2025-02-26T05:19:16Z) - Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity because of the self-attention mechanism, despite its high computational cost.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z) - Gaussian process imputation of multiple financial series [71.08576457371433]
Multiple time series such as financial indicators, stock prices and exchange rates are strongly coupled due to their dependence on the latent state of the market.
We focus on learning the relationships among financial time series by modelling them through a multi-output Gaussian process.
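GP-based imputation can be sketched as filling a gap with the posterior mean of a zero-mean GP under an RBF kernel. This single-output toy (with an assumed synthetic series) ignores the multi-output structure the paper actually models across coupled financial series:

```python
import numpy as np

def gp_impute(t_obs, y_obs, t_miss, length_scale=1.0, noise=1e-4):
    """Posterior mean of a zero-mean GP with an RBF kernel, evaluated at gaps."""
    def rbf(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)
    K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
    return rbf(t_miss, t_obs) @ np.linalg.solve(K, y_obs)

t = np.linspace(0.0, 6.0, 30)
y = np.sin(t)                       # stand-in for a smooth financial indicator
keep = np.ones(30, dtype=bool)
keep[10:15] = False                 # simulate a run of missing observations
y_hat = gp_impute(t[keep], y[keep], t[~keep])
print(np.round(y_hat, 3))
```

In the multi-output setting, the scalar RBF kernel is replaced by a matrix-valued kernel so that observations of one series inform the imputation of another.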
arXiv Detail & Related papers (2020-02-11T19:18:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.