Time Series Foundation Models for Multivariate Financial Time Series Forecasting
- URL: http://arxiv.org/abs/2507.07296v1
- Date: Wed, 09 Jul 2025 21:43:06 GMT
- Title: Time Series Foundation Models for Multivariate Financial Time Series Forecasting
- Authors: Ben A. Marconi
- Abstract summary: Time Series Foundation Models (TSFMs) offer a promising solution through pretraining on diverse time series corpora. This study evaluates two TSFMs across three financial forecasting tasks: US 10-year Treasury yield changes, EUR/USD volatility, and equity spread prediction.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Financial time series forecasting presents significant challenges due to complex nonlinear relationships, temporal dependencies, variable interdependencies and limited data availability, particularly for tasks involving low-frequency data, newly listed instruments, or emerging market assets. Time Series Foundation Models (TSFMs) offer a promising solution through pretraining on diverse time series corpora followed by task-specific adaptation. This study evaluates two TSFMs (Tiny Time Mixers (TTM) and Chronos) across three financial forecasting tasks: US 10-year Treasury yield changes, EUR/USD volatility, and equity spread prediction. Results demonstrate that TTM exhibits strong transferability. When both the pretrained version of TTM and an untrained model with the same architecture were fine-tuned, the pretrained version achieved 25-50% better performance on limited data and 15-30% improvements even on lengthier datasets. Notably, TTM's zero-shot performance outperformed naive benchmarks in volatility forecasting and equity spread prediction, with the latter demonstrating that TSFMs can surpass traditional benchmark models without fine-tuning. The pretrained model consistently required 3-10 fewer years of data than the untrained model to achieve comparable performance levels, demonstrating significant sample-efficiency gains. However, while TTM outperformed naive baselines, traditional specialised models matched or exceeded its performance in two of three tasks, suggesting TSFMs prioritise breadth over task-specific optimisation. These findings indicate that TSFMs, though still nascent, offer substantial promise for financial forecasting, particularly in noisy, data-constrained tasks, but achieving competitive performance likely requires domain-specific pretraining and architectural refinements tailored to financial time series characteristics.
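To make the zero-shot-vs-naive-benchmark comparison concrete, the sketch below shows one way such an evaluation can be set up with the publicly available Chronos pipeline. It is an illustrative assumption, not the paper's exact protocol: the checkpoint name, the synthetic random-walk series, the one-step horizon, and the MAE metric are all placeholder choices.

```python
# Minimal sketch of a zero-shot TSFM forecast scored against a naive benchmark.
# Assumptions (not from the paper): the `chronos-forecasting` package, the
# "amazon/chronos-t5-small" checkpoint, a synthetic random-walk series standing
# in for a financial series, a 1-step horizon, and MAE as the error metric.
import numpy as np
import torch
from chronos import ChronosPipeline

rng = np.random.default_rng(0)
# Synthetic stand-in for a low-frequency financial series (e.g., a yield level).
series = 4.0 + np.cumsum(rng.normal(0.0, 0.05, size=512))

context_len, horizon = 256, 1
context = torch.tensor(series[:context_len], dtype=torch.float32)
actual = series[context_len:context_len + horizon]

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",   # illustrative checkpoint choice
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Zero-shot probabilistic forecast: tensor of shape [n_series, n_samples, horizon].
samples = pipeline.predict(context, prediction_length=horizon, num_samples=50)
zero_shot = samples.median(dim=1).values.squeeze(0).numpy()

# Naive (no-change) benchmark: repeat the last observed value over the horizon.
naive = np.full(horizon, series[context_len - 1])

def mae(pred, target):
    return float(np.mean(np.abs(np.asarray(pred).reshape(-1) - target)))

print(f"zero-shot MAE: {mae(zero_shot, actual):.4f}")
print(f"naive MAE:     {mae(naive, actual):.4f}")
```

The study's fine-tuning comparisons additionally contrast a pretrained checkpoint with a randomly initialised model of the same architecture; that part is omitted here since it depends on task-specific training code.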
Related papers
- Multi-Scale Finetuning for Encoder-based Time Series Foundation Models [56.503053716053]
Time series foundation models (TSFMs) demonstrate impressive zero-shot performance for time series forecasting.
We argue that conventional finetuning falls short of fully leveraging TSFMs' capabilities, often resulting in overfitting and suboptimal performance.
We propose Multi-Scale FineTuning (MSFT), a simple yet general framework that explicitly integrates multi-scale modeling into the finetuning process.
arXiv Detail & Related papers (2025-06-17T01:06:01Z) - Can Time-Series Foundation Models Perform Building Energy Management Tasks? [5.450531952940644]
Building energy management tasks require processing and learning from a variety of time-series data.
Existing solutions rely on bespoke task- and data-specific models to perform these tasks.
Inspired by the transformative success of Large Language Models (LLMs), Time-Series Foundation Models (TSFMs) have the potential to change this.
arXiv Detail & Related papers (2025-06-12T19:45:10Z) - Generalisation Bounds of Zero-Shot Economic Forecasting using Time Series Foundation Models [1.131401554081614]
This study investigates zero-shot forecasting capabilities of Time Series Foundation Models for macroeconomic indicators.
We back-tested three state-of-the-art TSFMs under data-scarce conditions and structural breaks.
Our findings suggest that, without any fine-tuning, TSFMs can match or exceed classical models during stable economic conditions.
arXiv Detail & Related papers (2025-05-30T03:10:46Z) - Less is More: Unlocking Specialization of Time Series Foundation Models via Structured Pruning [29.377178687865136]
Time Series Foundation Models pre-train vast numbers of parameters and achieve remarkable zero-shot forecasting performance.
Surprisingly, even after fine-tuning, TSFMs cannot consistently outperform smaller, specialized models trained on full-shot downstream data.
We propose a structured pruning method to regularize the subsequent fine-tuning process by focusing it on a more relevant and compact parameter space.
arXiv Detail & Related papers (2025-05-29T07:33:49Z) - DELPHYNE: A Pre-Trained Model for General and Financial Time Series [2.601248228220401]
Time-series data is valuable in financial applications, where it helps in detecting patterns, understanding market behavior, and making informed decisions based on historical data.
Recent advances in language modeling have led to the rise of time-series pre-trained models that are trained on vast collections of datasets and applied to diverse tasks across financial domains.
However, existing time-series pre-trained models have not shown performance boosts over simple finance benchmarks in either zero-shot or fine-tuning settings.
arXiv Detail & Related papers (2025-05-12T16:53:29Z) - FinTSB: A Comprehensive and Practical Benchmark for Financial Time Series Forecasting [58.70072722290475]
Financial time series (FinTS) record the behavior of human-brain-augmented decision-making.
FinTSB is a comprehensive and practical benchmark for financial time series forecasting.
arXiv Detail & Related papers (2025-02-26T05:19:16Z) - BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z) - Rating Multi-Modal Time-Series Forecasting Models (MM-TSFM) for Robustness Through a Causal Lens [10.103561529332184]
We focus on multi-modal time-series forecasting, where imprecision due to noisy or incorrect data can lead to erroneous predictions.
We introduce a rating methodology to assess the robustness of Multi-Modal Time-Series Forecasting Models.
arXiv Detail & Related papers (2024-06-12T17:39:16Z) - Mlinear: Rethink the Linear Model for Time-series Forecasting [9.841293660201261]
Mlinear is a simple yet effective method based mainly on linear layers.
We introduce a new loss function that significantly outperforms the widely used mean squared error (MSE) on multiple datasets.
Our method significantly outperforms PatchTST, with a ratio of 21:3 for 336-length input sequences and 29:10 for 512-length input sequences.
arXiv Detail & Related papers (2023-05-08T15:54:18Z) - Bilinear Input Normalization for Neural Networks in Financial Forecasting [101.89872650510074]
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
arXiv Detail & Related papers (2021-09-01T07:52:03Z) - Low-Rank Temporal Attention-Augmented Bilinear Network for financial time-series forecasting [93.73198973454944]
Deep learning models have led to significant performance improvements in many problems across different domains, including the prediction of financial time-series data.
The Temporal Attention-Augmented Bilinear network was recently proposed as an efficient and high-performing model for Limit Order Book time-series forecasting.
In this paper, we propose a low-rank tensor approximation of the model to further reduce the number of trainable parameters and increase its speed.
arXiv Detail & Related papers (2021-07-05T10:15:23Z) - Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences.