ChronoSteer: Bridging Large Language Model and Time Series Foundation Model via Synthetic Data
- URL: http://arxiv.org/abs/2505.10083v1
- Date: Thu, 15 May 2025 08:37:23 GMT
- Title: ChronoSteer: Bridging Large Language Model and Time Series Foundation Model via Synthetic Data
- Authors: Chengsen Wang, Qi Qi, Zhongwen Rao, Lujia Pan, Jingyu Wang, Jianxin Liao
- Abstract summary: We introduce ChronoSteer, a multimodal TSFM that can be steered through textual revision instructions. To mitigate the shortage of cross-modal instruction-series paired data, we devise a two-stage training strategy based on synthetic data. ChronoSteer achieves a 25.7% improvement in prediction accuracy compared to the unimodal backbone and a 22.5% gain over the previous state-of-the-art multimodal method.
- Score: 22.81326423408988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional forecasting methods rely on unimodal time series data, limiting their ability to exploit rich textual information. Recently, large language models (LLMs) and time series foundation models (TSFMs) have demonstrated powerful capabilities in textual reasoning and temporal modeling, respectively. Integrating the strengths of both into a multimodal model that jointly leverages temporal and textual information for future inference has emerged as a critical research challenge. To address the scarcity of event-series paired data, we propose a decoupled framework: an LLM is employed to transform textual events into revision instructions, which are then used to steer the output of a TSFM. To implement this framework, we introduce ChronoSteer, a multimodal TSFM that can be steered through textual revision instructions, effectively bridging LLMs and TSFMs. Moreover, to mitigate the shortage of cross-modal instruction-series paired data, we devise a two-stage training strategy based on synthetic data. In addition, we construct a high-quality multimodal time series forecasting benchmark to address information-leakage concerns during evaluation. After integrating with an LLM, ChronoSteer, which is trained exclusively on synthetic data, achieves a 25.7% improvement in prediction accuracy compared to the unimodal backbone and a 22.5% gain over the previous state-of-the-art multimodal method.
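The decoupled framework above can be pictured as a three-stage pipeline: the LLM rewrites an event into a revision instruction, the TSFM produces a purely numerical base forecast, and ChronoSteer revises that forecast according to the instruction. Below is a minimal runnable sketch; `mock_llm_to_instruction`, `mock_tsfm_forecast`, and `apply_instruction` are hypothetical stand-ins, since the paper's prompts, backbone, and learned steering module are not detailed in the abstract.

```python
import numpy as np

def mock_llm_to_instruction(event_text: str) -> dict:
    # Stand-in for the LLM stage: map an event description to a structured
    # revision instruction. A real system would prompt an LLM here; the
    # hard-coded output below is purely illustrative.
    return {"op": "scale", "factor": 1.10, "start": 0, "end": 24}

def mock_tsfm_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    # Stand-in for the unimodal TSFM backbone: naive last-value forecast.
    return np.full(horizon, history[-1], dtype=float)

def apply_instruction(forecast: np.ndarray, inst: dict) -> np.ndarray:
    # Stand-in for ChronoSteer's steering step; here a literal edit, whereas
    # the actual model revises the forecast with a trained network.
    revised = forecast.copy()
    if inst["op"] == "scale":
        revised[inst["start"]:inst["end"]] *= inst["factor"]
    return revised

history = np.sin(np.linspace(0, 8 * np.pi, 256)) + 5.0
base = mock_tsfm_forecast(history, horizon=48)
instruction = mock_llm_to_instruction("A promotional campaign starts tomorrow.")
steered = apply_instruction(base, instruction)
```

In the real system the steering step is a learned multimodal TSFM rather than a literal rule, which is what lets it honor free-form textual instructions.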
Related papers
- LLM-Prompt: Integrated Heterogeneous Prompts for Unlocking LLMs in Time Series Forecasting [4.881217428928315]
Time series forecasting aims to model temporal dependencies among variables for future state inference. Recent research demonstrates that large language models (LLMs) achieve promising performance in time series forecasting. We propose LLM-Prompt, an LLM-based time series forecasting framework integrating multi-prompt information and cross-modal semantic alignment.
arXiv Detail & Related papers (2025-06-21T08:22:25Z)
- Does Multimodality Lead to Better Time Series Forecasting? [84.74978289870155]
It remains unclear whether and under what conditions such multimodal integration consistently yields gains. We evaluate two popular multimodal forecasting paradigms: aligning-based methods, which align time series and text representations; and prompting-based methods, which directly prompt large language models for forecasting. Our findings highlight that on the modeling side, incorporating text information is most helpful given (1) high-capacity text models, (2) comparatively weaker time series models, and (3) appropriate aligning strategies.
arXiv Detail & Related papers (2025-06-20T23:55:56Z)
- DanmakuTPPBench: A Multi-modal Benchmark for Temporal Point Process Modeling and Understanding [17.450031813318965]
We introduce DanmakuTPPBench, a benchmark designed to advance multi-modal Temporal Point Process (TPP) modeling. TPPs have been widely studied for modeling temporal event sequences, but existing datasets are predominantly unimodal. Our benchmark establishes strong baselines and calls for further integration of TPP modeling into the multi-modal language modeling landscape.
arXiv Detail & Related papers (2025-05-23T22:38:28Z)
- Dual-Forecaster: A Multimodal Time Series Model Integrating Descriptive and Predictive Texts [5.873261646876953]
We propose Dual-Forecaster, a pioneering multimodal time series model that combines descriptive historical textual information with predictive textual insights. Our comprehensive evaluations on fifteen multimodal time series datasets demonstrate that Dual-Forecaster is a distinctly effective multimodal time series model.
arXiv Detail & Related papers (2025-05-02T09:24:31Z)
- Multimodal Conditioned Diffusive Time Series Forecasting [16.72476672866356]
We propose MCD-TSF, a multimodal conditioned diffusion model for time series forecasting (TSF). Timestamps and texts are combined to establish temporal and semantic correlations among different data points. Experiments on real-world benchmark datasets demonstrate that the proposed MCD-TSF model achieves state-of-the-art performance.
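As a rough illustration of how such conditioning can enter a diffusion forecaster, here is one reverse-diffusion step in which text and timestamp embeddings are fused with the noisy window by simple concatenation; the denoiser, fusion scheme, and schedule constants are toy stand-ins, not MCD-TSF's actual architecture.

```python
import torch
import torch.nn as nn

d = 32
denoiser = nn.Sequential(nn.Linear(d * 3, 128), nn.ReLU(), nn.Linear(128, d))

def denoise_step(x_t, text_emb, time_emb, alpha=0.99, alpha_bar=0.5):
    # Predict the noise from the noisy window plus both conditions, then
    # take the DDPM posterior mean (variance term omitted for brevity).
    eps_hat = denoiser(torch.cat([x_t, text_emb, time_emb], dim=-1))
    return (x_t - (1 - alpha) / (1 - alpha_bar) ** 0.5 * eps_hat) / alpha ** 0.5

x_t = torch.randn(1, d)  # noisy representation of the future window
x_prev = denoise_step(x_t, text_emb=torch.randn(1, d), time_emb=torch.randn(1, d))
```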
arXiv Detail & Related papers (2025-04-28T10:56:23Z)
- Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop [63.34626300024294]
TimeXL is a multi-modal prediction framework that integrates a prototype-based time series encoder with LLM-in-the-loop reasoning. It produces more accurate predictions and interpretable explanations. Empirical evaluations on four real-world datasets demonstrate that TimeXL achieves up to an 8.9% improvement in AUC.
arXiv Detail & Related papers (2025-03-02T20:40:53Z)
- TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents [52.13094810313054]
TimeCAP is a time-series processing framework that creatively employs Large Language Models (LLMs) as contextualizers of time series data. TimeCAP incorporates two independent LLM agents: one generates a textual summary capturing the context of the time series, while the other uses this enriched summary to make more informed predictions. Experimental results on real-world datasets demonstrate that TimeCAP outperforms state-of-the-art methods for time series event prediction.
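The two-agent pattern described in the summary can be sketched as two chained LLM calls; `call_llm` is a hypothetical placeholder for any chat-completion API, and the prompt wording is illustrative rather than TimeCAP's actual prompts.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion API here

def two_agent_predict(series: list[float], question: str) -> str:
    # Agent 1 (contextualize): produce a textual summary of the series.
    summary = call_llm(
        "Summarize the salient trends, seasonality, and anomalies in this "
        f"time series: {series}"
    )
    # Agent 2 (predict): answer the event question given the enriched summary.
    return call_llm(
        f"Context: {summary}\nQuestion: {question}\nAnswer with a prediction."
    )
```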
arXiv Detail & Related papers (2025-02-17T04:17:27Z)
- TempoGPT: Enhancing Time Series Reasoning via Quantizing Embedding [13.996105878417204]
We propose a multi-modal time series data construction approach and a multi-modal time series language model (TLM), TempoGPT. We construct multi-modal data for complex reasoning tasks by analyzing the variable-system relationships within a white-box system. Extensive experiments demonstrate that TempoGPT accurately perceives temporal information, logically infers conclusions, and achieves state-of-the-art performance on the constructed complex time series reasoning tasks.
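The "quantizing embedding" idea can be illustrated with the standard vector-quantization step: continuous temporal embeddings are snapped to their nearest codebook entries, yielding discrete token ids that a language model can consume. TempoGPT's exact quantization scheme may differ; the shapes and codebook below are illustrative.

```python
import numpy as np

def quantize_embeddings(embs: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    # embs: (T, d) continuous temporal embeddings; codebook: (K, d) learned
    # entries. Returns (T,) discrete token ids via nearest-codebook lookup,
    # the standard vector-quantization step.
    dists = ((embs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
tokens = quantize_embeddings(rng.normal(size=(32, 16)), rng.normal(size=(256, 16)))
# The resulting ids can then share a vocabulary with text tokens in the TLM.
```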
arXiv Detail & Related papers (2025-01-13T13:47:05Z)
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.<n>We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.<n>We propose a simple yet effective LLM prompting method that outperforms all other tested methods on our benchmark.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- Metadata Matters for Time Series: Informative Forecasting with Transformers [70.38241681764738]
We propose a Metadata-informed Time Series Transformer (MetaTST) for time series forecasting.
To tackle the unstructured nature of metadata, MetaTST formalizes it into natural language using pre-designed templates.
A Transformer encoder is employed to communicate series and metadata tokens, extending series representations with metadata information.
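A minimal sketch of this design, assuming a generic template and a toy text encoder (the paper's actual templates and encoders are not given in the summary): metadata is verbalized, embedded as an extra token, and jointly encoded with series patch tokens.

```python
import torch
import torch.nn as nn

TEMPLATE = "Dataset: {dataset}. Frequency: {freq}. Variable: {variable}."

def metadata_to_text(meta: dict) -> str:
    # Formalize unstructured metadata into natural language via a template.
    return TEMPLATE.format(**meta)

d_model = 64
series_proj = nn.Linear(16, d_model)           # series patch -> token
meta_embed = nn.EmbeddingBag(10_000, d_model)  # toy stand-in for a text encoder
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
)

patches = torch.randn(1, 12, 16)               # (batch, patches, patch_len)
series_tokens = series_proj(patches)           # (1, 12, d_model)
text = metadata_to_text({"dataset": "ETTh1", "freq": "hourly", "variable": "oil temp"})
word_ids = torch.tensor([[hash(w) % 10_000 for w in text.split()]])  # toy hashing
meta_token = meta_embed(word_ids).unsqueeze(1)  # (1, 1, d_model)
fused = encoder(torch.cat([series_tokens, meta_token], dim=1))
```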
arXiv Detail & Related papers (2024-10-04T11:37:55Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
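The "reprogramming" idea can be sketched as cross-attention that maps series patches onto a small set of text-derived prototype embeddings, so a frozen LLM can consume them; the sizes and names below are illustrative, not Time-LLM's actual configuration.

```python
import torch
import torch.nn as nn

d_llm = 64                               # frozen LLM's hidden size (toy value)
prototypes = torch.randn(100, d_llm)     # text prototypes, e.g. distilled from
                                         # the LLM's word embeddings
patch_proj = nn.Linear(16, d_llm)
attn = nn.MultiheadAttention(d_llm, num_heads=4, batch_first=True)

patches = torch.randn(1, 12, 16)         # (batch, num_patches, patch_len)
queries = patch_proj(patches)            # series patches act as queries
keys = values = prototypes.unsqueeze(0)  # prototypes act as keys/values
reprogrammed, _ = attn(queries, keys, values)
# `reprogrammed` (1, 12, d_llm) can be prepended with a textual prompt and
# passed through the frozen LLM backbone for forecasting.
```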
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.