TS-Reasoner: Aligning Time Series Foundation Models with LLM Reasoning
- URL: http://arxiv.org/abs/2510.03519v1
- Date: Fri, 03 Oct 2025 21:20:54 GMT
- Title: TS-Reasoner: Aligning Time Series Foundation Models with LLM Reasoning
- Authors: Fangxu Yu, Hongyu Zhao, Tianyi Zhou
- Abstract summary: Time series reasoning is crucial to decision-making in diverse domains, including finance, energy usage, traffic, weather, and scientific discovery. LLMs often struggle with the numerical understanding of time series data. We develop a training recipe that applies instruction finetuning after the alignment pretraining.
- Score: 17.69077698133793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series reasoning is crucial to decision-making in diverse domains, including finance, energy usage, traffic, weather, and scientific discovery. While existing time series foundation models (TSFMs) can capture low-level dynamic patterns and provide accurate forecasting, further analysis usually requires additional background knowledge and sophisticated reasoning, which are lacking in most TSFMs but can be achieved through large language models (LLMs). On the other hand, without expensive post-training, LLMs often struggle with the numerical understanding of time series data. Although it is intuitive to integrate the two types of models, developing effective training recipes that align the two modalities for reasoning tasks is still an open challenge. To this end, we propose TS-Reasoner that aligns the latent representations of TSFMs with the textual inputs of LLMs for downstream understanding/reasoning tasks. Specifically, we propose a simple yet effective method to curate diverse, synthetic pairs of time series and textual captions for alignment training. We then develop a two-stage training recipe that applies instruction finetuning after the alignment pretraining. Unlike existing works that train an LLM to take time series as inputs, we leverage a pretrained TSFM and freeze it during training. Extensive experiments on several benchmarks demonstrate that TS-Reasoner not only outperforms a wide range of prevailing LLMs, Vision Language Models (VLMs), and Time Series LLMs, but also achieves this with remarkable data efficiency, e.g., using less than half the training data.
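The core alignment idea in the abstract, a frozen TSFM whose latent representations are projected into the LLM's input embedding space and prepended to the text tokens, can be sketched roughly as follows. This is a minimal illustration under assumed dimensions; the two-layer MLP projector and all sizes are my assumptions, not the paper's reported architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ts_tokens(ts_emb, W1, b1, W2, b2):
    """Map frozen-TSFM patch embeddings into the LLM embedding space
    with a small two-layer MLP projector (the only trainable part
    in this sketch; the TSFM itself stays frozen)."""
    h = np.maximum(ts_emb @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

tsfm_dim, hidden, llm_dim = 256, 512, 1024  # illustrative sizes
W1 = rng.normal(size=(tsfm_dim, hidden)) * 0.02
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, llm_dim)) * 0.02
b2 = np.zeros(llm_dim)

# (batch, patches, tsfm_dim): stand-in for frozen TSFM outputs
ts_emb = rng.normal(size=(2, 32, tsfm_dim))
# (batch, tokens, llm_dim): stand-in for LLM text-token embeddings
text_emb = rng.normal(size=(2, 10, llm_dim))

ts_tokens = project_ts_tokens(ts_emb, W1, b1, W2, b2)
# Prepend the aligned time-series tokens to the text embeddings,
# so the LLM consumes one fused sequence.
fused = np.concatenate([ts_tokens, text_emb], axis=1)
print(fused.shape)  # (2, 42, 1024)
```

During alignment pretraining, only the projector weights would be updated against the synthetic time-series/caption pairs; instruction finetuning then follows in the second stage.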
Related papers
- Eliciting Chain-of-Thought Reasoning for Time Series Analysis using Reinforcement Learning [2.426309874608745]
Complex numerical time series analysis often demands multi-step reasoning capabilities beyond current models' reach. We introduce Chain Of Thought for Understanding Numerical Time Series (COUNTS), the first framework that trains large language models to perform Chain-of-Thought (CoT) reasoning across diverse time series tasks using reinforcement learning (RL) with verifiable rewards. Our experiments demonstrate that this RL-driven approach with intermediate CoT reasoning significantly enhances LLM performance across various time series analysis tasks, opening new possibilities for complex temporal data reasoning.
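A "verifiable reward" here means the reward can be computed mechanically from the model's final numeric answer, with no learned judge. A toy sketch of what such a reward function could look like follows; the tolerance threshold and binary shaping are illustrative assumptions, not the COUNTS design.

```python
import numpy as np

def verifiable_reward(pred, target, tol=0.1):
    """Binary reward checked directly against ground truth:
    1.0 if the mean absolute error of the final numeric answer
    is within `tol`, else 0.0. Hypothetical shaping for illustration."""
    err = np.mean(np.abs(np.asarray(pred, dtype=float)
                         - np.asarray(target, dtype=float)))
    return 1.0 if err <= tol else 0.0

print(verifiable_reward([1.0, 2.0], [1.05, 1.98]))  # 1.0
print(verifiable_reward([1.0], [2.0]))              # 0.0
```

Because the check is deterministic, the same function can score every rollout in an RL loop without human labels.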
arXiv Detail & Related papers (2025-10-01T17:02:28Z)
- SciTS: Scientific Time Series Understanding and Generation with LLMs [53.35994674187729]
We introduce SciTS, a benchmark spanning 12 scientific domains and 43 tasks. We benchmark 17 models, including text-only LLMs, multimodal LLMs, and unified time series models. We then introduce Time Omni, a framework that equips LLMs with the ability to understand and generate time series.
arXiv Detail & Related papers (2025-09-26T09:25:16Z)
- How Important are Videos for Training Video LLMs? [55.965474658745315]
We present findings indicating Video LLMs are more capable of temporal reasoning after image-only training than one would assume. We introduce a simple finetuning scheme involving sequences of annotated images and questions targeting temporal capabilities. This suggests suboptimal utilization of rich temporal features found in real video by current models.
arXiv Detail & Related papers (2025-06-07T21:32:19Z)
- LLM-PS: Empowering Large Language Models for Time Series Forecasting with Temporal Patterns and Semantics [56.99021951927683]
Time Series Forecasting (TSF) is critical in many real-world domains like financial planning and health monitoring. Existing Large Language Models (LLMs) usually perform suboptimally because they neglect the inherent characteristics of time series data. We propose LLM-PS to empower the LLM for TSF by learning the fundamental *Patterns* and meaningful *Semantics* from time series data.
arXiv Detail & Related papers (2025-03-12T11:45:11Z)
- Position: Empowering Time Series Reasoning with Multimodal LLMs [49.73647759532127]
We argue that multimodal language models (MLLMs) can enable more powerful and flexible reasoning for time series analysis. We call on researchers and practitioners to leverage this potential by developing strategies that prioritize trust, interpretability, and robust reasoning in MLLMs.
arXiv Detail & Related papers (2025-02-03T16:10:48Z)
- Large Language Models are Few-shot Multivariate Time Series Classifiers [23.045734479292356]
Large Language Models (LLMs) have been extensively applied in time series analysis. Yet, their utility in few-shot classification (i.e., a crucial training scenario) is underexplored. We aim to leverage the extensive pre-trained knowledge in LLMs to overcome the data scarcity problem.
arXiv Detail & Related papers (2025-01-30T03:59:59Z)
- CALF: Aligning LLMs for Time Series Forecasting via Cross-modal Fine-Tuning [59.88924847995279]
We propose a novel Cross-Modal LLM Fine-Tuning (CALF) framework for multivariate time series forecasting (MTSF). To reduce the distribution discrepancy, we develop the cross-modal match module. CALF establishes state-of-the-art performance for both long-term and short-term forecasting tasks.
arXiv Detail & Related papers (2024-03-12T04:04:38Z)
- AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
AutoTimes projects time series into the embedding space of language tokens and autoregressively generates future predictions with arbitrary lengths.
We formulate time series as prompts, extending the context for prediction beyond the lookback window.
AutoTimes achieves state-of-the-art with 0.1% trainable parameters and over 5× training/inference speedup.
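The autoregressive generation the AutoTimes summary describes, repeatedly predicting the next segment and feeding it back as context so forecasts extend to arbitrary lengths, can be sketched as a simple loop. The segment size and the stand-in predictor below are illustrative; a real system would call the LLM over projected segment tokens.

```python
import numpy as np

def autoregressive_forecast(history, step_fn, horizon, segment=4):
    """Toy sketch of segment-wise autoregressive forecasting:
    predict the next `segment` values from the current context,
    append them, and repeat until `horizon` values are produced."""
    context = list(history)
    out = []
    while len(out) < horizon:
        nxt = step_fn(np.asarray(context))  # next `segment` values
        out.extend(nxt)
        context.extend(nxt)                 # feed prediction back in
    return np.asarray(out[:horizon])

def naive_step(ctx, segment=4):
    # Stand-in predictor: repeat the mean of the last segment.
    # In AutoTimes this role is played by the LLM over segment tokens.
    return [float(ctx[-segment:].mean())] * segment

hist = np.arange(16, dtype=float)
pred = autoregressive_forecast(hist, naive_step, horizon=10)
print(pred.shape)  # (10,)
```

Because the loop only ever conditions on its own growing context, the prediction length is decoupled from the lookback window, which is the property the summary highlights.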
arXiv Detail & Related papers (2024-02-04T06:59:21Z)
- VILA: On Pre-training for Visual Language Models [74.08039416548209]
We study the design options for VLM pre-training through step-by-step controllable comparisons.
We build VILA, a Visual Language model family that consistently outperforms the state-of-the-art models.
arXiv Detail & Related papers (2023-12-12T18:58:18Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- LLM4TS: Aligning Pre-Trained LLMs as Data-Efficient Time-Series Forecasters [11.796765525301051]
We propose a framework for time-series forecasting with pre-trained Large Language Models (LLMs). LLM4TS consists of a two-stage fine-tuning strategy: an alignment stage that adapts LLMs to the nuances of time-series data, followed by a forecasting fine-tuning stage for downstream time-series forecasting tasks. Our framework features a novel two-level aggregation method that integrates multi-scale temporal data within pre-trained LLMs, enhancing their ability to interpret time-specific information.
arXiv Detail & Related papers (2023-08-16T16:19:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.