T-LLM: Teaching Large Language Models to Forecast Time Series via Temporal Distillation
- URL: http://arxiv.org/abs/2602.01937v1
- Date: Mon, 02 Feb 2026 10:40:27 GMT
- Title: T-LLM: Teaching Large Language Models to Forecast Time Series via Temporal Distillation
- Authors: Suhan Guo, Bingxu Wang, Shaodan Zhang, Furao Shen
- Abstract summary: Time series forecasting plays a critical role in decision-making across many real-world applications. We propose T-LLM, a temporal distillation framework that equips general-purpose language models with time series forecasting capability.
- Score: 7.6933817667680096
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Time series forecasting plays a critical role in decision-making across many real-world applications. Unlike data in vision and language domains, time series data is inherently tied to the evolution of underlying processes and can only accumulate as real-world time progresses, limiting the effectiveness of scale-driven pretraining alone. This time-bound constraint poses a challenge for enabling large language models (LLMs) to acquire forecasting capability, as existing approaches primarily rely on representation-level alignment or inference-time temporal modules rather than explicitly teaching forecasting behavior to the LLM. We propose T-LLM, a temporal distillation framework that equips general-purpose LLMs with time series forecasting capability by transferring predictive behavior from a lightweight temporal teacher during training. The teacher combines trend modeling and frequency-domain analysis to provide structured temporal supervision, and is removed entirely at inference, leaving the LLM as the sole forecasting model. Experiments on benchmark datasets and infectious disease forecasting tasks demonstrate that T-LLM consistently outperforms existing LLM-based forecasting methods under full-shot, few-shot, and zero-shot settings, while enabling a simple and efficient deployment pipeline.
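Below is a minimal PyTorch sketch of the temporal-distillation recipe the abstract describes: a lightweight teacher combining a linear trend branch with a frequency-domain (rFFT) branch supervises an LLM-style student during training, and only the student is kept at inference. All module choices, sizes, and the loss weight `alpha` are illustrative assumptions rather than the paper's actual implementation; the student here is a small Transformer stand-in for a pretrained LLM.

```python
# Hedged sketch of temporal distillation; architecture details are assumed.
import torch
import torch.nn as nn


class TemporalTeacher(nn.Module):
    """Lightweight teacher: a trend branch plus a frequency-domain branch.
    Assumed to be pretrained on the forecasting task before distillation."""

    def __init__(self, lookback: int, horizon: int):
        super().__init__()
        self.trend = nn.Linear(lookback, horizon)         # linear trend extrapolation
        n_freq = lookback // 2 + 1                        # number of rFFT bins
        self.freq = nn.Linear(2 * n_freq, horizon)        # real+imag spectrum -> forecast

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, lookback)
        spec = torch.fft.rfft(x, dim=-1)                  # frequency-domain view
        feats = torch.cat([spec.real, spec.imag], dim=-1)
        return self.trend(x) + self.freq(feats)


class LLMStudent(nn.Module):
    """Stand-in for the LLM forecaster; a real setup would wrap a pretrained
    language model with input/output projections (an assumption here)."""

    def __init__(self, lookback: int, horizon: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                # one scalar step -> one "token"
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(lookback * d_model, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, lookback)
        h = self.backbone(self.embed(x.unsqueeze(-1)))    # (batch, lookback, d_model)
        return self.head(h.flatten(1))


lookback, horizon, alpha = 96, 24, 0.5                    # alpha: distillation weight (assumed)
teacher = TemporalTeacher(lookback, horizon)
student = LLMStudent(lookback, horizon)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

x = torch.randn(32, lookback)                             # toy batch of univariate windows
y = torch.randn(32, horizon)

# Training step: the student fits the ground truth while also matching the
# teacher's structured temporal predictions (the distillation signal).
with torch.no_grad():
    y_teacher = teacher(x)
y_student = student(x)
loss = mse(y_student, y) + alpha * mse(y_student, y_teacher)
opt.zero_grad()
loss.backward()
opt.step()

# Inference: the teacher is discarded entirely; the student forecasts alone.
forecast = student(x[:1])
```

Because the teacher is dropped after training, deployment reduces to a single forward pass through the student, which is consistent with the simple deployment pipeline the abstract claims.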
Related papers
- Is More Context Always Better? Examining LLM Reasoning Capability for Time Interval Prediction [15.45305246863211]
Large Language Models (LLMs) have demonstrated impressive capabilities in reasoning and prediction across different domains. This paper presents a systematic study investigating whether LLMs can predict time intervals between recurring user actions. We benchmark state-of-the-art LLMs in zero-shot settings against both statistical and machine-learning models.
arXiv Detail & Related papers (2026-01-15T07:18:40Z) - Enhancing Zero-Shot Time Series Forecasting in Off-the-Shelf LLMs via Noise Injection [18.267727687739853]
Large Language Models (LLMs) have demonstrated effectiveness as zero-shot time series (TS) forecasters. The key challenge lies in tokenizing TS data into textual representations that align with LLMs' pre-trained knowledge. We introduce two novel TS datasets that fall outside the pre-training scope of all evaluated LLMs, and consistently observe improved performance.
arXiv Detail & Related papers (2025-12-23T08:02:33Z) - Beyond Naïve Prompting: Strategies for Improved Zero-shot Context-aided Forecasting with LLMs [57.82819770709032]
Large language models (LLMs) can be effective context-aided forecasters via naïve direct prompting. ReDP improves interpretability by eliciting explicit reasoning traces, allowing us to assess the model's reasoning over the context. CorDP leverages LLMs solely to refine existing forecasts with context, enhancing their applicability in real-world forecasting pipelines. IC-DP proposes embedding historical examples of context-aided forecasting tasks in the prompt, substantially improving accuracy even for the largest models.
arXiv Detail & Related papers (2025-08-13T16:02:55Z) - Semantic-Enhanced Time-Series Forecasting via Large Language Models [20.383296465541758]
Time series forecasting plays a significant role in finance, energy, meteorology, and IoT applications. Recent studies have leveraged the generalization capabilities of large language models (LLMs) to adapt to time series forecasting, achieving promising performance. We propose a novel Semantic-Enhanced LLM (SE-LLM) that embeds the inherent periodicity and anomalous characteristics of time series into the semantic space.
arXiv Detail & Related papers (2025-08-11T07:19:21Z) - Time-Prompt: Integrated Heterogeneous Prompts for Unlocking LLMs in Time Series Forecasting [13.283980715705693]
Time series forecasting aims to model temporal dependencies among variables for future state inference. Deep learning-based methods have achieved remarkable progress, but they still exhibit suboptimal performance in long-term forecasting. We propose Time-Prompt, a framework for activating large language models for time series forecasting.
arXiv Detail & Related papers (2025-06-21T08:22:25Z) - Forecasting Time Series with LLMs via Patch-Based Prompting and Decomposition [48.50019311384125]
We explore simple and flexible prompt-based strategies that enable LLMs to perform time series forecasting without extensive retraining. We propose our own method, PatchInstruct, which enables LLMs to make precise and effective predictions.
arXiv Detail & Related papers (2025-06-15T19:42:58Z) - Efficient Model Selection for Time Series Forecasting via LLMs [52.31535714387368]
We propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs.
arXiv Detail & Related papers (2025-04-02T20:33:27Z) - LLM-PS: Empowering Large Language Models for Time Series Forecasting with Temporal Patterns and Semantics [56.99021951927683]
Time Series Forecasting (TSF) is critical in many real-world domains like financial planning and health monitoring. Existing Large Language Models (LLMs) usually perform suboptimally because they neglect the inherent characteristics of time series data. We propose LLM-PS to empower the LLM for TSF by learning the fundamental Patterns and meaningful Semantics from time series data.
arXiv Detail & Related papers (2025-03-12T11:45:11Z) - CALF: Aligning LLMs for Time Series Forecasting via Cross-modal Fine-Tuning [59.88924847995279]
We propose a novel Cross-Modal LLM Fine-Tuning (CALF) framework for multivariate time series forecasting (MTSF). To reduce the cross-modal distribution discrepancy, we develop a cross-modal match module. CALF establishes state-of-the-art performance for both long-term and short-term forecasting tasks.
arXiv Detail & Related papers (2024-03-12T04:04:38Z) - AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
AutoTimes projects time series into the embedding space of language tokens and autoregressively generates future predictions with arbitrary lengths.
We formulate time series as prompts, extending the context for prediction beyond the lookback window.
AutoTimes achieves state-of-the-art results with 0.1% trainable parameters and over 5× training/inference speedup (see the sketch after this list).
arXiv Detail & Related papers (2024-02-04T06:59:21Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
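As referenced in the AutoTimes entry above, here is a hedged sketch of the segment-to-token-embedding idea its summary describes: each lookback segment is linearly projected into the backbone's embedding space, and forecasts of arbitrary length are produced by repeatedly predicting the next segment and feeding it back in. The patch length, projection layers, and the small Transformer stand-in for a frozen pretrained LLM are all assumptions for illustration, not the paper's actual components.

```python
# Hedged sketch of autoregressive segment-level forecasting; details assumed.
import torch
import torch.nn as nn

patch_len, d_model = 24, 64

embed = nn.Linear(patch_len, d_model)      # time-series segment -> "token" embedding
head = nn.Linear(d_model, patch_len)       # last hidden state -> next-segment forecast
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for a frozen LLM


def forecast(series: torch.Tensor, n_patches: int) -> torch.Tensor:
    """Autoregressively roll out n_patches future segments."""
    x = series.reshape(series.size(0), -1, patch_len)  # (batch, n_segments, patch_len)
    preds = []
    for _ in range(n_patches):                         # arbitrary-length generation
        h = backbone(embed(x))                         # contextualize all segments
        nxt = head(h[:, -1])                           # predict the next segment
        preds.append(nxt)
        x = torch.cat([x, nxt.unsqueeze(1)], dim=1)    # feed the prediction back in
    return torch.cat(preds, dim=1)                     # (batch, n_patches * patch_len)


y_hat = forecast(torch.randn(8, 96), n_patches=3)      # 96-step lookback -> 72-step forecast
```

Only the thin input/output projections would be trained in such a setup, which is one plausible reading of the 0.1% trainable-parameter figure quoted above.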