Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop
- URL: http://arxiv.org/abs/2503.01013v1
- Date: Sun, 02 Mar 2025 20:40:53 GMT
- Title: Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop
- Authors: Yushan Jiang, Wenchao Yu, Geon Lee, Dongjin Song, Kijung Shin, Wei Cheng, Yanchi Liu, Haifeng Chen
- Abstract summary: TimeXL is a multi-modal prediction framework that integrates a prototype-based time series encoder. It produces more accurate predictions and interpretable explanations. Empirical evaluations on four real-world datasets demonstrate that TimeXL achieves up to 8.9% improvement in AUC.
- Score: 63.34626300024294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series analysis provides essential insights for real-world system dynamics and informs downstream decision-making, yet most existing methods often overlook the rich contextual signals present in auxiliary modalities. To bridge this gap, we introduce TimeXL, a multi-modal prediction framework that integrates a prototype-based time series encoder with three collaborating Large Language Models (LLMs) to deliver more accurate predictions and interpretable explanations. First, a multi-modal prototype-based encoder processes both time series and textual inputs to generate preliminary forecasts alongside case-based rationales. These outputs then feed into a prediction LLM, which refines the forecasts by reasoning over the encoder's predictions and explanations. Next, a reflection LLM compares the predicted values against the ground truth, identifying textual inconsistencies or noise. Guided by this feedback, a refinement LLM iteratively enhances text quality and triggers encoder retraining. This closed-loop workflow -- prediction, critique (reflect), and refinement -- continuously boosts the framework's performance and interpretability. Empirical evaluations on four real-world datasets demonstrate that TimeXL achieves up to 8.9% improvement in AUC and produces human-centric, multi-modal explanations, highlighting the power of LLM-driven reasoning for time series prediction.
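The abstract describes a predict / reflect / refine loop around a prototype-based encoder. The following is a minimal sketch of that control flow only; the encoder and LLM calls are stand-in stubs with hypothetical names (encoder_predict, call_llm), not the authors' implementation or API.

```python
"""Sketch of a TimeXL-style closed loop: encoder forecast -> prediction LLM ->
reflection LLM -> refinement LLM -> (retrain encoder and repeat).
All components below are placeholder stubs, assumed for illustration."""

from typing import List, Tuple


def encoder_predict(series: List[float], text: str) -> Tuple[float, str]:
    # Placeholder for the multi-modal prototype-based encoder: returns a
    # preliminary forecast and a case-based rationale.
    forecast = sum(series[-3:]) / 3.0
    rationale = f"Forecast follows recent trend; context used: '{text[:40]}'"
    return forecast, rationale


def call_llm(role: str, prompt: str) -> str:
    # Placeholder for an LLM call; a real system would query a hosted model.
    return f"[{role}] response to: {prompt[:60]}"


def timexl_loop(series: List[float], text: str, y_true: float, rounds: int = 3):
    prediction, rationale = None, None
    for _ in range(rounds):
        # 1. Encoder produces a preliminary forecast plus an explanation.
        forecast, rationale = encoder_predict(series, text)
        # 2. Prediction LLM refines the forecast by reasoning over the
        #    encoder's output and the textual context.
        prediction = call_llm("prediction", f"forecast={forecast}, rationale={rationale}")
        # 3. Reflection LLM critiques the prediction against ground truth,
        #    flagging textual inconsistencies or noise.
        feedback = call_llm("reflection", f"prediction={prediction}, truth={y_true}")
        # 4. Refinement LLM improves the text; in the paper this also
        #    triggers encoder retraining before the next round.
        text = call_llm("refinement", f"context={text}, feedback={feedback}")
    return prediction, rationale


if __name__ == "__main__":
    pred, why = timexl_loop([0.2, 0.4, 0.6, 0.8], "News: demand rising", y_true=1.0)
    print(pred)
    print(why)
```

The stubs only preserve the loop structure; swapping in a real encoder and LLM client would follow the same four-step round.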
Related papers
- Forecasting from Clinical Textual Time Series: Adaptations of the Encoder and Decoder Language Model Families [6.882042556551609]
We introduce the forecasting problem from textual time series, where timestamped clinical findings serve as the primary input for prediction.
We evaluate a diverse suite of models, including fine-tuned decoder-based large language models and encoder-based transformers.
arXiv Detail & Related papers (2025-04-14T15:48:56Z) - TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents [52.13094810313054]
TimeCAP is a time-series processing framework that creatively employs Large Language Models (LLMs) as contextualizers of time series data. TimeCAP incorporates two independent LLM agents: one generates a textual summary capturing the context of the time series, while the other uses this enriched summary to make more informed predictions. Experimental results on real-world datasets demonstrate that TimeCAP outperforms state-of-the-art methods for time series event prediction.
arXiv Detail & Related papers (2025-02-17T04:17:27Z) - Time-VLM: Exploring Multimodal Vision-Language Models for Augmented Time Series Forecasting [26.4608782425897]
Time-VLM is a novel framework that bridges temporal, visual, and textual modalities for enhanced forecasting. Our framework comprises three key components: (1) a Retrieval-Augmented Learner, which extracts enriched temporal features through memory bank interactions; (2) a Vision-Augmented Learner, which encodes time series as informative images; and (3) a Text-Augmented Learner, which generates contextual textual descriptions.
arXiv Detail & Related papers (2025-02-06T05:59:45Z) - Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
We propose a simple yet effective LLM prompting method that outperforms all other tested methods on our benchmark.
arXiv Detail & Related papers (2024-10-24T17:56:08Z) - Beam Prediction based on Large Language Models [51.45077318268427]
We formulate the millimeter wave (mmWave) beam prediction problem as a time series forecasting task.
We transform historical observations into text-based representations using a trainable tokenizer.
Our method harnesses the power of LLMs to predict future optimal beams.
arXiv Detail & Related papers (2024-08-16T12:40:01Z) - CALF: Aligning LLMs for Time Series Forecasting via Cross-modal Fine-Tuning [59.88924847995279]
We propose a novel Cross-Modal LLM Fine-Tuning (CALF) framework for MTSF.
To reduce the distribution discrepancy, we develop the cross-modal match module.
CALF establishes state-of-the-art performance for both long-term and short-term forecasting tasks.
arXiv Detail & Related papers (2024-03-12T04:04:38Z) - Multi-Patch Prediction: Adapting LLMs for Time Series Representation Learning [22.28251586213348]
aLLM4TS is an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning.
A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding.
arXiv Detail & Related papers (2024-02-07T13:51:26Z) - Understanding the Role of Textual Prompts in LLM for Time Series Forecasting: an Adapter View [21.710722062737577]
In the burgeoning domain of Large Language Models (LLMs), there is a growing interest in applying LLMs to time series forecasting.
This study aims to understand how and why the integration of textual prompts into LLM can effectively improve the prediction accuracy of time series.
arXiv Detail & Related papers (2023-11-24T16:32:47Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z) - Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)