Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
- URL: http://arxiv.org/abs/2310.01728v2
- Date: Mon, 29 Jan 2024 06:27:53 GMT
- Title: Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
- Authors: Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming
Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen
- Abstract summary: Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
- Score: 110.20279343734548
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series forecasting holds significant importance in many real-world
dynamic systems and has been extensively studied. Unlike natural language
processing (NLP) and computer vision (CV), where a single large model can tackle
multiple tasks, models for time series forecasting are often specialized,
necessitating distinct designs for different tasks and applications. While
pre-trained foundation models have made impressive strides in NLP and CV, their
development in time series domains has been constrained by data sparsity.
Recent studies have revealed that large language models (LLMs) possess robust
pattern recognition and reasoning abilities over complex sequences of tokens.
However, the challenge remains in effectively aligning the modalities of time
series data and natural language to leverage these capabilities. In this work,
we present Time-LLM, a reprogramming framework to repurpose LLMs for general
time series forecasting with the backbone language models kept intact. We begin
by reprogramming the input time series with text prototypes before feeding it
into the frozen LLM to align the two modalities. To augment the LLM's ability
to reason with time series data, we propose Prompt-as-Prefix (PaP), which
enriches the input context and directs the transformation of reprogrammed input
patches. The transformed time series patches from the LLM are finally projected
to obtain the forecasts. Our comprehensive evaluations demonstrate that
Time-LLM is a powerful time series learner that outperforms state-of-the-art,
specialized forecasting models. Moreover, Time-LLM excels in both few-shot and
zero-shot learning scenarios.
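For intuition, here is a minimal PyTorch-style sketch of the pipeline the abstract describes: patch the series, reprogram the patches against text prototypes with cross-attention, prepend embedded prompt tokens (Prompt-as-Prefix), run a frozen LLM backbone, and project to the forecast. It is not the authors' implementation; the layer choices, dimensions, and the assumption of a Hugging Face-style backbone that accepts `inputs_embeds` are illustrative only.

```python
# Minimal sketch of a Time-LLM-style pipeline (illustrative, not the authors' code).
import torch
import torch.nn as nn


class ReprogrammedForecaster(nn.Module):
    def __init__(self, llm: nn.Module, d_llm: int, patch_len: int = 16,
                 n_patches: int = 8, n_prototypes: int = 100, horizon: int = 96):
        super().__init__()
        self.patch_len = patch_len
        # Text prototypes: a small set of vectors standing in for the LLM's
        # word-embedding space (learnable here for simplicity).
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_llm))
        self.patch_embed = nn.Linear(patch_len, d_llm)
        # Cross-attention "reprograms" series patches into the text modality.
        self.reprogram = nn.MultiheadAttention(d_llm, num_heads=8, batch_first=True)
        self.llm = llm                                  # backbone kept frozen
        for p in self.llm.parameters():
            p.requires_grad_(False)
        self.head = nn.Linear(n_patches * d_llm, horizon)

    def forward(self, x: torch.Tensor, prompt_embeds: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback) with lookback == n_patches * patch_len
        # prompt_embeds: (batch, n_prompt_tokens, d_llm), the embedded Prompt-as-Prefix
        patches = x.unfold(1, self.patch_len, self.patch_len)    # (b, n_patches, patch_len)
        queries = self.patch_embed(patches)                      # (b, n_patches, d_llm)
        keys = self.prototypes.expand(x.size(0), -1, -1)
        reprogrammed, _ = self.reprogram(queries, keys, keys)    # align with text space
        tokens = torch.cat([prompt_embeds, reprogrammed], dim=1)
        # Assumes a Hugging Face-style backbone that accepts `inputs_embeds`.
        hidden = self.llm(inputs_embeds=tokens).last_hidden_state
        out = hidden[:, -reprogrammed.size(1):, :]               # keep the patch positions
        return self.head(out.flatten(1))                         # (b, horizon)
```

In the paper, `prompt_embeds` would come from embedding a natural-language prefix that enriches the input context; here it is simply passed in to keep the sketch short.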
Related papers
- Adapting Large Language Models for Time Series Modeling via a Novel Parameter-efficient Adaptation Method [9.412920379798928]
Time series modeling holds significant importance in many real-world applications.
We propose the Time-LlaMA framework to align the time series and natural language modalities.
We show that our proposed method achieves the state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2025-02-19T13:52:26Z)
- TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents [52.13094810313054]
TimeCAP is a time-series processing framework that creatively employs Large Language Models (LLMs) as contextualizers of time series data.
TimeCAP incorporates two independent LLM agents: one generates a textual summary capturing the context of the time series, while the other uses this enriched summary to make more informed predictions.
Experimental results on real-world datasets demonstrate that TimeCAP outperforms state-of-the-art methods for time series event prediction.
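Read literally, this two-agent pattern can be sketched as follows. `ask_llm` is a hypothetical text-in/text-out callable standing in for whichever chat-completion client is used, and the prompts are illustrative rather than those of the paper.

```python
# Minimal sketch of a two-agent contextualize-then-predict pattern (not the authors' code).
from typing import Callable, Sequence


def timecap_style_predict(window: Sequence[float],
                          ask_llm: Callable[[str], str]) -> str:
    values = ", ".join(f"{v:.2f}" for v in window)
    # Agent 1 (contextualizer): produce a textual summary of the window.
    context = ask_llm(
        "Summarize the trend, seasonality, and any anomalies in this time "
        f"series window: [{values}]."
    )
    # Agent 2 (predictor): condition the event prediction on that summary.
    return ask_llm(
        f"Context about the series:\n{context}\n"
        "Based on this context, will an event of interest occur in the next "
        "time step? Answer yes or no with a brief rationale."
    )
```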
arXiv Detail & Related papers (2025-02-17T04:17:27Z)
- Time Series Language Model for Descriptive Caption Generation [11.796431549951055]
We introduce TSLM, a novel time series language model designed specifically for time series captioning.
TSLM operates as an encoder-decoder model, leveraging both text prompts and time series data representations.
We show that TSLM outperforms existing state-of-the-art approaches from multiple data modalities by a significant margin.
arXiv Detail & Related papers (2025-01-03T14:34:30Z)
- Metadata Matters for Time Series: Informative Forecasting with Transformers [70.38241681764738]
We propose a Metadata-informed Time Series Transformer (MetaTST) for time series forecasting.
To tackle the unstructured nature of metadata, MetaTST formalizes it into natural language using pre-designed templates.
A Transformer encoder is employed to communicate series and metadata tokens, extending the series representations with metadata information.
arXiv Detail & Related papers (2024-10-04T11:37:55Z)
- An Evaluation of Standard Statistical Models and LLMs on Time Series Forecasting [16.583730806230644]
This study highlights the key challenges that large language models encounter in the context of time series prediction.
The empirical results indicate that while large language models can perform well in zero-shot forecasting for certain datasets, their predictive accuracy diminishes notably when confronted with diverse time series data and traditional signals.
arXiv Detail & Related papers (2024-08-09T05:13:03Z)
- Empowering Time Series Analysis with Large Language Models: A Survey [24.202539098675953]
We provide a systematic overview of methods that leverage large language models for time series analysis.
Specifically, we first state the challenges and motivations of applying language models in the context of time series.
Next, we categorize existing methods into different groups (i.e., direct query, tokenization, prompt design, fine-tuning, and model integration) and highlight the key ideas within each group.
arXiv Detail & Related papers (2024-02-05T16:46:35Z)
- AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
AutoTimes projects time series into the embedding space of language tokens and autoregressively generates future predictions with arbitrary lengths.
We formulate time series as prompts, extending the context for prediction beyond the lookback window.
AutoTimes achieves state-of-the-art performance with 0.1% trainable parameters and over $5\times$ training/inference speedup.
arXiv Detail & Related papers (2024-02-04T06:59:21Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Large Language Models Are Zero-Shot Time Series Forecasters [48.73953666153385]
By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text.
We find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the performance of purpose-built time series models trained on the downstream tasks.
arXiv Detail & Related papers (2023-10-11T19:01:28Z)
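The digit-string framing in that last entry is simple enough to sketch directly. The snippet below shows one plausible serialization scheme (fixed-precision values joined by commas), assumed for illustration; it is not the exact tokenization-aware encoding used in the paper.

```python
# Illustrative sketch of framing forecasting as next-token prediction over digit strings.
from typing import List


def encode_series(values: List[float], precision: int = 2) -> str:
    # Scale to integers and join with commas so the LLM sees plain digit tokens.
    scale = 10 ** precision
    return ", ".join(str(round(v * scale)) for v in values)


def decode_series(text: str, precision: int = 2) -> List[float]:
    scale = 10 ** precision
    return [int(tok) / scale for tok in text.split(",") if tok.strip()]


history = [0.71, 0.73, 0.78, 0.80]
prompt = encode_series(history) + ", "   # the LLM is asked to continue this string
# A completion such as "83, 85" would decode back to [0.83, 0.85].
assert decode_series(encode_series(history)) == history
```

Given such a prompt, the model's continuation is decoded back into numeric values to produce the forecast.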