Time Series Forecasting with LLMs: Understanding and Enhancing Model
Capabilities
- URL: http://arxiv.org/abs/2402.10835v2
- Date: Mon, 19 Feb 2024 02:30:15 GMT
- Title: Time Series Forecasting with LLMs: Understanding and Enhancing Model
Capabilities
- Authors: Mingyu Jin, Hua Tang, Chong Zhang, Qinkai Yu, Chengzhi Liu, Suiyuan
Zhu, Yongfeng Zhang, Mengnan Du
- Abstract summary: Large language models (LLMs) have been applied in many fields with rapid development in recent years.
This paper shows that LLMs excel in predicting time series with clear patterns and trends but face challenges with datasets lacking periodicity.
The paper also investigates input strategies, finding that incorporating external knowledge and adopting natural language paraphrases improves the predictive performance of LLMs on time series.
- Score: 39.874834611685124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have developed rapidly in recent years and
have been applied in many fields. Time series forecasting, a classic machine
learning task, has recently received a boost from LLMs. However, the
preferences and behavior of LLMs in this field remain under-explored. In this
paper, we compare LLMs with traditional models and identify several properties
of LLMs in time series prediction. For example, our study shows that LLMs
excel at predicting time series with clear patterns and trends but struggle
with datasets lacking periodicity. We explain this finding by designing
prompts that ask the LLMs to state the period of each dataset. In addition, we
investigate input strategies and find that incorporating external knowledge
and adopting natural language paraphrases improves the predictive performance
of LLMs on time series. Overall, this study provides insight into the
advantages and limitations of LLMs in time series forecasting under different
conditions.
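Both of the paper's input strategies are prompt-level, so they can be approximated without special tooling. Below is a minimal Python sketch of two prompt builders: one that asks a model to state a series' period (the probing technique from the abstract), and one that paraphrases the raw numbers in natural language with optional external knowledge. The template wording, the `context` field, and the number formatting are illustrative assumptions, not the authors' exact prompts.

```python
# A minimal sketch of the two prompt strategies discussed in the abstract.
# The template wording, the ``context`` field, and the number formatting
# are assumptions for illustration, not the paper's exact prompts.

def periodicity_probe(values: list[float]) -> str:
    """Prompt asking an LLM to identify the period of a series."""
    series = ", ".join(f"{v:.2f}" for v in values)
    return (
        "Here is a time series sampled at regular intervals:\n"
        f"{series}\n"
        "Does this series exhibit a repeating period? If so, state the "
        "period length in number of steps; otherwise answer 'none'."
    )

def paraphrased_forecast_prompt(values: list[float], horizon: int,
                                context: str = "") -> str:
    """Prompt that paraphrases the raw numbers in natural language and
    optionally prepends external knowledge about the series."""
    series = ", ".join(f"{v:.2f}" for v in values)
    lines = []
    if context:
        lines.append(f"Background knowledge: {context}")
    lines.append(
        f"The following {len(values)} observations were recorded in order, "
        f"from oldest to newest: {series}."
    )
    lines.append(
        f"Continue the pattern and predict the next {horizon} values "
        "as a comma-separated list."
    )
    return "\n".join(lines)

if __name__ == "__main__":
    history = [10.0, 12.5, 15.0, 12.5, 10.0, 12.5, 15.0, 12.5]  # toy series, period 4
    print(periodicity_probe(history))
    print(paraphrased_forecast_prompt(
        history, horizon=4,
        context="Hourly foot traffic at a store entrance."))
```

Any chat-completion endpoint can consume these strings; per the abstract's finding, the paraphrased, context-enriched prompt should tend to outperform feeding raw numbers alone, especially on series with clear periodicity.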
Related papers
- Enhancing Temporal Understanding in LLMs for Semi-structured Tables [50.59009084277447]
We conduct a comprehensive analysis of temporal datasets to pinpoint the specific limitations of large language models (LLMs).
Our investigation leads to enhancements in TempTabQA, a dataset specifically designed for temporal question answering.
We introduce a novel approach, C.L.E.A.R., to strengthen LLM capabilities in this domain.
arXiv Detail & Related papers (2024-07-22T20:13:10Z) - A Comprehensive Evaluation of Large Language Models on Temporal Event Forecasting [45.0261082985087]
We conduct a comprehensive evaluation of Large Language Models (LLMs) for temporal event forecasting.
We find that directly integrating raw texts into the input of LLMs does not enhance zero-shot extrapolation performance.
In contrast, incorporating raw texts in specific complex events and fine-tuning LLMs significantly improves performance.
arXiv Detail & Related papers (2024-07-16T11:58:54Z) - Are Language Models Actually Useful for Time Series Forecasting? [21.378728572776897]
Large language models (LLMs) are being applied to time series tasks, particularly time series forecasting.
We find that removing the LLM component or replacing it with a basic attention layer does not degrade the forecasting results.
We also find that pretrained LLMs do no better than models trained from scratch, do not represent the sequential dependencies in time series, and do not assist in few-shot settings.
arXiv Detail & Related papers (2024-06-22T03:33:38Z) - Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs acquire their general-purpose language understanding and generation abilities by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z) - Empowering Time Series Analysis with Large Language Models: A Survey [24.202539098675953]
We provide a systematic overview of methods that leverage large language models for time series analysis.
Specifically, we first state the challenges and motivations of applying language models in the context of time series.
Next, we categorize existing methods into different groups (i.e., direct query, tokenization, prompt design, fine-tune, and model integration) and highlight the key ideas within each group.
arXiv Detail & Related papers (2024-02-05T16:46:35Z) - AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
We propose AutoTimes, an autoregressive time series forecaster that independently projects time series segments into the embedding space and autoregressively generates future predictions of arbitrary length (a minimal sketch of this segment-wise scheme follows this list).
AutoTimes achieves state-of-the-art results with 0.1% trainable parameters and an over 5x training/inference speedup compared to advanced LLM-based forecasters.
arXiv Detail & Related papers (2024-02-04T06:59:21Z) - Are Large Language Models Temporally Grounded? [38.481606493496514]
We provide large language models (LLMs) with textual narratives and probe their common-sense knowledge of the structure and duration of events.
We evaluate state-of-the-art LLMs on three tasks reflecting these abilities.
arXiv Detail & Related papers (2023-11-14T18:57:15Z) - MenatQA: A New Dataset for Testing the Temporal Comprehension and
Reasoning Abilities of Large Language Models [17.322480769274062]
Large language models (LLMs) have shown nearly saturated performance on many natural language processing (NLP) tasks.
This paper constructs Multiple Sensitive Factors Time QA (MenatQA), with a total of 2,853 samples, for evaluating the temporal comprehension and reasoning abilities of LLMs.
arXiv Detail & Related papers (2023-10-08T13:19:52Z) - Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
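The AutoTimes entry above describes its architecture concretely enough to sketch: non-overlapping segments are projected into a decoder's embedding space as tokens, and future segments are generated one at a time. The toy PyTorch version below substitutes a small causal Transformer for the frozen LLM; the class name `SegmentAutoregressor`, the parameter choices, and the linear in/out projections are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SegmentAutoregressor(nn.Module):
    """Toy segment-wise autoregressive forecaster: each length-``seg_len``
    segment becomes one token in the embedding space, and future segments
    are decoded one at a time. A small causal Transformer stands in for
    the frozen LLM used by AutoTimes; all names here are illustrative."""

    def __init__(self, seg_len: int, d_model: int,
                 n_layers: int = 2, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(seg_len, d_model)   # segment -> "token" embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # LLM stand-in
        self.head = nn.Linear(d_model, seg_len)    # last hidden state -> next segment

    def forward(self, x: torch.Tensor, n_future_segments: int) -> torch.Tensor:
        # x: (batch, n_segments, seg_len), oldest segment first
        tokens = self.embed(x)
        preds = []
        for _ in range(n_future_segments):
            causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            h = self.backbone(tokens, mask=causal)
            next_seg = self.head(h[:, -1])          # decode the next segment
            preds.append(next_seg)
            # feed the prediction back in as the next token
            tokens = torch.cat([tokens, self.embed(next_seg).unsqueeze(1)], dim=1)
        return torch.stack(preds, dim=1)            # (batch, n_future, seg_len)

model = SegmentAutoregressor(seg_len=24, d_model=64)
history = torch.randn(8, 10, 24)                    # 8 series, 10 segments of 24 steps
forecast = model(history, n_future_segments=3)      # -> shape (8, 3, 24)
```

Training such a model would fit the two linear projections (and optionally the backbone) with a segment-level regression loss; in AutoTimes itself the LLM body stays frozen, which is presumably where the 0.1% trainable-parameter figure quoted above comes from.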