Empowering Time Series Analysis with Large Language Models: A Survey
- URL: http://arxiv.org/abs/2402.03182v1
- Date: Mon, 5 Feb 2024 16:46:35 GMT
- Title: Empowering Time Series Analysis with Large Language Models: A Survey
- Authors: Yushan Jiang, Zijie Pan, Xikun Zhang, Sahil Garg, Anderson Schneider,
Yuriy Nevmyvaka, Dongjin Song
- Abstract summary: We provide a systematic overview of methods that leverage large language models for time series analysis.
Specifically, we first state the challenges and motivations of applying language models in the context of time series.
Next, we categorize existing methods into different groups (i.e., direct query, tokenization, prompt design, fine-tuning, and model integration) and highlight the key ideas within each group.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, remarkable progress has been made on large language models (LLMs), demonstrating their unprecedented capability across a variety of natural language tasks. However, training a large general-purpose model from scratch is challenging for time series analysis, due to the large volume and variety of time series data, as well as the non-stationarity that leads to concept drift, impeding continuous model adaptation and re-training. Recent advances have shown that pre-trained LLMs can be exploited to capture complex dependencies in time series data and facilitate various applications. In this survey, we provide a systematic overview of existing methods that leverage LLMs for time series analysis. Specifically, we first state the challenges and motivations of applying language models in the context of time series, along with brief preliminaries of LLMs. Next, we summarize the general pipeline for LLM-based time series analysis, categorize existing methods into different groups (i.e., direct query, tokenization, prompt design, fine-tuning, and model integration), and highlight the key ideas within each group. We also discuss the applications of LLMs to both general and spatial-temporal time series data, tailored to specific domains. Finally, we thoroughly discuss future research opportunities to empower time series analysis with LLMs.
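The taxonomy above is easiest to ground with a small example. Below is a minimal, self-contained Python sketch of the simplest category, direct query, with fixed-precision text serialization standing in for the tokenization step; the `query_llm` callable, the prompt wording, and the parsing logic are illustrative assumptions, not an interface defined by the survey.

```python
# A minimal sketch of the "direct query" pattern: serialize a numeric
# series into text, ask a chat LLM to continue it, and parse the reply.
# `query_llm` is a placeholder (an assumption, not an API from the survey);
# plug in whatever chat-completion client you use.
from typing import Callable, List

def serialize(series: List[float], decimals: int = 2) -> str:
    # Fixed-precision, comma-separated values keep the token stream regular.
    return ", ".join(f"{x:.{decimals}f}" for x in series)

def build_prompt(series: List[float], horizon: int) -> str:
    return (
        "You are a forecasting assistant. Continue the numeric sequence.\n"
        f"History: {serialize(series)}\n"
        f"Output exactly {horizon} comma-separated numbers and nothing else."
    )

def parse_forecast(reply: str, horizon: int) -> List[float]:
    # Tolerate stray newlines; keep only the first `horizon` numbers.
    values = [float(tok) for tok in reply.replace("\n", ",").split(",") if tok.strip()]
    return values[:horizon]

def direct_query_forecast(series: List[float], horizon: int,
                          query_llm: Callable[[str], str]) -> List[float]:
    return parse_forecast(query_llm(build_prompt(series, horizon)), horizon)

if __name__ == "__main__":
    # Stand-in "LLM" that repeats the last value, so the sketch runs offline.
    dummy = lambda prompt: ", ".join(["4.20"] * 3)
    print(direct_query_forecast([4.10, 4.15, 4.20], horizon=3, query_llm=dummy))
```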
Related papers
- Towards Time Series Reasoning with LLMs
We propose a novel multi-modal time-series LLM approach that learns generalizable information across various domains with powerful zero-shot performance.
We show that our model learns a latent representation that reflects specific time-series features and outperforms GPT-4o on a set of zero-shot reasoning tasks.
arXiv: 2024-09-17
- Deep Time Series Models: A Comprehensive Survey and Benchmark
Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv: 2024-07-18
- Time Series Forecasting with LLMs: Understanding and Enhancing Model Capabilities
Large language models (LLMs) have been applied in many fields and have developed rapidly in recent years.
Recent works treat large language models as zero-shot time series reasoners without further fine-tuning.
Our study shows that LLMs perform well in predicting time series with clear patterns and trends, but face challenges with datasets lacking periodicity.
arXiv: 2024-02-16
- Position: What Can Large Language Models Tell Us about Time Series Analysis
We argue that current large language models (LLMs) have the potential to revolutionize time series analysis.
Such advancement could unlock a wide range of possibilities, including time series modality switching and question answering.
arXiv: 2024-02-05
- AutoTimes: Autoregressive Time Series Forecasters via Large Language Models
AutoTimes projects time series into the embedding space of language tokens and autoregressively generates future predictions of arbitrary length.
We formulate time series as prompts, extending the context for prediction beyond the lookback window.
AutoTimes achieves state-of-the-art results with 0.1% trainable parameters and over 5× training/inference speedup (a minimal sketch of the patch-projection idea follows this entry).
arXiv: 2024-02-04
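For concreteness, here is a hedged PyTorch sketch of the general patch-projection idea the AutoTimes entry describes: non-overlapping patches of the series are linearly mapped into a decoder's hidden space, and a linear head reads the next patch off the last position. `TinyDecoder`, the layer sizes, and the patch length are illustrative stand-ins (the real method uses a frozen pre-trained LLM), not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Stand-in for a frozen GPT-style backbone (an assumption, not AutoTimes' LLM)."""
    def __init__(self, d_model: int = 64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        # Causal mask: each patch position attends only to earlier patches.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        return self.blocks(x, mask=mask)

class PatchForecaster(nn.Module):
    def __init__(self, patch_len: int = 8, d_model: int = 64):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)  # series patch -> token-like embedding
        self.backbone = TinyDecoder(d_model)        # would be a frozen LLM in practice
        self.head = nn.Linear(d_model, patch_len)   # hidden state -> next-patch values

    def forward(self, series):                      # series: (batch, length)
        patches = series.unfold(1, self.patch_len, self.patch_len)  # (B, N, patch_len)
        hidden = self.backbone(self.embed(patches))                 # (B, N, d_model)
        return self.head(hidden[:, -1])             # next patch from the last position

history = torch.randn(2, 64)              # 2 series of 64 steps -> 8 patches each
next_patch = PatchForecaster()(history)   # (2, 8): the next 8 time steps per series
```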
- Large Language Models for Time Series: A Survey
Large Language Models (LLMs) have seen significant use in domains such as natural language processing and computer vision.
LLMs present significant potential for the analysis of time series data, benefiting domains such as climate, IoT, healthcare, traffic, audio, and finance.
arXiv: 2024-02-02
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art specialized forecasting models (a minimal sketch of the reprogramming idea follows this entry).
arXiv: 2023-10-03
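As a rough illustration of the reprogramming idea mentioned in the Time-LLM entry, the sketch below lets time-series patch embeddings cross-attend to a small bank of "text prototypes" built from stand-in word embeddings, so the patches are re-expressed in the language model's input space while the backbone itself could stay frozen. All names and sizes here are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

vocab_emb = torch.randn(1000, 64)            # stand-in for the LLM's frozen word embeddings
to_protos = nn.Linear(1000, 16, bias=False)  # learn 16 text prototypes as vocab mixtures
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
patch_proj = nn.Linear(8, 64)                # raw length-8 patch -> query embedding

series = torch.randn(2, 64)                  # (batch, length)
queries = patch_proj(series.unfold(1, 8, 8))       # (2, 8 patches, 64)
prototypes = to_protos(vocab_emb.T).T              # (16, 64) learned prototypes
protos = prototypes.unsqueeze(0).repeat(2, 1, 1)   # (2, 16, 64), one bank per batch item

# Patches attend to the prototypes: the output lives in the LLM's input space
# and could be fed to a frozen backbone, which is the "reprogramming" step.
reprogrammed, _ = attn(queries, protos, protos)    # (2, 8, 64)
```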
- Self-Supervised Learning for Time Series Analysis: Taxonomy, Progress, and Prospects
Self-supervised learning (SSL) has recently achieved impressive performance on various time series tasks.
This article reviews current state-of-the-art SSL methods for time series data.
arXiv: 2023-06-16