MultiCast: Zero-Shot Multivariate Time Series Forecasting Using LLMs
- URL: http://arxiv.org/abs/2405.14748v1
- Date: Thu, 23 May 2024 16:16:00 GMT
- Title: MultiCast: Zero-Shot Multivariate Time Series Forecasting Using LLMs
- Authors: Georgios Chatzigeorgakidis, Konstantinos Lentzos, Dimitrios Skoutas,
- Abstract summary: MultiCast is a zero-shot LLM-based approach for multivariate time series forecasting.
Three novel token multiplexing solutions effectively reduce dimensionality while preserving key repetitive patterns.
We showcase the performance of our approach in terms of RMSE and execution time against state-of-the-art approaches on three real-world datasets.
- Score: 0.8329456268842227
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Predicting future values in multivariate time series is vital across various domains. This work explores the use of large language models (LLMs) for this task. However, LLMs typically handle one-dimensional data. We introduce MultiCast, a zero-shot LLM-based approach for multivariate time series forecasting. It allows LLMs to receive multivariate time series as input, through three novel token multiplexing solutions that effectively reduce dimensionality while preserving key repetitive patterns. Additionally, a quantization scheme helps LLMs to better learn these patterns, while significantly reducing token use for practical applications. We showcase the performance of our approach in terms of RMSE and execution time against state-of-the-art approaches on three real-world datasets.
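The abstract does not spell out the three token multiplexing solutions or the exact quantization scheme, so the following is only a rough sketch of the general idea under stated assumptions: real values are uniformly binned into a small integer vocabulary, and the channels of a multivariate series are round-robin interleaved into a single one-dimensional token sequence that a text-only LLM can consume. The function names, the bin count, and the interleaving strategy are illustrative assumptions, not the paper's method.

```python
import numpy as np

def quantize(series: np.ndarray, n_bins: int = 100) -> np.ndarray:
    """Uniformly bin real values into integers 0..n_bins-1.
    (A simple stand-in; the paper's quantization scheme is not
    detailed in the abstract.)"""
    lo, hi = series.min(), series.max()
    scaled = (series - lo) / (hi - lo + 1e-12)
    return np.minimum((scaled * n_bins).astype(int), n_bins - 1)

def interleave_channels(quantized: np.ndarray) -> str:
    """One hypothetical 'token multiplexing' variant: round-robin
    interleave a (timesteps, channels) array into one flat token
    string, tagging each value with a channel letter (assumes at
    most 26 channels)."""
    timesteps, channels = quantized.shape
    tokens = []
    for t in range(timesteps):
        for c in range(channels):
            tokens.append(f"{chr(ord('a') + c)}{quantized[t, c]}")
    return " ".join(tokens)

# Example: a 3-channel series of length 4 becomes one flat prompt string
# that an LLM can extend to produce a zero-shot forecast.
rng = np.random.default_rng(0)
mv_series = rng.normal(size=(4, 3))
prompt = interleave_channels(quantize(mv_series))
print(prompt)  # channel-tagged integer tokens, e.g. "a<bin> b<bin> c<bin> ..."
```

Restricting values to a small integer vocabulary also keeps the prompt short, which is the token-saving effect the abstract attributes to quantization.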
Related papers
- Position: Empowering Time Series Reasoning with Multimodal LLMs [49.73647759532127]
We argue that multimodal large language models (MLLMs) can enable more powerful and flexible reasoning for time series analysis.
We call on researchers and practitioners to leverage this potential by developing strategies that prioritize trust, interpretability, and robust reasoning in MLLMs.
arXiv Detail & Related papers (2025-02-03T16:10:48Z)
- Large Language Models are Few-shot Multivariate Time Series Classifiers [23.045734479292356]
Large Language Models (LLMs) have been extensively applied in time series analysis.
Yet, their utility in few-shot classification (a crucial training scenario) remains underexplored.
We aim to leverage the extensive pre-trained knowledge in LLMs to overcome the data scarcity problem.
arXiv Detail & Related papers (2025-01-30T03:59:59Z)
- Using Pre-trained LLMs for Multivariate Time Series Forecasting [41.67881279885103]
Pre-trained Large Language Models (LLMs) encapsulate large amounts of knowledge and take enormous amounts of compute to train.
We make use of this resource, together with the observation that LLMs can transfer knowledge and performance from one domain, or even one modality, to another seemingly unrelated area.
arXiv Detail & Related papers (2025-01-10T23:30:23Z)
- LLM-Mixer: Multiscale Mixing in LLMs for Time Series Forecasting [0.08795040582681389]
LLM-Mixer is a framework that improves forecasting accuracy through the combination of multiscale time-series decomposition with pre-trained LLMs.
It captures both short-term fluctuations and long-term trends by decomposing the data into multiple temporal resolutions.
arXiv Detail & Related papers (2024-10-15T15:08:57Z)
- Towards Time Series Reasoning with LLMs [0.4369058206183195]
We propose a novel multi-modal time-series LLM approach that learns generalizable information across various domains with powerful zero-shot performance.
We show that our model learns a latent representation that reflects specific time-series features and outperforms GPT-4o on a set of zero-shot reasoning tasks.
arXiv Detail & Related papers (2024-09-17T17:23:44Z)
- SoupLM: Model Integration in Large Language and Multi-Modal Models [51.12227693121004]
Training large language models (LLMs) requires significant computing resources.
Existing publicly available LLMs are typically pre-trained on diverse, privately curated datasets spanning various tasks.
arXiv Detail & Related papers (2024-07-11T05:38:15Z)
- AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
AutoTimes projects time series into the embedding space of language tokens and autoregressively generates future predictions with arbitrary lengths.
We formulate time series as prompts, extending the context for prediction beyond the lookback window.
AutoTimes achieves state-of-the-art performance with 0.1% trainable parameters and over 5x training/inference speedup.
arXiv Detail & Related papers (2024-02-04T06:59:21Z)
- Large Language Models Are Zero-Shot Time Series Forecasters [48.73953666153385]
By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text.
We find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the performance of purpose-built time series models trained on the downstream tasks. A minimal sketch of this digit-string encoding appears after this list.
arXiv Detail & Related papers (2023-10-11T19:01:28Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- LLM-Pruner: On the Structural Pruning of Large Language Models [65.02607075556742]
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation.
We tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset.
Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures.
arXiv Detail & Related papers (2023-05-19T12:10:53Z)
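As a companion illustration of the digit-string encoding described in the entry above for "Large Language Models Are Zero-Shot Time Series Forecasters", here is a minimal, heavily simplified sketch: a univariate series is written out as comma-separated numbers so an LLM can continue it via next-token prediction, and the continuation is parsed back into floats. The exact rescaling, digit spacing, and decoding rules of that paper are not reproduced, and the LLM call is a hypothetical stand-in.

```python
def encode_series(values, decimals=1):
    """Render a univariate series as a plain numeric string prompt."""
    return ", ".join(f"{v:.{decimals}f}" for v in values)

def decode_continuation(text):
    """Parse the LLM's continuation back into floats (assumes the model
    keeps emitting comma-separated numbers)."""
    out = []
    for tok in text.split(","):
        tok = tok.strip()
        try:
            out.append(float(tok))
        except ValueError:
            break  # stop at the first non-numeric token
    return out

history = [21.3, 21.7, 22.0, 22.4, 22.9]
prompt = encode_series(history) + ", "   # "21.3, 21.7, 22.0, 22.4, 22.9, "
# completion = llm.generate(prompt)      # hypothetical LLM call
completion = "23.3, 23.8, 24.2"          # stand-in for the model's output
forecast = decode_continuation(completion)
print(forecast)                          # [23.3, 23.8, 24.2]
```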