PromptCast: A New Prompt-based Learning Paradigm for Time Series Forecasting
- URL: http://arxiv.org/abs/2210.08964v5
- Date: Sun, 10 Dec 2023 23:53:41 GMT
- Title: PromptCast: A New Prompt-based Learning Paradigm for Time Series Forecasting
- Authors: Hao Xue and Flora D. Salim
- Abstract summary: In existing time series forecasting methods, the models take a sequence of numerical values as input and yield numerical values as output.
Inspired by the successes of pre-trained language foundation models, we propose a new forecasting paradigm: prompt-based time series forecasting.
In this novel task, the numerical input and output are transformed into prompts and the forecasting task is framed in a sentence-to-sentence manner.
- Score: 11.670324826998968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a new perspective on time series forecasting. In existing
time series forecasting methods, the models take a sequence of numerical values
as input and yield numerical values as output. The existing SOTA models are
largely based on the Transformer architecture, modified with multiple encoding
mechanisms to incorporate the context and semantics around the historical data.
Inspired by the successes of pre-trained language foundation models, we pose a
question about whether these models can also be adapted to solve time-series
forecasting. Thus, we propose a new forecasting paradigm: prompt-based time
series forecasting (PromptCast). In this novel task, the numerical input and
output are transformed into prompts and the forecasting task is framed in a
sentence-to-sentence manner, making it possible to directly apply language
models for forecasting purposes. To support and facilitate the research of this
task, we also present a large-scale dataset (PISA) that includes three
real-world forecasting scenarios. We evaluate different SOTA numerical-based
forecasting methods and language generation models. The benchmark results with
various forecasting settings demonstrate the proposed PromptCast with language
generation models is a promising research direction. Additionally, in
comparison to conventional numerical-based forecasting, PromptCast shows a much
better generalization ability under the zero-shot setting.
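The core transformation can be sketched in a few lines: render the observed numerical window as an input sentence and the ground-truth next step as an output sentence, so a language model can be trained sentence-to-sentence. The template wording and the `make_prompt`/`make_target` helpers below are illustrative assumptions, not the exact templates used in the paper.

```python
# Minimal sketch of the PromptCast idea: a numerical forecasting instance
# becomes an input/output sentence pair. Template wording is illustrative.

def make_prompt(history, city="City A", unit="visitors"):
    """Render an observed window as an input prompt sentence."""
    days = len(history)
    values = ", ".join(str(v) for v in history)
    return (
        f"From day 1 to day {days}, there were {values} {unit} "
        f"in {city} on each day. How many {unit} will there be on day {days + 1}?"
    )

def make_target(next_value, unit="visitors"):
    """Render the ground-truth next step as an output sentence."""
    return f"There will be {next_value} {unit}."

prompt = make_prompt([320, 295, 410], city="City A")
target = make_target(388)
print(prompt)
print(target)
```

At inference time, the model's generated sentence is parsed back into a number, which is what allows standard sequence-to-sequence language models to be evaluated against numerical baselines.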
Related papers
- Prompt Mining for Language-based Human Mobility Forecasting [10.325794804095889]
We propose a novel framework for prompt mining in language-based mobility forecasting.
The framework includes a prompt generation stage based on the information entropy of prompts and a prompt refinement stage to integrate mechanisms such as the chain of thought.
arXiv Detail & Related papers (2024-03-06T08:43:30Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a masked encoder-based universal time series forecasting Transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- AutoTimes: Autoregressive Time Series Forecasters via Large Language Models [67.83502953961505]
We propose AutoTimes, autoregressive time series forecasters that independently project time series segments into the embedding space and autoregressively generate future predictions of arbitrary length.
AutoTimes achieves state-of-the-art performance with 0.1% trainable parameters and over 5x training/inference speedup compared to advanced LLM-based forecasters.
arXiv Detail & Related papers (2024-02-04T06:59:21Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- A decoder-only foundation model for time-series forecasting [23.824504640087753]
Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus.
It can work well across different forecasting history lengths, prediction lengths and temporal granularities.
arXiv Detail & Related papers (2023-10-14T17:01:37Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- Leveraging Language Foundation Models for Human Mobility Forecasting [8.422257363944295]
We propose a novel pipeline that leverages language foundation models for temporal sequential pattern mining.
We perform the forecasting task directly on the natural language input that includes all kinds of information.
Specific prompts are introduced to transform numerical temporal sequences into sentences so that existing language models can be directly applied.
arXiv Detail & Related papers (2022-09-11T01:15:16Z)
- A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis [90.24921443175514]
We focus on aspect-based sentiment analysis, which involves extracting aspect term, category, and predicting their corresponding polarities.
We propose to reformulate the extraction and prediction tasks into the sequence generation task, using a generative language model with unidirectional attention.
Our approach outperforms the previous state-of-the-art (based on BERT) on average performance by a large margin in few-shot and full-shot settings.
arXiv Detail & Related papers (2022-04-11T18:31:53Z)
- Forecast with Forecasts: Diversity Matters [9.66075743192747]
In recent years, the idea of using time series features to construct forecast combination models has flourished in the forecasting area.
In this work, we suggest a change of focus from the historical data to the produced forecasts to extract features.
We calculate the diversity of a pool of models based on the corresponding forecasts as a decisive feature and use meta-learning to construct diversity-based forecast combination models.
arXiv Detail & Related papers (2020-12-03T02:14:02Z)
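The diversity feature described above can be illustrated with a short sketch: one simple notion of diversity is the mean pairwise squared difference between the forecast vectors in the pool. The full method feeds such features into meta-learning; the `pairwise_diversity` and `combine` helpers below are illustrative assumptions showing only the diversity computation and a naive equal-weight combination.

```python
# Sketch: forecast-based diversity as the mean pairwise squared difference
# between forecast vectors, plus a naive equal-weight combination.
from itertools import combinations


def pairwise_diversity(forecasts):
    """Mean squared difference between every pair of forecast vectors."""
    pairs = list(combinations(forecasts, 2))
    total = 0.0
    for f, g in pairs:
        total += sum((a - b) ** 2 for a, b in zip(f, g)) / len(f)
    return total / len(pairs)


def combine(forecasts):
    """Simple average of the pool at each forecast horizon step."""
    horizon = len(forecasts[0])
    return [sum(f[h] for f in forecasts) / len(forecasts) for h in range(horizon)]


pool = [[10.0, 11.0], [12.0, 13.0], [11.0, 12.0]]  # three models, 2-step horizon
print(pairwise_diversity(pool))  # 2.0
print(combine(pool))             # [11.0, 12.0]
```

A low diversity score means the pool members largely agree, in which case combination offers little beyond any single model; a high score signals disagreement that a learned combination can exploit.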
This list is automatically generated from the titles and abstracts of the papers on this site.