VQ-AR: Vector Quantized Autoregressive Probabilistic Time Series Forecasting
- URL: http://arxiv.org/abs/2205.15894v1
- Date: Tue, 31 May 2022 15:43:46 GMT
- Title: VQ-AR: Vector Quantized Autoregressive Probabilistic Time Series Forecasting
- Authors: Kashif Rasul, Young-Jin Park, Max Nihlén Ramström, Kyung-Min Kim
- Abstract summary: Time series models aim for accurate predictions of the future given the past, where the forecasts are used for important downstream tasks like business decision making.
In this paper, we introduce a novel autoregressive architecture, VQ-AR, which instead learns a discrete set of representations that are used to predict the future.
- Score: 10.605719154114354
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series models aim for accurate predictions of the future given the past,
where the forecasts are used for important downstream tasks like business
decision making. In practice, deep learning based time series models come in
many forms, but at a high level learn some continuous representation of the
past and use it to output point or probabilistic forecasts. In this paper, we
introduce a novel autoregressive architecture, VQ-AR, which instead learns a
\emph{discrete} set of representations that are used to predict the future.
Extensive empirical comparison with other competitive deep learning models
shows that surprisingly such a discrete set of representations gives
state-of-the-art or equivalent results on a wide variety of time series
datasets. We also highlight the shortcomings of this approach, explore its
zero-shot generalization capabilities, and present an ablation study on the
number of representations. The full source code of the method will be available
at the time of publication with the hope that researchers can further
investigate this important but overlooked inductive bias for the time series
domain.
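The paper's code is promised only at publication time, but the central mechanism named in the abstract, quantizing a continuous representation against a learned codebook of discrete entries, can be sketched with a standard VQ-VAE-style bottleneck. The snippet below is a minimal illustration of that general technique in PyTorch; the codebook size, dimensionality, and straight-through estimator are common defaults, not details taken from VQ-AR itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ-VAE-style bottleneck: maps continuous encoder outputs
    to the nearest entry of a learned codebook (hypothetical sketch,
    not the exact VQ-AR implementation)."""

    def __init__(self, num_codes: int = 256, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment term

    def forward(self, z):                      # z: (batch, time, dim)
        flat = z.reshape(-1, z.size(-1))       # (batch*time, dim)
        # Squared L2 distance from each vector to every codebook entry.
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)                  # discrete code indices
        q = self.codebook(idx).view_as(z)      # quantized vectors
        # Codebook + commitment losses (standard VQ-VAE objective).
        loss = F.mse_loss(q, z.detach()) + self.beta * F.mse_loss(z, q.detach())
        # Straight-through estimator: gradients bypass the argmin.
        q = z + (q - z).detach()
        return q, idx.view(z.shape[:-1]), loss
```

An autoregressive model over the resulting integer code indices is then what supplies the discrete inductive bias the abstract argues for.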
Related papers
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for Time series based on Siamese networks.
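As a rough illustration of the Siamese idea, pre-training pairs can be drawn as two temporally shifted windows of the same series and passed through a shared encoder. The window length and lag range below are illustrative assumptions, not TimeSiam's published sampling scheme.

```python
import numpy as np

def sample_siamese_pair(series: np.ndarray, window: int = 96, max_lag: int = 48):
    """Sample a (past, current) window pair from one series (hypothetical
    pairing scheme for Siamese pre-training, not TimeSiam's exact recipe)."""
    t = np.random.randint(max_lag, len(series) - window)   # current window start
    lag = np.random.randint(1, max_lag + 1)                # temporal shift
    current = series[t : t + window]
    past = series[t - lag : t - lag + window]
    return past, current  # encode both with a shared (Siamese) encoder
```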
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
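One way to read "unified generative task" is next-token prediction over fixed-length patches of a series, so that forecasting, imputation, and anomaly scoring all reduce to generating tokens. The patching below is a generic sketch under that assumption, not Timer's published tokenization.

```python
import numpy as np

def to_patch_tokens(series: np.ndarray, patch_len: int = 24):
    """Split a 1-D series into consecutive patches ("tokens") for a
    decoder-only generative model (generic sketch, assumed tokenization)."""
    n = (len(series) // patch_len) * patch_len        # drop the ragged tail
    tokens = series[:n].reshape(-1, patch_len)        # (num_tokens, patch_len)
    inputs, targets = tokens[:-1], tokens[1:]         # next-token training pairs
    return inputs, targets
```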
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Contrastive Difference Predictive Coding [79.74052624853303]
We introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events.
We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL.
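The temporal-difference construction is involved, but the contrastive backbone it modifies is the standard InfoNCE objective: score a (state, future) pair against in-batch negatives. A generic InfoNCE sketch (not the paper's TD variant) follows.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, future: torch.Tensor, temperature: float = 0.1):
    """Standard InfoNCE: each anchor's positive is its own future embedding;
    all other futures in the batch act as negatives (generic sketch)."""
    anchor = F.normalize(anchor, dim=-1)          # (batch, dim)
    future = F.normalize(future, dim=-1)          # (batch, dim)
    logits = anchor @ future.t() / temperature    # (batch, batch) similarities
    labels = torch.arange(len(anchor), device=logits.device)  # diagonal positives
    return F.cross_entropy(logits, labels)
```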
arXiv Detail & Related papers (2023-10-31T03:16:32Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
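The name suggests that lagged values of the series serve as input covariates for the decoder-only transformer. The helper below builds such lag features; the specific lag set is an illustrative assumption, not the set used in the paper.

```python
import numpy as np

def lag_features(series: np.ndarray, lags=(1, 7, 24)):
    """Stack lagged copies of a series as input features (illustrative
    lag set; Lag-Llama's actual lag indices may differ)."""
    max_lag = max(lags)
    cols = [series[max_lag - lag : len(series) - lag] for lag in lags]
    x = np.stack(cols, axis=-1)        # (T - max_lag, num_lags)
    y = series[max_lag:]               # targets aligned with the features
    return x, y
```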
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- MPR-Net: Multi-Scale Pattern Reproduction Guided Universality Time Series Interpretable Forecasting [13.790498420659636]
Time series forecasting has received wide research interest due to its broad applications and inherent challenges.
This paper proposes a forecasting model, MPR-Net. It first adaptively decomposes multi-scale historical series patterns using a convolution operation, then constructs a pattern-extension forecasting method based on the prior knowledge of pattern reproduction, and finally reconstructs future patterns into the future series using a deconvolution operation.
By leveraging the temporal dependencies present in the time series, MPR-Net not only achieves linear time complexity, but also makes the forecasting process interpretable.
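Reading the summary literally, patterns are extracted with a convolution and the future series is rebuilt with a deconvolution. The sketch below wires up that conv/deconv pair in PyTorch as a structural illustration only; the pattern-extension step in between is specific to the paper and omitted here.

```python
import torch
import torch.nn as nn

class ConvDeconvSketch(nn.Module):
    """Convolutional pattern extraction followed by deconvolutional
    reconstruction (structural sketch of the summary, not MPR-Net itself)."""

    def __init__(self, kernel: int = 8, channels: int = 16):
        super().__init__()
        # Stride = kernel: each output step summarizes one non-overlapping patch.
        self.decompose = nn.Conv1d(1, channels, kernel, stride=kernel)
        self.reconstruct = nn.ConvTranspose1d(channels, 1, kernel, stride=kernel)

    def forward(self, x):                  # x: (batch, 1, length)
        patterns = self.decompose(x)       # (batch, channels, length // kernel)
        # A real model would extend `patterns` along time before decoding.
        return self.reconstruct(patterns)  # (batch, 1, length)
```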
arXiv Detail & Related papers (2023-07-13T13:16:01Z)
- Ripple: Concept-Based Interpretation for Raw Time Series Models in Education [5.374524134699487]
Time series is the most prevalent form of input data for educational prediction tasks.
We propose an approach that utilizes irregular multivariate time series modeling with graph neural networks to achieve comparable or better accuracy.
We analyze these advances in the education domain, addressing the task of early student performance prediction.
arXiv Detail & Related papers (2022-12-02T12:26:00Z)
- Split Time Series into Patches: Rethinking Long-term Series Forecasting with Dateformer [17.454822366228335]
Time is one of the most significant characteristics of time series, yet it has received insufficient attention.
We propose Dateformer, which turns attention to modeling time itself instead of following the above practice.
Dateformer yields state-of-the-art accuracy with a remarkable 40% relative improvement, and broadens the maximum credible forecasting range to a half-yearly level.
arXiv Detail & Related papers (2022-07-12T08:58:44Z)
- Series Saliency: Temporal Interpretation for Multivariate Time Series Forecasting [30.054015098590874]
We present the series saliency framework for temporal interpretation of time series forecasting.
By extracting "series images" from sliding windows of the time series, we apply saliency map segmentation.
Our framework generates temporal interpretations for the forecasting task while producing accurate forecasts.
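Per the summary, the "series images" are 2-D slices taken from sliding windows over a multivariate series; extracting them is nearly a one-liner with numpy, as sketched below (the window length is an illustrative choice).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def series_images(series: np.ndarray, window: int = 32):
    """Turn a (time, variables) series into a stack of (window, variables)
    "images", one per sliding window (illustrative window length)."""
    # sliding_window_view yields (time - window + 1, variables, window);
    # transpose so each slice reads as a (window, variables) image.
    return sliding_window_view(series, window, axis=0).transpose(0, 2, 1)
```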
arXiv Detail & Related papers (2020-12-16T23:48:00Z)
- timeXplain -- A Framework for Explaining the Predictions of Time Series Classifiers [3.6433472230928428]
We present novel domain mappings for the time domain, frequency domain, and time series statistics.
We analyze their explicative power as well as their limits.
We employ a novel evaluation metric to experimentally compare timeXplain to several model-specific explanation approaches.
arXiv Detail & Related papers (2020-07-15T10:32:43Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
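MHP-style training lets a model emit K hypotheses and backpropagates mainly through the one closest to the ground truth. The relaxed winner-takes-all loss below is a commonly used form of that idea, sketched here rather than the paper's exact sequential extension.

```python
import torch

def relaxed_wta_loss(preds: torch.Tensor, target: torch.Tensor, eps: float = 0.05):
    """Relaxed winner-takes-all loss over K hypotheses (common MHP
    formulation, sketch). preds: (batch, K, dim), target: (batch, dim).
    The best hypothesis gets weight 1 - eps; the rest share eps."""
    errors = ((preds - target.unsqueeze(1)) ** 2).mean(dim=-1)  # (batch, K)
    k = preds.size(1)
    weights = torch.full_like(errors, eps / (k - 1))
    best = errors.argmin(dim=1, keepdim=True)
    weights.scatter_(1, best, 1.0 - eps)
    return (weights * errors).sum(dim=1).mean()
```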
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.