Few-Shot Forecasting of Time-Series with Heterogeneous Channels
- URL: http://arxiv.org/abs/2204.03456v1
- Date: Thu, 7 Apr 2022 14:02:15 GMT
- Title: Few-Shot Forecasting of Time-Series with Heterogeneous Channels
- Authors: Lukas Brinkmeyer, Rafael Rego Drumond, Johannes Burchert, and Lars Schmidt-Thieme
- Abstract summary: We develop a model composed of permutation-invariant deep set-blocks which incorporate a temporal embedding.
We show through experiments that our model generalizes well, outperforming baselines carried over from simpler scenarios.
- Score: 4.635820333232681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning complex time series forecasting models usually requires a large
amount of data, as each model is trained from scratch for each task/data set.
Leveraging learning experience with similar datasets is a well-established
technique for classification problems called few-shot classification. However,
existing approaches cannot be applied to time-series forecasting because i)
multivariate time-series datasets have different channels and ii) forecasting
is fundamentally different from classification. In this paper we formalize the
problem of few-shot forecasting of time-series with heterogeneous channels for
the first time. Extending recent work on heterogeneous attributes in vector
data, we develop a model composed of permutation-invariant deep set-blocks
which incorporate a temporal embedding. We assemble the first meta-dataset of
40 multivariate time-series datasets and show through experiments that our
model generalizes well, outperforming baselines carried over from
simpler scenarios that either fail to learn across tasks or miss temporal
information.
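To make the architecture concrete, here is a minimal sketch, in PyTorch and under our own assumptions about layer sizes and pooling (it is not the authors' code), of a permutation-invariant deep set-block with a learned temporal embedding:

```python
import torch
import torch.nn as nn

class DeepSetBlock(nn.Module):
    """Encodes a variable number of channels permutation-invariantly."""

    def __init__(self, seq_len: int, hidden: int = 64):
        super().__init__()
        # Learned per-time-step embedding (an assumption; sinusoidal
        # embeddings would serve the same purpose).
        self.time_emb = nn.Parameter(torch.randn(seq_len, hidden))
        self.phi = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, seq_len); the channel count may differ per task.
        b, c, t = x.shape
        emb = self.time_emb.expand(b, c, t, -1)
        # Encode each channel independently with shared weights ...
        h = self.phi(torch.cat([x.unsqueeze(-1), emb], dim=-1))
        # ... then sum over the channel axis: invariant to channel order.
        return self.rho(h.sum(dim=1)).squeeze(-1)  # (batch, seq_len)

out = DeepSetBlock(seq_len=24)(torch.randn(8, 5, 24))  # any number of channels
```

Because the sum over the channel axis is symmetric, the same weights apply regardless of how many channels a task has or how they are ordered, which is what makes training across datasets with heterogeneous channels possible.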
Related papers
- DAM: Towards A Foundation Model for Time Series Forecasting [0.8231118867997028]
We propose a neural model that takes randomly sampled histories and outputs an adjustable basis composition as a continuous function of time.
It involves three key components: (1) a flexible approach for using randomly sampled histories from a long-tail distribution; (2) a transformer backbone trained on these actively sampled histories to produce, as representational output, (3) the basis coefficients of a continuous function of time.
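The basis-composition idea can be pictured with a toy example; the sinusoidal basis and the random coefficients standing in for the backbone's output are illustrative assumptions, not DAM's actual design:

```python
import numpy as np

def basis(t: np.ndarray, n_freqs: int = 4) -> np.ndarray:
    # Columns: [1, sin(2*pi*f*t), cos(2*pi*f*t)] for f = 1..n_freqs.
    cols = [np.ones_like(t)]
    for f in range(1, n_freqs + 1):
        cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
    return np.stack(cols, axis=-1)

coeffs = np.random.randn(1 + 2 * 4)   # would come from the transformer backbone
query_t = np.linspace(0.0, 2.0, 50)   # any horizon, any resolution
forecast = basis(query_t) @ coeffs    # a continuous function of time
```

Because the forecast is a function of continuous time, it can be queried at any horizon and resolution.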
arXiv Detail & Related papers (2024-07-25T08:48:07Z)
- UniCL: A Universal Contrastive Learning Framework for Large Time Series Models [18.005358506435847]
Time-series analysis plays a pivotal role across a range of critical applications, from finance to healthcare.
Traditional supervised learning methods require extensive labeled time-series data for each task.
This paper introduces UniCL, a universal and scalable contrastive learning framework designed for pretraining time-series foundation models.
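The generic contrastive objective that such pretraining builds on can be sketched as an InfoNCE loss over two augmented views of each series; UniCL's actual augmentations and loss design are more elaborate:

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temp: float = 0.1) -> torch.Tensor:
    # z1, z2: (batch, dim) embeddings of two augmented views of each series.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temp            # pairwise similarities
    labels = torch.arange(z1.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```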
arXiv Detail & Related papers (2024-05-17T07:47:11Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
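A loose sketch of masked-forecasting pretraining, the pattern behind a masked universal forecaster: replace the horizon with a learned mask token and let an encoder fill it in. Patching, positional encodings, and Moirai's probabilistic output head are omitted here:

```python
import torch
import torch.nn as nn

class MaskedForecaster(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.proj_in = nn.Linear(1, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.proj_out = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor, horizon: int) -> torch.Tensor:
        # x: (batch, context_len, 1); forecast the next `horizon` steps.
        tok = self.proj_in(x)
        mask = self.mask_token.expand(x.size(0), horizon, -1)
        h = self.encoder(torch.cat([tok, mask], dim=1))
        return self.proj_out(h[:, -horizon:])  # predictions for masked slots

pred = MaskedForecaster()(torch.randn(4, 48, 1), horizon=12)  # (4, 12, 1)
```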
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
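One way to read the Siamese setup: sample a past subseries and a temporally distant current subseries from the same record and encode both with shared weights. The GRU encoder and window choices below are illustrative assumptions:

```python
import torch
import torch.nn as nn

encoder = nn.GRU(input_size=1, hidden_size=64, batch_first=True)  # shared weights

series = torch.randn(16, 200, 1)
past = series[:, 0:96]        # "past" subseries
curr = series[:, 100:196]     # temporally distant "current" subseries
_, h_past = encoder(past)
_, h_curr = encoder(curr)     # same encoder -> Siamese branches
sim = nn.functional.cosine_similarity(h_past[-1], h_curr[-1], dim=-1)
```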
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
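The unification can be pictured as placing generation slots at different positions of a single sequence, so one generative model serves all three tasks; the NaN-mask tokenization below is a simplified assumption:

```python
import numpy as np

MASK = np.nan  # placeholder for positions the model must generate

def to_generative_task(series: np.ndarray, task: str) -> np.ndarray:
    s = series.astype(float).copy()
    if task == "forecast":
        s[-12:] = MASK    # generate the horizon at the end
    elif task == "impute":
        s[40:52] = MASK   # generate a missing middle segment
    elif task == "anomaly":
        s[-1] = MASK      # generate the next point; a large gap between
                          # generation and observation flags an anomaly
    return s

x = to_generative_task(np.random.randn(100), "impute")
```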
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
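As the name suggests, Lag-Llama builds its inputs from lagged values of the series; a minimal sketch of lag featurization, with an arbitrary example lag set:

```python
import numpy as np

def lag_features(series: np.ndarray, lags=(1, 7, 14, 28)) -> np.ndarray:
    # Returns a (len(series) - max(lags), 1 + len(lags)) token matrix:
    # each row is the current value plus its values at the given lags.
    m = max(lags)
    cols = [series[m:]] + [series[m - l:-l] for l in lags]
    return np.stack(cols, axis=-1)

tokens = lag_features(np.random.randn(365))  # feed to a causal transformer
```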
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids error accumulation without increasing time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
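A minimal sketch of the non-autoregressive idea: emit the whole horizon in one shot by transforming base noise with parameters conditioned on the history, so no error feeds back step by step. The single conditional affine layer below stands in for MANF's multi-scale attention flow:

```python
import torch
import torch.nn as nn

class OneShotFlow(nn.Module):
    def __init__(self, context_len: int, horizon: int, hidden: int = 64):
        super().__init__()
        self.cond = nn.Sequential(nn.Linear(context_len, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2 * horizon))
        self.horizon = horizon

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # Condition shift and scale on the encoded history.
        shift, log_scale = self.cond(history).chunk(2, dim=-1)
        z = torch.randn(history.size(0), self.horizon)  # base noise
        return shift + log_scale.exp() * z              # all steps at once

sample = OneShotFlow(context_len=48, horizon=12)(torch.randn(4, 48))
```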
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- MECATS: Mixture-of-Experts for Quantile Forecasts of Aggregated Time Series [11.826510794042548]
We introduce a mixture of heterogeneous experts framework called MECATS.
It simultaneously forecasts the values of a set of time series that are related through an aggregation hierarchy.
Different types of forecasting models can be employed as individual experts so that the form of each model can be tailored to the nature of the corresponding time series.
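A toy sketch of a heterogeneous mixture of experts: each expert is a different model family and gate weights blend their forecasts. The experts and fixed weights below are illustrative; MECATS learns the gate and additionally enforces the aggregation hierarchy, both omitted here:

```python
import numpy as np

def expert_mean(history):   return np.full(12, history.mean())
def expert_naive(history):  return np.full(12, history[-1])
def expert_drift(history):
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * np.arange(1, 13)

experts = [expert_mean, expert_naive, expert_drift]
history = np.random.randn(100).cumsum()
weights = np.array([0.2, 0.3, 0.5])  # would come from a learned gate
forecast = sum(w * e(history) for w, e in zip(weights, experts))
```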
arXiv Detail & Related papers (2021-12-22T05:05:30Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
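The three stages, instantiated with simple AR(1) models as the abstract suggests, might look roughly like this; the clustering features and the blending step are our assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def ar1_forecast(x):
    # Fit an AR(1) coefficient by least squares and predict one step ahead.
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    return phi * x[-1]

series = np.random.randn(50, 100)                             # 50 series
labels = KMeans(n_clusters=5, n_init=10).fit_predict(series)  # stage 1: cluster
cluster_fc = {k: ar1_forecast(series[labels == k].mean(0))    # stage 2: forecast
              for k in range(5)}                              #   cluster aggregates
per_series = np.array([0.5 * ar1_forecast(s) + 0.5 * cluster_fc[l]
                       for s, l in zip(series, labels)])      # stage 3: refine
```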
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Monash Time Series Forecasting Archive [6.0617755214437405]
We present a comprehensive time series forecasting archive containing 20 publicly available time series datasets from varied domains.
We characterise the datasets, and identify similarities and differences among them, by conducting a feature analysis.
We present the performance of a set of standard baseline forecasting methods over all datasets across eight error metrics.
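One standard error metric such archives report is MASE, which scales the forecast error by the in-sample error of the seasonal naive method; a minimal reference implementation (the archive's exact evaluation setup may differ):

```python
import numpy as np

def mase(y_true, y_pred, y_train, season: int = 1) -> float:
    # Scale by the mean absolute error of the seasonal naive forecast
    # computed on the training series.
    scale = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return float(np.mean(np.abs(y_true - y_pred)) / scale)

train, test = np.random.randn(100), np.random.randn(12)
print(mase(test, np.full(12, train[-1]), train))
```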
arXiv Detail & Related papers (2021-05-14T04:49:58Z)
- Global Models for Time Series Forecasting: A Simulation Study [2.580765958706854]
We simulate time series from simple data generating processes (DGP), such as Auto Regressive (AR) and Seasonal AR, to complex DGPs, such as Chaotic Logistic Map, Self-Exciting Threshold Auto-Regressive, and Mackey-Glass equations.
The lengths and the number of series in the dataset are varied in different scenarios.
We perform experiments on these datasets using global forecasting models including Recurrent Neural Networks (RNN), Feed-Forward Neural Networks, Pooled Regression (PR) models, and Light Gradient Boosting Models (LGBM).
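Simulating the simplest DGP mentioned, an AR(1) process, might look like this; the coefficient and noise scale are arbitrary choices:

```python
import numpy as np

def simulate_ar1(n: int, phi: float = 0.8, sigma: float = 1.0,
                 rng=np.random.default_rng(0)) -> np.ndarray:
    # x[t] = phi * x[t-1] + eps, eps ~ N(0, sigma^2)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

panel = np.stack([simulate_ar1(200) for _ in range(10)])  # 10 series, length 200
```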
arXiv Detail & Related papers (2020-12-23T04:45:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.