Few-shot Learning for Time-series Forecasting
- URL: http://arxiv.org/abs/2009.14379v1
- Date: Wed, 30 Sep 2020 01:32:22 GMT
- Title: Few-shot Learning for Time-series Forecasting
- Authors: Tomoharu Iwata, Atsutoshi Kumagai
- Abstract summary: We propose a few-shot learning method that forecasts future values of a time-series in a target task given only a few time-series from that task.
Our model is trained using time-series data in multiple training tasks that are different from target tasks.
- Score: 40.58524521473793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-series forecasting is important for many applications. Forecasting
models are usually trained using time-series data in a specific target task.
However, sufficient data in the target task might be unavailable, which leads
to performance degradation. In this paper, we propose a few-shot learning
method that forecasts a future value of a time-series in a target task given a
few time-series in the target task. Our model is trained using time-series data
in multiple training tasks that are different from target tasks. Our model uses
a few time-series to build a forecasting function based on a recurrent neural
network with an attention mechanism. With the attention mechanism, we can
retrieve useful patterns in a small number of time-series for the current
situation. Our model is trained by minimizing the expected test error of
forecasting next-timestep values. We demonstrate the effectiveness of the
proposed method using 90 time-series datasets.
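The abstract describes the mechanism only at a high level. The sketch below is a minimal illustration of the idea, not the authors' code: the RNN weights are random rather than episodically trained, and all names and sizes are made-up assumptions. It shows how a query's RNN state can attend over the states of a few support series from the target task to retrieve a next-step forecast.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16  # hidden state size (illustrative)

# Toy tanh RNN cell with random, untrained weights. The paper instead trains
# the whole model by minimizing expected next-step test error across tasks.
wx = rng.normal(scale=0.3, size=H)
Wh = rng.normal(scale=0.3, size=(H, H))

def encode(series):
    """Run the RNN over a 1-D series and return the hidden state at each step."""
    h, states = np.zeros(H), []
    for x in series:
        h = np.tanh(x * wx + Wh @ h)
        states.append(h)
    return np.array(states)  # shape (T, H)

def few_shot_forecast(support_series, query):
    """Forecast the query's next value by attending over the support series:
    each support hidden state is a key whose value is the observed next point."""
    keys, vals = [], []
    for s in support_series:
        states = encode(s)
        keys.append(states[:-1])   # state at time t ...
        vals.append(s[1:])         # ... paired with the value at time t + 1
    K, v = np.vstack(keys), np.concatenate(vals)
    q = encode(query)[-1]          # representation of the current situation
    w = np.exp(K @ q / np.sqrt(H))
    w /= w.sum()                   # attention weights over support patterns
    return float(w @ v)            # retrieved next-step forecast

# Two short support series from the "target task", plus a query to forecast.
support = [np.sin(np.linspace(0, 4, 30)), np.sin(np.linspace(1, 5, 30))]
query = np.sin(np.linspace(2, 5.5, 20))
print(few_shot_forecast(support, query))
```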
Related papers
- Beyond Data Scarcity: A Frequency-Driven Framework for Zero-Shot Forecasting [15.431513584239047]
Time series forecasting is critical in numerous real-world applications.
Traditional forecasting techniques struggle when data is scarce or not available at all.
Recent advancements often leverage large-scale foundation models for such tasks.
arXiv Detail & Related papers (2024-11-24T07:44:39Z)
- Chronos: Learning the Language of Time Series [79.38691251254173]
Chronos is a framework for pretrained probabilistic time series models.
We show that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks.
arXiv Detail & Related papers (2024-03-12T16:53:54Z)
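Chronos pretrains language models on time series by first mapping real values into a fixed token vocabulary via mean scaling and uniform quantisation. The sketch below illustrates only that tokenisation idea; the bin count, value range, and function names are illustrative assumptions, not the Chronos API.

```python
import numpy as np

def tokenize(series, n_bins=64, low=-5.0, high=5.0):
    # Scale by mean absolute value, then quantise into uniform bins so the
    # series becomes a token sequence a language model can be trained on.
    # Bin count and range are assumptions, not Chronos's actual settings.
    series = np.asarray(series, dtype=float)
    scale = np.mean(np.abs(series)) or 1.0
    edges = np.linspace(low, high, n_bins - 1)
    tokens = np.digitize(series / scale, edges)  # ids in [0, n_bins - 1]
    return tokens, scale

def detokenize(tokens, scale, n_bins=64, low=-5.0, high=5.0):
    # Map token ids back to representative bin values and undo the scaling.
    edges = np.linspace(low, high, n_bins - 1)
    centres = np.concatenate([[low], (edges[:-1] + edges[1:]) / 2, [high]])
    return centres[tokens] * scale

x = 10 * np.sin(np.linspace(0, 6, 50))
tokens, scale = tokenize(x)
print(tokens[:10])
print(detokenize(tokens, scale)[:3].round(2))
```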
- Probing the Robustness of Time-series Forecasting Models with CounterfacTS [1.823020744088554]
We present and publicly release CounterfacTS, a tool to probe the robustness of deep learning models in time-series forecasting tasks.
CounterfacTS has a user-friendly interface that allows the user to visualize, compare and quantify time series data and their forecasts.
arXiv Detail & Related papers (2024-03-06T07:34:47Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
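The Timer summary above mentions converting forecasting, imputation, and anomaly detection into a unified generative task. The sketch below illustrates that reduction with a trivial autoregressive stand-in for the generative model; nothing here is Timer's architecture or API.

```python
import numpy as np

def generate(history, n):
    # Stand-in generator: a least-squares AR(2) model fit to the history.
    # Timer uses a generative pre-trained transformer here; this placeholder
    # only shows how one "continue the series" primitive serves three tasks.
    p = 2
    X = np.column_stack([history[i:len(history) - p + i] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, history[p:], rcond=None)
    out = list(history)
    for _ in range(n):
        out.append(float(coef @ out[-p:]))
    return np.array(out[len(history):])

def forecast(x, horizon):
    # Forecasting: generate values after the observed series.
    return generate(x, horizon)

def impute(x, gap_start, gap_len):
    # Imputation: generate the gap from its prefix (a real generative model
    # can condition on both sides of the gap; this toy one cannot).
    return generate(x[:gap_start], gap_len)

def anomaly_scores(x, context=20):
    # Anomaly detection: score each point by its deviation from what the
    # generator expected given the preceding context.
    preds = [generate(x[:t], 1)[0] for t in range(context, len(x))]
    return np.abs(x[context:] - np.array(preds))

x = np.sin(0.2 * np.arange(80))
print(forecast(x, 3))
print(impute(x, 40, 3))
print(anomaly_scores(x)[:3])
```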
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained model is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets is a comprehensive suite of benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- An End-to-End Time Series Model for Simultaneous Imputation and Forecast [14.756607742477252]
We develop an end-to-end time series model that aims to learn the inference relation and make multiple-step-ahead forecasts.
Our framework jointly trains two neural networks: one learns feature-wise correlations and the other models temporal behavior.
arXiv Detail & Related papers (2023-06-01T15:08:22Z)
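As a rough illustration of the two-network design described above, the sketch below wires an untrained, random-weight feature-correlation network into a temporal RNN; all shapes and names are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 4, 16  # number of features, hidden size (illustrative)

# Random stand-ins for the two jointly trained networks; in the paper both
# are learned end-to-end, here random weights just show the data flow.
W_feat = rng.normal(scale=0.3, size=(D, D))   # feature-correlation network
W_in = rng.normal(scale=0.3, size=(D, H))     # temporal network (tanh RNN)
W_h = rng.normal(scale=0.3, size=(H, H))
W_out = rng.normal(scale=0.3, size=(H, D))

def impute_step(x_t, mask_t):
    """Fill missing features from the observed ones via the feature network."""
    filled = np.tanh(np.where(mask_t, x_t, 0.0) @ W_feat)
    return np.where(mask_t, x_t, filled)

def impute_and_forecast(X, M, horizon):
    """X: (T, D) series with gaps, M: (T, D) observation mask.
    Imputes gaps feature-wise, then rolls the RNN forward to forecast."""
    h = np.zeros(H)
    for x_t, m_t in zip(X, M):
        h = np.tanh(impute_step(x_t, m_t) @ W_in + h @ W_h)
    preds = []
    for _ in range(horizon):
        y = np.tanh(h @ W_out)            # next-step estimate, all features
        h = np.tanh(y @ W_in + h @ W_h)   # feed the estimate back in
        preds.append(y)
    return np.array(preds)                # (horizon, D)

X = rng.normal(size=(50, D))
M = rng.random((50, D)) > 0.2             # roughly 20% of entries missing
print(impute_and_forecast(X, M, 3).shape)
```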
- A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis [90.24921443175514]
We focus on aspect-based sentiment analysis, which involves extracting aspect terms and categories and predicting their corresponding polarities.
We propose to reformulate the extraction and prediction tasks as a sequence generation task, using a generative language model with unidirectional attention.
Our approach outperforms the previous state-of-the-art (based on BERT) by a large margin on average performance in both few-shot and full-shot settings.
arXiv Detail & Related papers (2022-04-11T18:31:53Z)
- Meta-Learning for Koopman Spectral Analysis with Short Time-series [49.41640137945938]
Existing methods require long time-series for training neural networks.
We propose a meta-learning method for estimating embedding functions from unseen short time-series.
We experimentally demonstrate that the proposed method achieves better performance in terms of eigenvalue estimation and future prediction.
arXiv Detail & Related papers (2021-02-09T07:19:19Z)
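For context on what "eigenvalue estimation and future prediction" means here, the sketch below performs standard Koopman spectral analysis via dynamic mode decomposition on a delay-embedded series. This is the classical baseline, not the paper's meta-learned method, and the embedding depth is an arbitrary assumption.

```python
import numpy as np

def delay_embed(x, d):
    # Hankel (delay) embedding: turn a scalar series into d-dimensional states.
    return np.column_stack([x[i:len(x) - d + i + 1] for i in range(d)])

def dmd(x, d=8):
    Z = delay_embed(x, d).T            # (d, T - d + 1) state snapshots
    X1, X2 = Z[:, :-1], Z[:, 1:]
    A = X2 @ np.linalg.pinv(X1)        # best linear one-step operator
    return A, np.linalg.eigvals(A)     # eigenvalues estimate the spectrum

x = np.sin(0.3 * np.arange(100))
A, eigenvalues = dmd(x)
top = eigenvalues[np.argsort(-np.abs(eigenvalues))][:2]
print("leading eigenvalues:", np.round(top, 3))
z_last = delay_embed(x, 8).T[:, -1]    # most recent embedded state
print("one-step forecast:", (A @ z_last)[-1])
```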
- Time-series Imputation and Prediction with Bi-Directional Generative Adversarial Networks [0.3162999570707049]
We present a model for the combined task of imputing and predicting values for irregularly observed and varying length time-series data with missing entries.
Our model learns how to impute missing elements in-between (imputation) or outside of the input time steps (prediction), hence working as an effective any-time prediction tool for time-series data.
arXiv Detail & Related papers (2020-09-18T15:47:51Z)
- Zero-shot and few-shot time series forecasting with ordinal regression recurrent neural networks [17.844338213026976]
Recurrent neural networks (RNNs) are state-of-the-art in several sequential learning tasks, but they often require considerable amounts of data to generalise well.
We propose a novel RNN-based model that directly addresses this problem by learning a shared feature embedding over the space of many quantised time series.
We show how this enables our RNN framework to accurately and reliably forecast unseen time series, even when there is little to no training data available.
arXiv Detail & Related papers (2020-03-26T21:33:10Z)
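A toy sketch of the quantisation and ordinal encoding the last summary refers to: a real-valued series is mapped onto ordered levels, and each level becomes a cumulative binary target as in standard ordinal-regression practice. Level counts and names are assumptions, not taken from the paper.

```python
import numpy as np

def quantise(x, n_levels=10):
    # Map a real-valued series onto n_levels ordered levels via quantiles.
    qs = np.quantile(x, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(x, qs)  # integers in [0, n_levels - 1]

def ordinal_targets(levels, n_levels=10):
    # Standard ordinal-regression encoding: level k becomes k leading ones,
    # so a model predicts P(level > j) for each threshold j.
    return (levels[:, None] > np.arange(n_levels - 1)[None, :]).astype(float)

x = np.cumsum(np.random.default_rng(0).normal(size=100))
levels = quantise(x)
print(levels[:8])
print(ordinal_targets(levels)[:2])
```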
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.