Zero-shot and few-shot time series forecasting with ordinal regression recurrent neural networks
- URL: http://arxiv.org/abs/2003.12162v1
- Date: Thu, 26 Mar 2020 21:33:10 GMT
- Title: Zero-shot and few-shot time series forecasting with ordinal regression recurrent neural networks
- Authors: Bernardo Pérez Orozco and Stephen J Roberts
- Abstract summary: Recurrent neural networks (RNNs) are state-of-the-art in several sequential learning tasks, but they often require considerable amounts of data to generalise well.
We propose a novel RNN-based model that directly addresses this problem by learning a shared feature embedding over the space of many quantised time series.
We show how this enables our RNN framework to accurately and reliably forecast unseen time series, even when there is little to no training data available.
- Score: 17.844338213026976
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recurrent neural networks (RNNs) are state-of-the-art in several sequential learning tasks, but they often require considerable amounts of data to generalise well. For many time series forecasting (TSF) tasks, only a few dozen observations may be available at training time, which restricts the use of this class of models. We propose a novel RNN-based model that directly addresses this problem by learning a shared feature embedding over the space of many quantised time series. We show how this enables our RNN framework to accurately and reliably forecast unseen time series, even when there is little to no training data available.
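The abstract's two core ingredients are quantising each series into ordinal bins and training an RNN over many such series, so that a shared embedding transfers to series unseen at training time. Below is a minimal sketch of that idea; the GRU backbone, layer sizes, and training loop are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

# Hedged sketch: quantise a real-valued series into K ordinal bins, then
# train an RNN to predict the next bin as a classification problem. Training
# this way over many quantised series yields one shared embedding space,
# which is what enables forecasting series unseen at training time.

def quantise(series: torch.Tensor, k: int = 32) -> torch.Tensor:
    """Map a real-valued series to integer bin indices in [0, k)."""
    lo, hi = series.min().item(), series.max().item()
    edges = torch.linspace(lo, hi, k + 1)[1:-1]     # k - 1 interior edges
    return torch.bucketize(series, edges)           # (T,) long tensor

class OrdinalRNN(nn.Module):
    def __init__(self, k: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(k, hidden)        # shared feature embedding
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, k)            # logits over ordinal bins

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens))         # tokens: (B, T) long
        return self.head(h)                         # (B, T, k)

# Toy next-step training update on a single quantised series.
series = torch.sin(torch.linspace(0, 12, 200))
tokens = quantise(series).unsqueeze(0)              # (1, T)
model = OrdinalRNN()
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
loss.backward()
```

Because the model emits a distribution over bins rather than a point value, forecasting an unseen series only requires quantising it and rolling the trained RNN forward, which is what makes zero-shot and few-shot use plausible.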
Related papers
- TCGPN: Temporal-Correlation Graph Pre-trained Network for Stock Forecasting [1.864621482724548]
We propose a novel approach called the Temporal-Correlation Graph Pre-trained Network (TCGPN) to address these limitations.
TCGPN uses a temporal-correlation fusion encoder to obtain a mixed representation, together with a pre-training method based on carefully designed temporal and correlation pre-training tasks.
Experiments are conducted on real stock market data sets CSI300 and CSI500 that exhibit minimal periodicity.
arXiv Detail & Related papers (2024-07-26T05:27:26Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
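The CTRNN entry above addresses irregularly timed observations. A common mechanism for this, used by decay-based models in the GRU-D family (the paper's CTRNN formulation may differ), is to relax the hidden state as a function of the elapsed time between observations; a minimal sketch:

```python
import torch
import torch.nn as nn

class DecayGRUCell(nn.Module):
    """GRU cell whose hidden state decays with the elapsed time between
    observations -- a generic stand-in for continuous-time behaviour."""
    def __init__(self, dim_in: int, dim_h: int):
        super().__init__()
        self.cell = nn.GRUCell(dim_in, dim_h)
        self.log_decay = nn.Parameter(torch.zeros(dim_h))  # learned rates

    def forward(self, x, h, dt):
        # Exponentially relax h toward 0 over the gap dt, then update.
        gamma = torch.exp(-torch.exp(self.log_decay) * dt.unsqueeze(-1))
        return self.cell(x, h * gamma)

# Irregularly timed observations as (time, value) pairs.
times = torch.tensor([0.0, 0.4, 1.7, 2.0, 4.5])
values = torch.randn(5, 1)
cell = DecayGRUCell(1, 16)
h = torch.zeros(1, 16)
for i in range(len(times)):
    dt = times[i] - (times[i - 1] if i > 0 else times[i])
    h = cell(values[i:i + 1], h, dt.unsqueeze(0))
```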
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
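The HyperTime entry compares activation functions for INR encodings of time series. The sketch below is a minimal sine-activation INR in the SIREN style; the hidden size, frequency w0, and fitting loop are illustrative, and the paper's hypernetwork that compresses whole datasets into latent codes is omitted.

```python
import torch
import torch.nn as nn

class SineINR(nn.Module):
    """Implicit neural representation: value = f_theta(t), queryable at
    any resolution. Sine activations follow the SIREN family of INRs."""
    def __init__(self, hidden: int = 64, w0: float = 30.0):
        super().__init__()
        self.w0 = w0
        self.l1 = nn.Linear(1, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, t):                        # t: (N, 1) timestamps in [0, 1]
        h = torch.sin(self.w0 * self.l1(t))
        h = torch.sin(self.l2(h))
        return self.out(h)

# Fit one series, then query it at 10x the original resolution.
t = torch.linspace(0, 1, 100).unsqueeze(-1)
y = torch.sin(6.28 * 3 * t) + 0.1 * torch.randn_like(t)
inr = SineINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = ((inr(t) - y) ** 2).mean()
    loss.backward()
    opt.step()
dense = inr(torch.linspace(0, 1, 1000).unsqueeze(-1))  # resolution-free query
```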
- Rapid training of quantum recurrent neural network [26.087244189340858]
We propose a Quantum Recurrent Neural Network (QRNN) to address these obstacles.
The design of the network is based on the continuous-variable quantum computing paradigm.
Our numerical simulations show that the QRNN converges to optimal weights in fewer epochs than the classical network.
arXiv Detail & Related papers (2022-07-01T12:29:33Z)
- Pre-training Enhanced Spatial-temporal Graph Neural Network for Multivariate Time Series Forecasting [13.441945545904504]
We propose a novel framework in which STGNNs are enhanced by a scalable time series pre-training model (STEP).
Specifically, we design a pre-training model to efficiently learn temporal patterns from very long-term history time series.
Our framework is capable of significantly enhancing downstream STGNNs, and our pre-training model aptly captures temporal patterns.
arXiv Detail & Related papers (2022-06-18T04:24:36Z)
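STEP's pre-training component is described as learning temporal patterns from very long histories. Masked patch reconstruction is one common realisation of such pre-training; the sketch below shows that generic pattern, with the transformer backbone and all sizes chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

# Generic masked patch-reconstruction pre-training over a long series:
# hide most patches, encode the rest, and reconstruct what was hidden.
# Sizes and the transformer backbone are illustrative assumptions.
PATCH, N_PATCH, DIM = 12, 168, 64

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True), num_layers=2)
to_token = nn.Linear(PATCH, DIM)
to_patch = nn.Linear(DIM, PATCH)

history = torch.randn(8, N_PATCH * PATCH)          # (batch, long history)
patches = history.reshape(8, N_PATCH, PATCH)
mask = torch.rand(8, N_PATCH) < 0.75               # hide 75% of patches
tokens = to_token(patches).masked_fill(mask.unsqueeze(-1), 0.0)
recon = to_patch(encoder(tokens))
loss = ((recon - patches) ** 2)[mask].mean()       # loss on hidden patches
loss.backward()
```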
- Online learning of windmill time series using Long Short-term Cognitive Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach achieved the lowest forecasting errors compared with a simple RNN, a Long Short-term Memory, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
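The windmill entry's setting is online learning: the model is updated as data arrives rather than retrained in batch. The loop below illustrates that regime with a plain GRU stand-in; it is not the LSTCN architecture, and the window size and learning rate are arbitrary.

```python
import torch
import torch.nn as nn

# Generic online-learning loop (not the LSTCN itself): each new chunk of
# the stream triggers a small gradient update, so the model tracks drift.
model = nn.GRU(1, 32, batch_first=True)
head = nn.Linear(32, 1)
opt = torch.optim.SGD(list(model.parameters()) + list(head.parameters()),
                      lr=1e-2)

stream = torch.randn(1, 10_000, 1)                 # stand-in windmill stream
for t in range(0, stream.size(1) - 64, 64):        # consume 64-step windows
    x = stream[:, t:t + 63]                        # inputs
    y = stream[:, t + 1:t + 64]                    # next-step targets
    h, _ = model(x)
    loss = ((head(h) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()                                     # single online update
```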
- Learning from Irregularly-Sampled Time Series: A Missing Data Perspective [18.493394650508044]
Irregularly-sampled time series occur in many domains including healthcare.
We model irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function.
We propose learning methods for this framework based on variational autoencoders and generative adversarial networks.
arXiv Detail & Related papers (2020-08-17T20:01:55Z)
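The missing-data-perspective entry represents a series as (index, value) pairs drawn from a continuous but unobserved function. The sketch below shows that representation with a permutation-invariant encoder producing a latent code and a decoder evaluating the implied function at arbitrary query times; the VAE and GAN objectives the paper proposes are omitted, and all module shapes are assumptions.

```python
import torch
import torch.nn as nn

# A series as a set of (t, v) pairs from an unobserved function: a latent
# code z summarises the series, and a decoder evaluates the function at
# arbitrary query times. Training objectives are omitted here.
class PairEncoder(nn.Module):
    def __init__(self, dim_z: int = 16, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim_z))

    def forward(self, pairs):                 # pairs: (N, 2) rows of (t, v)
        return self.net(pairs).mean(dim=0)    # permutation-invariant pooling

class FunctionDecoder(nn.Module):
    def __init__(self, dim_z: int = 16, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_z + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z, t_query):            # t_query: (M, 1)
        z_rep = z.expand(t_query.size(0), -1)
        return self.net(torch.cat([z_rep, t_query], dim=-1))

pairs = torch.tensor([[0.1, 0.9], [0.35, 0.2], [0.8, -0.4]])  # irregular obs
z = PairEncoder()(pairs)
values = FunctionDecoder()(z, torch.linspace(0, 1, 50).unsqueeze(-1))
```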
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
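The "Connecting the Dots" entry extracts uni-directed relations among variables through a graph learning module. The sketch below is in that spirit: two node-embedding tables produce an anti-symmetric score matrix, so any learned edge runs one way only; exact details in the paper may differ.

```python
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Learn a sparse, uni-directed adjacency among series from two node
    embedding tables; the anti-symmetric difference keeps edges one-way."""
    def __init__(self, n_nodes: int, dim: int = 16, k: int = 4):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(n_nodes, dim))
        self.e2 = nn.Parameter(torch.randn(n_nodes, dim))
        self.k = k                             # keep top-k neighbours per node

    def forward(self):
        scores = self.e1 @ self.e2.T - self.e2 @ self.e1.T
        adj = torch.relu(torch.tanh(scores))   # non-negative, one-directed
        topk = adj.topk(self.k, dim=-1)
        sparse = torch.zeros_like(adj).scatter_(-1, topk.indices, topk.values)
        return sparse

adj = GraphLearner(n_nodes=8)()                # (8, 8) learned adjacency
```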
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.