Seasonal Encoder-Decoder Architecture for Forecasting
- URL: http://arxiv.org/abs/2207.04113v1
- Date: Fri, 8 Jul 2022 20:06:45 GMT
- Title: Seasonal Encoder-Decoder Architecture for Forecasting
- Authors: Avinash Achar, Soumen Pachal
- Abstract summary: We propose a novel RNN architecture that intelligently captures (stochastic) seasonal correlations.
It is motivated by the well-known encoder-decoder (ED) architecture and the multiplicative seasonal auto-regressive model.
It can be employed on single or multiple sequence data.
- Score: 1.9188864062289432
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning (DL) in general, and recurrent neural networks (RNNs) in
particular, have seen high levels of success in sequence-based applications. This
paper pertains to RNNs for time series modelling and forecasting. We propose a
novel RNN architecture that intelligently captures (stochastic) seasonal
correlations while being capable of accurate multi-step forecasting. It is
motivated by the well-known encoder-decoder (ED) architecture and the
multiplicative seasonal auto-regressive model. It incorporates multi-step
(multi-target) learning even in the presence (or absence) of exogenous inputs,
and can be employed on single or multiple sequence data. For the multiple
sequence case, we also propose a novel greedy recursive procedure to build (one
or more) predictive models across sequences when per-sequence data is limited.
We demonstrate via extensive experiments the utility of our proposed
architecture in both single sequence and multiple sequence scenarios.
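To make the seasonal conditioning concrete, here is a minimal sketch (not the authors' released code) of an encoder-decoder RNN whose decoder additionally consumes the observation one seasonal lag back, in the spirit of the multiplicative seasonal AR motivation. The class name, layer sizes, and seasonal period are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SeasonalED(nn.Module):
    """Hypothetical sketch of an encoder-decoder RNN whose decoder is
    conditioned on seasonally lagged observations (lag S), loosely in the
    spirit of a multiplicative seasonal AR model. Not the authors' code."""
    def __init__(self, hidden: int = 32, season: int = 24):
        super().__init__()
        self.season = season
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        # Decoder sees the previous prediction and the value one season back.
        self.decoder = nn.GRUCell(input_size=2, hidden_size=hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, history: torch.Tensor, horizon: int) -> torch.Tensor:
        # history: (batch, T, 1); assumes horizon <= season so the seasonal
        # lag of every forecast step is already observed.
        _, h = self.encoder(history)            # h: (1, batch, hidden)
        h = h.squeeze(0)
        y_prev = history[:, -1, :]              # last observed value
        outs = []
        for k in range(horizon):
            y_lag = history[:, history.size(1) - self.season + k, :]  # y_{t+k-S}
            h = self.decoder(torch.cat([y_prev, y_lag], dim=-1), h)
            y_prev = self.head(h)
            outs.append(y_prev)
        return torch.stack(outs, dim=1)         # (batch, horizon, 1)
```

A call such as `SeasonalED(season=24)(torch.randn(8, 48, 1), horizon=12)` would produce a 12-step-ahead forecast; exogenous inputs could be concatenated into the decoder input in the same way.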
Related papers
- SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking [60.109453252858806]
A maximum-likelihood (MLE) objective does not match a downstream use-case of autoregressively generating high-quality sequences.
We formulate sequence generation as an imitation learning (IL) problem.
This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset.
Our resulting method, SequenceMatch, can be implemented without adversarial training or architectural changes.
arXiv Detail & Related papers (2023-06-08T17:59:58Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
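A hedged sketch of the general continuous-time idea (not the paper's exact model): evolve the hidden state with a learned ODE across the irregular gap between observations, then apply a discrete update at each observation. The names and the fixed-step Euler integrator are assumptions:

```python
import torch
import torch.nn as nn

class ODERNNCell(nn.Module):
    """Illustrative continuous-time RNN step (not the paper's exact model):
    the hidden state is evolved by a learned ODE between irregularly spaced
    observations, then updated by a GRU cell at each observation."""
    def __init__(self, input_dim: int, hidden: int, euler_steps: int = 4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                               nn.Linear(hidden, hidden))
        self.update = nn.GRUCell(input_dim, hidden)
        self.euler_steps = euler_steps

    def forward(self, x, h, dt):
        # Evolve h over the elapsed gap dt (batch, 1) with Euler integration.
        step = dt / self.euler_steps
        for _ in range(self.euler_steps):
            h = h + step * self.f(h)
        return self.update(x, h)              # jump update at the observation
```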
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - DeepSeq: Deep Sequential Circuit Learning [10.402436619244911]
Circuit representation learning is a promising research direction in the electronic design automation (EDA) field.
Existing solutions only target combinational circuits, significantly limiting their applications.
We propose DeepSeq, a novel representation learning framework for sequential netlists.
arXiv Detail & Related papers (2023-02-27T09:17:35Z) - Sequence Prediction Under Missing Data : An RNN Approach Without
Imputation [1.9188864062289432]
This paper pertains to a novel recurrent neural network (RNN) based solution for sequence prediction under missing data.
It encodes the missingness patterns in the data directly, without imputing data either before or during model building.
We focus on forecasting in the general context of multi-step prediction in the presence of possible inputs.
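One generic, imputation-free construction consistent with this idea (the paper's actual architecture may differ) is to feed the RNN a binary observation mask alongside the zero-filled values, so the missingness pattern itself becomes an input:

```python
import torch
import torch.nn as nn

class MaskAwareGRU(nn.Module):
    """Minimal sketch of one imputation-free strategy: feed the RNN the raw
    values (zero-filled where missing) together with a binary observation
    mask, letting the network learn from the missingness pattern itself.
    A generic construction, not the paper's exact architecture."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(2 * n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x, mask: (batch, T, n_features); mask is 1 where x was observed.
        x = torch.where(mask.bool(), x, torch.zeros_like(x))
        out, _ = self.rnn(torch.cat([x, mask], dim=-1))
        return self.head(out[:, -1])          # one-step-ahead forecast
```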
arXiv Detail & Related papers (2022-08-18T16:09:12Z) - HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
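For intuition, an INR for a single series can be as simple as a small sine-activated MLP that maps a normalised timestamp to the series value; the SIREN-style activation and the sizes below are assumptions, not the paper's reported configuration:

```python
import torch
import torch.nn as nn

class SirenINR(nn.Module):
    """Illustrative implicit neural representation of a 1-D time series:
    an MLP with sinusoidal activations mapping a normalised timestamp t to
    the series value y(t). Activation choice and sizes are assumptions."""
    def __init__(self, hidden: int = 64, w0: float = 30.0):
        super().__init__()
        self.w0 = w0
        self.l1 = nn.Linear(1, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (N, 1) timestamps scaled to [-1, 1]
        h = torch.sin(self.w0 * self.l1(t))
        h = torch.sin(self.l2(h))
        return self.out(h)

# Fit by regressing the INR onto (timestamp, value) pairs with MSE;
# resolution independence comes from querying arbitrary t at inference.
```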
arXiv Detail & Related papers (2022-08-11T14:05:51Z) - Time Series is a Special Sequence: Forecasting with Sample Convolution
and Interaction [9.449017120452675]
Time series is a special type of sequence data: a set of observations collected at evenly spaced intervals of time and ordered chronologically.
Existing deep learning techniques use generic sequence models for time series analysis, which ignore some of its unique properties.
We propose a novel neural network architecture and apply it for the time series forecasting problem, wherein we conduct sample convolution and interaction at multiple resolutions for temporal modeling.
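A rough sketch of the downsample-convolve-interact idea (kernel sizes and the interaction rule are illustrative assumptions, not the paper's exact block): split the sequence into even and odd sub-sequences and update each branch from convolutional features of the other:

```python
import torch
import torch.nn as nn

class SCIBlockSketch(nn.Module):
    """Rough sketch of sample convolution and interaction: split a series
    into even/odd sub-sequences, transform each with a 1-D convolution, and
    let the two branches exchange information. Illustrative assumptions."""
    def __init__(self, channels: int):
        super().__init__()
        conv = lambda: nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.f_even, self.f_odd = conv(), conv()

    def forward(self, x: torch.Tensor):
        # x: (batch, channels, T) with even T
        even, odd = x[..., ::2], x[..., 1::2]
        # Each half is updated using features computed from the other half.
        return (even + torch.tanh(self.f_odd(odd)),
                odd + torch.tanh(self.f_even(even)))
```

Stacking such blocks recursively gives temporal features at multiple resolutions, which a final layer can combine into the forecast.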
arXiv Detail & Related papers (2021-06-17T08:15:04Z) - Boosted Embeddings for Time Series Forecasting [0.6042845803090501]
We propose a novel time series forecasting model, DeepGB.
We formulate and implement a variant of gradient boosting in which the weak learners are DNNs whose weights are found incrementally, in a greedy manner, over iterations.
We demonstrate that our model outperforms existing comparable state-of-the-art models on real-world sensor data and public datasets.
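The boosting loop can be sketched generically (this is not the authors' implementation): under squared loss the residual is the negative gradient, so each round fits a small DNN to the current residuals and the ensemble accumulates the shrunk predictions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def boosted_dnn_fit(X, y, rounds=5, lr=0.3):
    """Generic gradient-boosting loop with small DNNs as weak learners
    (squared loss, so each round fits the current residuals). A sketch of
    the idea behind DeepGB, not the authors' implementation."""
    models, residual = [], np.asarray(y, dtype=float).copy()
    for _ in range(rounds):
        m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500)
        m.fit(X, residual)
        residual -= lr * m.predict(X)   # step toward the negative gradient
        models.append(m)
    return models

def boosted_dnn_predict(models, X, lr=0.3):
    # Ensemble prediction: shrunk sum of the weak learners' outputs.
    return lr * sum(m.predict(X) for m in models)
```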
arXiv Detail & Related papers (2021-04-10T14:38:11Z) - Connecting the Dots: Multivariate Time Series Forecasting with Graph
Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
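As an illustration of such a graph learning module (the scoring rule here is an assumption, not necessarily the paper's): learn one embedding per variable and derive a sparse, uni-directional adjacency from antisymmetric pairwise scores:

```python
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Sketch of a graph-learning module: learn one embedding per variable
    and derive a sparse uni-directional adjacency from pairwise scores.
    The exact scoring rule is an assumption for illustration."""
    def __init__(self, n_nodes: int, dim: int = 16, k: int = 4):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(n_nodes, dim))
        self.e2 = nn.Parameter(torch.randn(n_nodes, dim))
        self.k = k

    def forward(self) -> torch.Tensor:
        # Antisymmetric scores keep at most one direction per node pair.
        scores = torch.relu(torch.tanh(self.e1 @ self.e2.T - self.e2 @ self.e1.T))
        topk = scores.topk(self.k, dim=-1)       # keep k strongest edges/node
        adj = torch.zeros_like(scores).scatter_(-1, topk.indices, topk.values)
        return adj
```

The learned adjacency is then consumed by graph convolutions over the variables at each time step.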
arXiv Detail & Related papers (2020-05-24T04:02:18Z) - Zero-shot and few-shot time series forecasting with ordinal regression
recurrent neural networks [17.844338213026976]
Recurrent neural networks (RNNs) are state-of-the-art in several sequential learning tasks, but they often require considerable amounts of data to generalise well.
We propose a novel RNN-based model that directly addresses this problem by learning a shared feature embedding over the space of many quantised time series.
We show how this enables our RNN framework to accurately and reliably forecast unseen time series, even when there is little to no training data available.
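A minimal sketch of the quantised setup (bin count, embedding size, and head are assumptions): bin the real values, share one bin embedding across all series, and predict a distribution over the next bin:

```python
import torch
import torch.nn as nn

class QuantisedRNN(nn.Module):
    """Sketch of forecasting over quantised series: real values are binned,
    bins share a learned embedding across all series, and the RNN predicts
    a distribution over the next bin. Sizes are assumptions."""
    def __init__(self, n_bins: int = 64, emb: int = 16, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_bins, emb)   # shared across every series
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_bins)

    def forward(self, bins: torch.Tensor) -> torch.Tensor:
        # bins: (batch, T) integer bin indices from quantising each series
        out, _ = self.rnn(self.embed(bins))
        return self.head(out[:, -1])             # logits over the next bin
```

Because the embedding and RNN are shared over the quantised space rather than any one series' scale, an unseen series can be forecast zero-shot after quantisation.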
arXiv Detail & Related papers (2020-03-26T21:33:10Z) - Ambiguity in Sequential Data: Predicting Uncertain Futures with
Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
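The core MHP mechanism can be sketched as a relaxed winner-takes-all loss over M hypotheses (a generic formulation rather than the paper's exact sequential variant):

```python
import torch

def mhp_loss(preds: torch.Tensor, target: torch.Tensor, eps: float = 0.05):
    """Relaxed winner-takes-all loss behind Multiple Hypothesis Prediction:
    the closest of M >= 2 hypotheses receives most of the gradient, the rest
    a small share eps. A generic formulation, not the paper's exact variant."""
    # preds: (batch, M, D), target: (batch, D)
    err = ((preds - target.unsqueeze(1)) ** 2).sum(-1)    # (batch, M)
    best = err.argmin(dim=1)                              # winning hypothesis
    w = torch.full_like(err, eps / (err.size(1) - 1))
    w.scatter_(1, best.unsqueeze(1), 1.0 - eps)
    return (w * err).sum(dim=1).mean()
```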
arXiv Detail & Related papers (2020-03-10T09:15:42Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance on a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)