ARMA Cell: A Modular and Effective Approach for Neural Autoregressive
Modeling
- URL: http://arxiv.org/abs/2208.14919v2
- Date: Thu, 11 Jan 2024 18:59:23 GMT
- Title: ARMA Cell: A Modular and Effective Approach for Neural Autoregressive
Modeling
- Authors: Philipp Schiele and Christoph Berninger and David Rügamer
- Abstract summary: We introduce the ARMA cell, a simpler, modular, and effective approach for time series modeling in neural networks.
Our experiments show that the proposed methodology is competitive with popular alternatives in terms of performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The autoregressive moving average (ARMA) model is a classical, and arguably
one of the most studied approaches to model time series data. It has compelling
theoretical properties and is widely used among practitioners. More recent deep
learning approaches popularize recurrent neural networks (RNNs) and, in
particular, Long Short-Term Memory (LSTM) cells that have become one of the
best performing and most common building blocks in neural time series modeling.
While advantageous for time series data or sequences with long-term effects,
complex RNN cells are not always a must and can sometimes even be inferior to
simpler recurrent approaches. In this work, we introduce the ARMA cell, a
simpler, modular, and effective approach for time series modeling in neural
networks. This cell can be used in any neural network architecture where
recurrent structures are present and naturally handles multivariate time series
using vector autoregression. We also introduce the ConvARMA cell as a natural
successor for spatially-correlated time series. Our experiments show that the
proposed methodology is competitive with popular alternatives in terms of
performance while being more robust and compelling due to its simplicity.
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- On the balance between the training time and interpretability of neural ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODEs cannot be reduced to simpler models for time-series modelling applications.
The complexity of neural ODEs is comparable to or exceeds that of conventional time-series modelling tools.
We propose a new view on time-series modelling using combined neural networks and an ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z)
- CARRNN: A Continuous Autoregressive Recurrent Neural Network for Deep Representation Learning from Sporadic Temporal Data [1.8352113484137622]
In this paper, a novel deep learning-based model is developed for modeling multiple temporal features in sporadic data.
The proposed model, called CARRNN, uses a generalized discrete-time autoregressive model that is trainable end-to-end using neural networks modulated by time lags.
It is applied to multivariate time-series regression tasks using data provided for Alzheimer's disease progression modeling and intensive care unit (ICU) mortality rate prediction.
arXiv Detail & Related papers (2021-04-08T12:43:44Z)
- Automatic Remaining Useful Life Estimation Framework with Embedded Convolutional LSTM as the Backbone [5.927250637620123]
We propose a new LSTM variant called embedded convolutional LSTM (ECLSTM).
In ECLSTM, a group of different 1D convolutions is embedded into the LSTM structure. Through this, the temporal information is preserved between and within windows.
We show the superiority of our proposed ECLSTM approach over the state-of-the-art approaches on several widely used benchmark data sets for RUL estimation.
arXiv Detail & Related papers (2020-08-10T08:34:20Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Learning Various Length Dependence by Dual Recurrent Neural Networks [0.0]
We propose a new model named Dual Recurrent Neural Networks (DuRNN).
DuRNN consists of two parts to learn the short-term dependence and progressively learn the long-term dependence.
Our contributions are: 1) a new recurrent model developed based on the divide-and-conquer strategy to learn long and short-term dependence separately, and 2) a selection mechanism to enhance the separating and learning of different temporal scales of dependence.
arXiv Detail & Related papers (2020-05-28T09:30:01Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)