Long Short-term Cognitive Networks
- URL: http://arxiv.org/abs/2106.16233v1
- Date: Wed, 30 Jun 2021 17:42:09 GMT
- Title: Long Short-term Cognitive Networks
- Authors: Gonzalo Nápoles, Isel Grau, Agnieszka Jastrzebska, Yamisleydi Salgueiro
- Abstract summary: We present a recurrent neural system named Long Short-term Cognitive Networks (LSTCNs) as a generalisation of the Short-term Cognitive Network (STCN) model.
Our neural system reports small forecasting errors while being up to thousands of times faster than state-of-the-art recurrent models.
- Score: 2.2748974006378933
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present a recurrent neural system named Long Short-term
Cognitive Networks (LSTCNs) as a generalisation of the Short-term Cognitive
Network (STCN) model. Such a generalisation is motivated by the difficulty of
forecasting very long time series in an efficient, greener fashion. The LSTCN
model can be defined as a collection of STCN blocks, each processing a specific
time patch of the (multivariate) time series being modelled. In this neural
ensemble, each block passes information to the subsequent one in the form of a
weight matrix referred to as the prior knowledge matrix. As a second
contribution, we propose a deterministic learning algorithm to compute the
learnable weights while preserving the prior knowledge resulting from previous
learning processes. As a third contribution, we introduce a feature influence
score as a proxy to explain the forecasting process in multivariate time
series. The simulations using three case studies show that our neural system
reports small forecasting errors while being up to thousands of times faster
than state-of-the-art recurrent models.
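The architecture described above lends itself to a compact sketch: a chain of blocks, each with a frozen prior-knowledge layer and a learnable output layer fitted in closed form, with the learned weights seeding the next block. Below is a minimal NumPy sketch of that idea, not the authors' reference implementation: the sigmoid activation, the ridge penalty `lam`, the logit inversion of the targets, and the `feature_influence` proxy are illustrative assumptions, and the data are assumed scaled to (0, 1).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(y, eps=1e-6):
    y = np.clip(y, eps, 1.0 - eps)  # keep the inverse sigmoid finite
    return np.log(y / (1.0 - y))

class STCNBlock:
    """One block: a frozen prior-knowledge layer followed by a learnable
    output layer fitted deterministically (regularized least squares)."""

    def __init__(self, W_prior, b_prior):
        self.W1, self.b1 = W_prior, b_prior  # frozen prior knowledge
        self.W2, self.b2 = None, None        # learnable weights

    def fit(self, X, Y, lam=1e-2):
        H = sigmoid(X @ self.W1 + self.b1)          # frozen hidden pass
        Phi = np.hstack([H, np.ones((len(H), 1))])  # bias column
        T = logit(Y)                                # invert the output activation
        # Deterministic learning: one ridge solve instead of gradient descent
        W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ T)
        self.W2, self.b2 = W[:-1], W[-1]
        return self

    def predict(self, X):
        return sigmoid(sigmoid(X @ self.W1 + self.b1) @ self.W2 + self.b2)

def fit_lstcn(patches, n_features, seed=0):
    """Chain blocks over consecutive time patches; each block's learned
    weights become the next block's prior knowledge matrix."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((n_features, n_features))
    b = np.zeros(n_features)
    blocks = []
    for X, Y in patches:  # each patch: (inputs, one-step-ahead targets)
        block = STCNBlock(W, b).fit(X, Y)
        W, b = block.W2, block.b2  # pass knowledge forward
        blocks.append(block)
    return blocks

def feature_influence(block):
    """Crude proxy: total absolute weight leaving each input feature."""
    return np.abs(block.W1).sum(axis=1)
```

Because fitting is a single linear solve per block, training cost is dominated by one matrix factorization per time patch, which is where a large speed-up over gradient-trained recurrent models would come from.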
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series [0.3277163122167434]
This paper proposes a novel modular neural network model for time series prediction that is interpretable by construction.
A recurrent neural network learns the temporal dependencies in the data while an attention-based feature selection component selects the most relevant features.
A modular deep network is trained from the selected features independently to show the users how features influence outcomes, making the model interpretable.
arXiv Detail & Related papers (2023-11-28T14:51:06Z)
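Since the FocusLearn entry above hinges on attention-based feature selection feeding independent per-feature modules, here is a toy sketch of that additive, interpretable structure; the hand-written `relevance` scores and `submodels` stand in for components the actual model would learn (an RNN-driven attention mechanism and deep modules, respectively).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def modular_forecast(x, relevance, submodels):
    """Attention-style scores weight the features, and one independent
    sub-model per feature contributes additively, so each feature's effect
    on the prediction can be read off directly."""
    alpha = softmax(relevance)               # attention over input features
    parts = [a * f(xi) for a, f, xi in zip(alpha, submodels, x)]
    return sum(parts), parts                 # forecast + per-feature effects

# Toy usage: three features with hand-written sub-models
x = np.array([0.5, -1.2, 2.0])
relevance = np.array([2.0, 0.1, 1.0])        # would be learned in practice
submodels = [lambda v: 3 * v, lambda v: v**2, lambda v: -v]
yhat, parts = modular_forecast(x, relevance, submodels)
```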
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Learning Signal Temporal Logic through Neural Network for Interpretable Classification [13.829082181692872]
We propose an explainable neural-symbolic framework for the classification of time-series behaviors.
We demonstrate the computational efficiency, compactness, and interpretability of the proposed method through driving scenarios and naval surveillance case studies.
arXiv Detail & Related papers (2022-10-04T21:11:54Z)
- Online learning of windmill time series using Long Short-term Cognitive Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors when compared with a simple RNN, a Long Short-term Memory network, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
- Evaluation of deep learning models for multi-step ahead time series prediction [1.3764085113103222]
We present an evaluation study that compares the performance of deep learning models for multi-step ahead time series prediction.
Our deep learning methods comprise simple recurrent neural networks, long short-term memory (LSTM) networks, bidirectional LSTMs, encoder-decoder LSTM networks, and convolutional neural networks.
arXiv Detail & Related papers (2021-03-26T04:07:11Z)
- Meta-Learning for Koopman Spectral Analysis with Short Time-series [49.41640137945938]
Existing methods require long time-series for training neural networks.
We propose a meta-learning method for estimating embedding functions from unseen short time-series.
We experimentally demonstrate that the proposed method achieves better performance in terms of eigenvalue estimation and future prediction.
arXiv Detail & Related papers (2021-02-09T07:19:19Z)
- Ensemble perspective for understanding temporal credit assignment [1.9843222704723809]
We show that each individual connection in recurrent neural networks is modeled by a spike-and-slab distribution, rather than a precise weight value.
Our model reveals important connections that determine the overall performance of the network.
It is thus promising to study the temporal credit assignment in recurrent neural networks from the ensemble perspective.
arXiv Detail & Related papers (2021-02-07T08:14:05Z)
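The ensemble perspective above treats each connection as a spike-and-slab random variable rather than a point weight; the toy sampler below illustrates that distribution (the parameters `pi`, `mu`, and `sigma` are arbitrary here, whereas the paper infers them from training).

```python
import numpy as np

def sample_spike_slab(pi, mu, sigma, size, seed=0):
    """With probability pi a connection is 'on' and drawn from the Gaussian
    slab; otherwise it sits exactly at zero (the spike)."""
    rng = np.random.default_rng(seed)
    on = rng.random(size) < pi           # spike vs. slab indicator
    slab = rng.normal(mu, sigma, size)   # weight value when the connection is on
    return np.where(on, slab, 0.0)

# Each draw is one plausible recurrent weight matrix; connections whose
# inferred pi stays near 1 across the ensemble are the important ones.
W = sample_spike_slab(pi=0.3, mu=0.0, sigma=1.0, size=(64, 64))
```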
- HiPPO: Recurrent Memory with Optimal Polynomial Projections [93.3537706398653]
We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases.
Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem.
This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale.
arXiv Detail & Related papers (2020-08-17T23:39:33Z)
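For the HiPPO entry above, the scaled-Legendre (LegS) variant admits a short recurrence. The sketch below follows the closed-form transition matrices and the 1/k-scaled update reported in the HiPPO paper, with a plain Euler discretization and a state size of 32 chosen arbitrarily; treat it as an illustration rather than the reference implementation.

```python
import numpy as np

def legs_matrices(N):
    """HiPPO-LegS transition matrices (closed forms from the HiPPO paper)."""
    A = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = np.sqrt((2 * n + 1) * (2 * k + 1))
            elif n == k:
                A[n, k] = n + 1
    B = np.sqrt(2 * np.arange(N) + 1.0)
    return A, B

def hippo_legs_online(stream, N=32):
    """Compress a stream into N coefficients online; the 1/k scaling lets
    the memory stretch over all history instead of a fixed timescale."""
    A, B = legs_matrices(N)
    c = np.zeros(N)
    for k, fk in enumerate(stream, start=1):
        c = c - (A @ c) / k + (B / k) * fk  # c_{k+1} = (I - A/k) c_k + (1/k) B f_k
    return c  # coefficients of a polynomial approximation of the full history
```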
- Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
arXiv Detail & Related papers (2020-06-05T07:06:19Z)
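The last entry learns factor-graph components and then runs sum-product inference from them; as a reference point, here is the classic sum-product (forward-backward) scheme on a chain-structured factor graph with known factors, where `transitions` and `emissions` are placeholders for the components the paper's neural networks would learn.

```python
import numpy as np

def chain_sum_product(priors, transitions, emissions, obs):
    """Sum-product on a chain factor graph: forward and backward messages
    combine into per-step posterior marginals over the hidden state."""
    T, S = len(obs), len(priors)
    fwd, bwd = np.zeros((T, S)), np.ones((T, S))
    fwd[0] = priors * emissions[:, obs[0]]
    fwd[0] /= fwd[0].sum()
    for t in range(1, T):                  # forward messages
        fwd[t] = (fwd[t - 1] @ transitions) * emissions[:, obs[t]]
        fwd[t] /= fwd[t].sum()
    for t in range(T - 2, -1, -1):         # backward messages
        bwd[t] = transitions @ (emissions[:, obs[t + 1]] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)

# Toy usage: two hidden states, two observation symbols
marg = chain_sum_product(
    priors=np.array([0.6, 0.4]),
    transitions=np.array([[0.7, 0.3], [0.2, 0.8]]),
    emissions=np.array([[0.9, 0.1], [0.3, 0.7]]),
    obs=[0, 1, 1],
)
```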