On Error Correction Neural Networks for Economic Forecasting
- URL: http://arxiv.org/abs/2004.05277v2
- Date: Mon, 1 Jun 2020 17:21:32 GMT
- Title: On Error Correction Neural Networks for Economic Forecasting
- Authors: Mhlasakululeka Mvubu, Emmanuel Kabuga, Christian Plitz, Bubacarr Bah,
Ronnie Becker, Hans Georg Zimmermann
- Abstract summary: A class of RNNs called Error Correction Neural Networks (ECNNs) was designed to compensate for missing input variables.
It does this by feeding the error made in the previous step back into the current step.
The ECNN is implemented in Python, with the appropriate gradients computed explicitly, and is tested on stock market prediction.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recurrent neural networks (RNNs) are well suited to learning non-linear
dependencies in dynamical systems from observed time series data. In practice,
not all of the external variables driving such systems are known a priori,
especially in economic forecasting. A class of RNNs called Error Correction
Neural Networks (ECNNs) was designed to compensate for missing input variables.
It does this by feeding the error made in the previous step back into the
current step. The ECNN is implemented in Python, with the appropriate gradients
computed explicitly, and is tested on stock market prediction. As expected, it
outperformed the simple RNN, the LSTM, and hybrid models that involve a
de-noising pre-processing step. The intuition for the latter is that de-noising
may lead to loss of information.
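For concreteness, here is a minimal NumPy sketch of this error-correction recurrence, assuming the standard ECNN state-space form s_t = tanh(A s_{t-1} + B u_t + D e_{t-1}) with output y_t = C s_t, where e_{t-1} is the previous prediction error. The dimensions, random weights, and toy data are illustrative assumptions; gradient computation and training are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: hidden state, external inputs, targets.
n_s, n_u, n_y = 8, 3, 1

# A, B, C, D follow the usual ECNN state-space notation; the random
# values below are placeholders, not trained parameters.
A = rng.normal(scale=0.1, size=(n_s, n_s))   # state transition
B = rng.normal(scale=0.1, size=(n_s, n_u))   # coupling of external inputs
C = rng.normal(scale=0.1, size=(n_y, n_s))   # output readout
D = rng.normal(scale=0.1, size=(n_s, n_y))   # error feedback

def ecnn_forward(u_seq, y_obs):
    """Unroll the ECNN: the prediction error from step t-1 is fed back
    into the state transition at step t."""
    s = np.zeros(n_s)
    err = np.zeros(n_y)          # no error signal before the first step
    preds = []
    for u_t, y_t in zip(u_seq, y_obs):
        s = np.tanh(A @ s + B @ u_t + D @ err)
        y_hat = C @ s            # model output at step t
        preds.append(y_hat)
        err = y_hat - y_t        # error carried into step t+1
    return np.array(preds)

# Toy usage: random external inputs, noisy sine as the observed target.
u_seq = rng.normal(size=(20, n_u))
y_obs = np.sin(np.linspace(0, 3, 20))[:, None] + 0.05 * rng.normal(size=(20, 1))
print(ecnn_forward(u_seq, y_obs).shape)      # (20, 1)
```

Because the feedback term D @ err vanishes when the model is exact, it intervenes only when unmodelled external influences have perturbed the system, which is the sense in which the ECNN compensates for missing inputs.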
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
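A toy probe of this behaviour, assuming a small random tanh network as a stand-in for a trained model (the paper studies trained networks, so the limiting values here are illustrative only): pushing an input ever further along a fixed ray saturates the nonlinearity, and the logits settle toward a constant vector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny tanh MLP with fixed random weights -- an illustrative stand-in
# for a trained network.
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def logits(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Scale an input further and further from the data region along a fixed
# ray; the saturating hidden layer drives the logits to a constant.
x = rng.normal(size=4)
for scale in [1, 10, 100, 1000]:
    print(scale, np.round(logits(scale * x), 3))
```

A risk-sensitive decision rule can then treat closeness of the output to this constant as a cue to fall back on a cautious default.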
- Knowing When to Stop: Delay-Adaptive Spiking Neural Network Classifiers with Reliability Guarantees [36.14499894307206]
Spiking neural networks (SNNs) process time-series data via internal event-driven neural dynamics.
We introduce a novel delay-adaptive SNN-based inference methodology that provides guaranteed reliability for the decisions produced at input-dependent stopping times.
arXiv Detail & Related papers (2023-05-18T22:11:04Z)
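A sketch of the input-dependent stopping rule, assuming per-step class evidence is available and using a plain confidence cutoff; the paper instead calibrates the rule so the decisions carry formal reliability guarantees, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_stop(evidence_stream, threshold=0.9, max_steps=50):
    """Integrate event-driven evidence over time and stop as soon as the
    running posterior is confident enough; returns (class, stop_time)."""
    acc = np.zeros_like(evidence_stream[0])
    for t, ev in enumerate(evidence_stream[:max_steps], start=1):
        acc += ev                        # accumulate per-step evidence
        p = softmax(acc)
        if p.max() >= threshold:         # input-dependent stopping time
            return int(p.argmax()), t
    return int(softmax(acc).argmax()), max_steps

# Toy stream of per-step evidence that slightly favours class 2.
stream = rng.normal(size=(50, 4)) + np.array([0.0, 0.0, 0.4, 0.0])
print(adaptive_stop(stream))
```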
- Certified machine learning: A posteriori error estimation for physics-informed neural networks [0.0]
PINNs are known to be robust for smaller training sets, to generalize better, and to be faster to train.
We show that using PINNs in comparison with purely data-driven neural networks is not only favorable for training performance but allows us to extract significant information on the quality of the approximated solution.
arXiv Detail & Related papers (2022-03-31T14:23:04Z)
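The core idea, using the residual of the governing equation as an a posteriori error indicator for a trained network, can be shown on a toy ODE; u_approx below is a hypothetical stand-in for a trained PINN, and the paper's rigorous error bounds are not reproduced.

```python
import numpy as np

# Toy problem: u'(t) = -u(t), u(0) = 1, with exact solution exp(-t).
# u_approx is a deliberately imperfect surrogate standing in for a
# trained PINN (illustrative assumption).
def u_approx(t):
    return np.exp(-t) + 0.01 * np.sin(5 * t)

def residual(t, h=1e-5):
    """A posteriori indicator: how strongly the surrogate violates the
    ODE. Central differences replace autograd in this sketch."""
    du = (u_approx(t + h) - u_approx(t - h)) / (2 * h)
    return du + u_approx(t)

ts = np.linspace(0, 2, 200)
res = np.abs(residual(ts))
err = np.abs(u_approx(ts) - np.exp(-ts))
# The residual flags a bad approximation without knowing the true error.
print(f"max residual = {res.max():.4f}, max true error = {err.max():.4f}")
```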
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
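The mechanics of differentiating at the equilibrium rather than unrolling the forward pass can be sketched on a smooth toy network (an illustrative substitution; the paper works with spiking dynamics): the backward pass reduces to a single linear solve via the implicit function theorem.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
W = 0.1 * rng.normal(size=(n, n))   # feedback weights, kept contractive
x = rng.normal(size=n)

def f(h):
    return np.tanh(W @ h + x)       # feedback dynamics h -> f(h)

# 1) Run the dynamics to their equilibrium h* = f(h*).
h = np.zeros(n)
for _ in range(200):
    h = f(h)

# 2) Implicit differentiation at h*: for a loss L = 0.5 * ||h*||^2,
#    solve (I - J)^T g = dL/dh* instead of backpropagating through
#    all 200 forward steps.
pre = W @ h + x
d = 1 - np.tanh(pre) ** 2                   # tanh'(pre)
J = d[:, None] * W                          # Jacobian df/dh at h*
g = np.linalg.solve((np.eye(n) - J).T, h)   # adjoint vector

# 3) Gradient w.r.t. W: row i of W only feeds unit i, so
#    dL/dW[i, j] = g[i] * tanh'(pre[i]) * h*[j].
dL_dW = (d * g)[:, None] * h[None, :]
print(dL_dW.shape)                          # (5, 5), no unrolling needed
```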
- Handling Missing Observations with an RNN-based Prediction-Update Cycle [10.478312054103975]
In tasks such as tracking, time-series data inevitably carry missing observations.
This paper introduces an RNN-based approach that provides a full temporal filtering cycle for motion state estimation.
arXiv Detail & Related papers (2021-03-22T11:55:10Z)
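A minimal sketch of such a cycle, with a constant-velocity alpha-beta filter standing in for the paper's learned predict and update networks (an assumption for illustration): the predict step runs every frame, while the update step runs only when an observation is present.

```python
import numpy as np

# Constant-velocity predict step and fixed blending gains -- hypothetical
# stand-ins for the learned RNN predict/update modules.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # advance [position, velocity] one frame
alpha, beta = 0.85, 0.3           # update gains for position and velocity

def filter_cycle(observations):
    """Predict every frame; update only when an observation is available
    (None marks a missing frame)."""
    state = np.zeros(2)
    track = []
    for z in observations:
        state = F @ state                 # predict (always runs)
        if z is not None:                 # update (only when observed)
            r = z - state[0]              # innovation
            state[0] += alpha * r
            state[1] += beta * r
        track.append(state[0])
    return track

obs = [0.0, 1.1, None, None, 4.2, 5.0, None, 7.1]
print(np.round(filter_cycle(obs), 2))     # gaps are bridged by prediction
```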
- Auditory Attention Decoding from EEG using Convolutional Recurrent Neural Network [20.37214453938965]
The auditory attention decoding (AAD) approach was proposed to determine the identity of the attended talker in a multi-talker scenario.
Recent models based on deep neural networks (DNN) have been proposed to solve this problem.
In this paper, we propose novel convolutional recurrent neural network (CRNN) based regression and classification models.
arXiv Detail & Related papers (2021-03-03T05:09:40Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
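A minimal sketch of the weight-preservation idea, assuming the per-weight importance scores are simply given; the actual TWP module derives them from the task loss and the graph topology, which this sketch abstracts away.

```python
import numpy as np

def preserve_penalty(w, w_old, importance, lam=1.0):
    """Quadratic penalty that anchors weights which were important for
    previous tasks; added to the loss of the new task."""
    return lam * np.sum(importance * (w - w_old) ** 2)

w_old = np.array([0.5, -1.2, 0.3])        # weights after the old tasks
imp = np.array([2.0, 0.1, 1.0])           # given importance scores
w_new = np.array([0.5, -0.2, 0.3])        # only the unimportant weight moved
print(preserve_penalty(w_new, w_old, imp))  # small penalty: 0.1
```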
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails to address its distributed task if the topological randomness is not considered accordingly.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
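One standard way to account for such fluctuations, sketched below under the assumption that links fail independently with a fixed probability, is to train over random graph realisations so the learned filters stay useful when links drop; only the forward pass is shown.

```python
import numpy as np

rng = np.random.default_rng(5)

def graph_conv(A, X, W):
    """One graph-convolution layer: aggregate neighbours, then mix features."""
    return np.tanh(A @ X @ W)

def sample_links(A, keep_prob=0.8):
    """Random graph realisation: each link survives independently with
    probability keep_prob, mimicking run-time link fluctuations."""
    mask = rng.random(A.shape) < keep_prob
    mask = np.triu(mask, 1)
    return A * (mask + mask.T)            # keep the graph undirected

# Toy graph: a 4-node ring with 2 features per node.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 2))
W = rng.normal(size=(2, 2))

# Training over random realisations (only the forward pass is sketched)
# exposes the filter to many plausible topologies.
for step in range(3):
    H = graph_conv(sample_links(A), X, W)
    print(f"step {step}: mean activation {H.mean():+.3f}")
```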