Composite FORCE learning of chaotic echo state networks for time-series
prediction
- URL: http://arxiv.org/abs/2207.02420v1
- Date: Wed, 6 Jul 2022 03:44:09 GMT
- Title: Composite FORCE learning of chaotic echo state networks for time-series
prediction
- Authors: Yansong Li, Kai Hu, Kohei Nakajima, and Yongping Pan
- Abstract summary: This paper proposes a composite FORCE learning method to train ESNs whose initial activity is spontaneously chaotic. Numerical results show that it significantly improves learning and prediction performance compared with existing methods.
- Score: 7.650966670809372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Echo state network (ESN), a kind of recurrent neural network, consists of a
fixed reservoir in which neurons are connected randomly and recursively, and
obtains the desired output by training only the output connection weights.
First-order reduced and controlled error (FORCE) learning is an online
supervised training approach that can change the chaotic activity of ESNs into
specified activity patterns. This paper proposes a composite FORCE learning
method based on recursive least squares to train ESNs whose initial activity is
spontaneously chaotic, where a composite learning technique featuring dynamic
regressor extension and memory data exploitation is applied to enhance
parameter convergence. The proposed method is applied to a benchmark problem
of predicting the chaotic time series generated by the Mackey-Glass system, and
numerical results show that it significantly improves learning and
prediction performance compared with existing methods.
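At its core, FORCE learning runs the reservoir with feedback from its own readout while a recursive-least-squares (RLS) update keeps the readout error small at every step. The sketch below is a minimal, illustrative implementation of plain (non-composite) FORCE on a small ESN; the network size, gain, sparsity, and the sine target standing in for the Mackey-Glass series are all assumptions for demonstration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (not taken from the paper)
N = 200            # reservoir size
dt = 0.1           # Euler integration step
g = 1.5            # gain > 1 yields spontaneously chaotic activity
p = 0.1            # recurrent connection sparsity

# Sparse random recurrent weights, scaled by 1/sqrt(p*N)
J = g * rng.normal(0.0, 1.0 / np.sqrt(p * N), (N, N)) * (rng.random((N, N)) < p)
w_fb = rng.uniform(-1.0, 1.0, N)   # feedback from readout into the reservoir
w_out = np.zeros(N)                # readout weights, the only trained parameters

# A simple periodic target stands in for the Mackey-Glass series
T = 2000
t = np.arange(T) * dt
f = np.sin(2.0 * np.pi * t / 10.0)

# RLS state: inverse correlation matrix estimate
alpha = 1.0
P = np.eye(N) / alpha

x = 0.5 * rng.normal(size=N)       # reservoir state
r = np.tanh(x)                     # firing rates
z = 0.0                            # readout output

errors = []
for k in range(T):
    # Leaky reservoir dynamics with readout feedback
    x += dt * (-x + J @ r + w_fb * z)
    r = np.tanh(x)
    z = w_out @ r

    # FORCE step: RLS update of the readout weights
    e = z - f[k]                   # error before the weight update
    Pr = P @ r
    c = 1.0 / (1.0 + r @ Pr)
    P -= c * np.outer(Pr, Pr)      # Sherman-Morrison rank-1 update
    w_out -= c * e * Pr            # equivalent to w_out -= e * (P_new @ r)
    errors.append(abs(e))

print(np.mean(errors[-100:]))      # training error after convergence
```

The paper's contribution replaces this plain RLS step with a composite update that extends the regressor dynamically and reuses memory data to speed up parameter convergence; the sketch above shows only the baseline it builds on.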
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Recurrent Stochastic Configuration Networks for Temporal Data Analytics [3.8719670789415925]
This paper develops a recurrent version of stochastic configuration networks (RSCNs) for temporal data analytics.
We build an initial RSCN model in the light of a supervisory mechanism, followed by an online update of the output weights.
Numerical results clearly indicate that the proposed RSCN performs favourably over all of the datasets.
arXiv Detail & Related papers (2024-06-21T03:21:22Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z) - Learning in Feedback-driven Recurrent Spiking Neural Networks using
full-FORCE Training [4.124948554183487]
We propose a supervised training procedure for RSNNs, where a second network is introduced only during training.
The proposed training procedure consists of generating targets for both recurrent and readout layers.
We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure to model 8 dynamical systems.
arXiv Detail & Related papers (2022-05-26T19:01:19Z) - Orthogonal Stochastic Configuration Networks with Adaptive Construction
Parameter for Data Analytics [6.940097162264939]
Randomness makes SCNs more likely to generate approximately linearly correlated nodes that are redundant and of low quality.
In light of a fundamental principle in machine learning, that a model with fewer parameters generalizes better, this paper proposes an orthogonal SCN, termed OSCN, to filter out low-quality hidden nodes for network structure reduction.
arXiv Detail & Related papers (2022-05-26T07:07:26Z) - Scalable computation of prediction intervals for neural networks via
matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - On the adaptation of recurrent neural networks for system identification [2.5234156040689237]
This paper presents a transfer learning approach which enables fast and efficient adaptation of Recurrent Neural Network (RNN) models of dynamical systems.
The system dynamics are then assumed to change, leading to an unacceptable degradation of the nominal model performance on the perturbed system.
To cope with the mismatch, the model is augmented with an additive correction term trained on fresh data from the new dynamic regime.
arXiv Detail & Related papers (2022-01-21T12:04:17Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d)
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - R-FORCE: Robust Learning for Random Recurrent Neural Networks [6.285241353736006]
We propose a robust training method to enhance the robustness of RRNNs.
The FORCE learning approach was shown to be applicable even to the challenging task of target-learning.
Our experiments indicate that R-FORCE facilitates significantly more stable and accurate target-learning for a wide class of RRNN.
arXiv Detail & Related papers (2020-03-25T22:08:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.