Evolutionary Echo State Network: evolving reservoirs in the Fourier
space
- URL: http://arxiv.org/abs/2206.04951v1
- Date: Fri, 10 Jun 2022 08:59:40 GMT
- Title: Evolutionary Echo State Network: evolving reservoirs in the Fourier
space
- Authors: Sebastian Basterrech, Gerardo Rubino
- Abstract summary: The Echo State Network (ESN) is a class of Recurrent Neural Network with a large number of hidden-hidden weights (in the so-called reservoir).
We propose a new computational model of the ESN type that represents the reservoir weights in the Fourier space and fine-tunes these weights by applying genetic algorithms in the frequency domain.
- Score: 1.7658686315825685
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Echo State Network (ESN) is a class of Recurrent Neural Network with a
large number of hidden-hidden weights (in the so-called reservoir). Canonical
ESN and its variations have recently received significant attention due to
their remarkable success in the modeling of non-linear dynamical systems. The
reservoir is randomly connected with fixed weights that do not change during
the learning process; only the weights from the reservoir to the output are
trained. Since the reservoir is fixed during training, we may wonder whether the
computational power of the recurrent structure is fully harnessed. In this
article, we propose a new computational model of the ESN type that represents
the reservoir weights in the Fourier space and fine-tunes these weights by
applying genetic algorithms in the frequency domain. The main interest
is that this procedure operates in a much smaller space compared to the
classical ESN, thus providing a dimensionality reduction transformation of the
initial method. The proposed technique allows us to exploit the benefits of the
large recurrent structure while avoiding the training problems of gradient-based
methods. We provide a detailed experimental study that demonstrates the good
performance of our approach on well-known chaotic systems and real-world
data.
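To make the idea concrete, here is a minimal NumPy sketch of the pipeline the abstract describes: a small vector of Fourier coefficients is decoded into a full reservoir by an inverse FFT, only the linear readout is fitted (by ridge regression), and a simple genetic loop evolves the coefficients. The 1-D inverse-FFT decoding, the reservoir size N, the coefficient count K, the (mu + lambda)-style loop, and the toy sine-wave task are all illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 16                        # reservoir size, number of Fourier coefficients (assumed)
W_IN = 0.5 * rng.standard_normal(N)   # fixed random input weights, never trained

def decode_reservoir(coeffs):
    # Embed the K complex coefficients in an N*N spectrum, invert, reshape.
    spec = np.zeros(N * N, dtype=complex)
    spec[:K] = coeffs
    W = np.real(np.fft.ifft(spec)).reshape(N, N)
    rho = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (0.9 / rho) if rho > 1e-12 else W  # keep the echo state property

def run_reservoir(W, u):
    x, states = np.zeros(N), np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_IN * u_t)
        states[t] = x
    return states

def fitness(coeffs, u, y, ridge=1e-6):
    # Ridge-regression readout: the only weights that are ever fitted.
    X = run_reservoir(decode_reservoir(coeffs), u)
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
    return np.mean((X @ w_out - y) ** 2)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])

# Minimal (mu + lambda)-style genetic loop over the K coefficients only.
pop = [rng.standard_normal(K) + 1j * rng.standard_normal(K) for _ in range(20)]
for _ in range(30):
    pop.sort(key=lambda c: fitness(c, u, y))        # lower training MSE is fitter
    parents = pop[:5]
    children = [p + 0.1 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
                for p in parents for _ in range(3)]
    pop = parents + children
print("best MSE:", fitness(pop[0], u, y))
```

Because only the K = 16 complex coefficients are evolved while the reservoir holds N * N = 10000 weights, the search runs in a space orders of magnitude smaller than the reservoir itself, which is the dimensionality reduction the abstract refers to.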
Related papers
- Deep Recurrent Stochastic Configuration Networks for Modelling Nonlinear Dynamic Systems [3.8719670789415925]
This paper proposes a novel deep reservoir computing framework, termed the deep recurrent stochastic configuration network (DeepRSCN).
DeepRSCNs are incrementally constructed, with all reservoir nodes directly linked to the final output.
Given a set of training samples, DeepRSCNs can quickly generate learning representations, which consist of random basis functions with cascaded input readout weights.
arXiv Detail & Related papers (2024-10-28T10:33:15Z)
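As a rough illustration of the incremental construction described in this entry, the sketch below adds one random recurrent node at a time, links every node's state sequence directly to the output, and refits the readout by least squares after each addition. The cascade wiring, the stopping rule, and the grow_reservoir helper are assumptions for illustration; the paper's stochastic-configuration acceptance conditions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_reservoir(u, y, max_nodes=50, tol=1e-4, ridge=1e-8):
    # Add one random recurrent node at a time; every node's state sequence
    # is linked directly to the readout, which is refitted after each addition.
    T = len(u)
    S = np.empty((T, 0))                 # states of all nodes, fed to the output
    err = np.inf
    while S.shape[1] < max_nodes and err > tol:
        k = S.shape[1]
        w_in, w_self = rng.standard_normal(), rng.standard_normal()
        w_casc = rng.standard_normal(k)  # cascaded input from earlier nodes
        x, seq = 0.0, np.empty(T)
        for t in range(T):
            casc = S[t] @ w_casc if k else 0.0
            x = np.tanh(w_in * u[t] + casc + w_self * x)
            seq[t] = x
        S = np.hstack([S, seq[:, None]])
        w_out = np.linalg.solve(S.T @ S + ridge * np.eye(k + 1), S.T @ y)
        err = np.mean((S @ w_out - y) ** 2)
    return S, w_out, err

t = np.linspace(0, 6 * np.pi, 300)
S, w_out, err = grow_reservoir(np.sin(t[:-1]), np.sin(t[1:]))
print(S.shape[1], "nodes, train MSE", err)
```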
- Universal Neural Functionals [67.80283995795985]
A challenging problem in many modern machine learning tasks is to process weight-space features.
Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks.
This work proposes an algorithm that automatically constructs permutation equivariant models for any weight space.
arXiv Detail & Related papers (2024-02-07T20:12:27Z)
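The permutation symmetry these models are built around is easy to verify directly: permuting the hidden units of a feedforward network, together with the matching rows and columns of the adjacent weight matrices, leaves the computed function unchanged. A minimal NumPy check (the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)
x = rng.standard_normal(3)

def mlp(W1, b1, W2, b2, x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

P = np.eye(8)[rng.permutation(8)]   # random permutation of the 8 hidden units
print(np.allclose(mlp(W1, b1, W2, b2, x),
                  mlp(P @ W1, P @ b1, W2 @ P.T, b2, x)))  # True
```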
- Data Augmentations in Deep Weight Spaces [89.45272760013928]
We introduce a novel augmentation scheme based on the Mixup method.
We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate.
arXiv Detail & Related papers (2023-11-15T10:43:13Z)
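For reference, classic Mixup forms convex combinations of pairs of examples and their labels with a Beta-distributed coefficient. The sketch below shows only this generic primitive; how the paper aligns and mixes points that are themselves network weights is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def mixup(x1, y1, x2, y2, alpha=0.2):
    # Convex combination of two examples and their labels (classic Mixup).
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```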
- NUPES: Non-Uniform Post-Training Quantization via Power Exponent Search [7.971065005161565]
Quantization is a technique for converting floating-point representations to low bit-width fixed-point representations.
We show how to learn new quantized weights over the entire quantized space.
We show the ability of the method to achieve state-of-the-art compression rates in both data-free and data-driven configurations.
arXiv Detail & Related papers (2023-08-10T14:19:58Z)
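As a point of reference for this entry, the sketch below shows plain uniform fixed-point quantization, plus a hypothetical power-exponent decoding: the gamma parameter is an illustrative stand-in for the non-uniform levels NUPES searches over, and the paper's actual parameterization may differ.

```python
import numpy as np

def quantize_uniform(w, bits=4):
    # Baseline: uniform symmetric quantization to a signed fixed-point grid.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_power(q, scale, bits=4, gamma=1.0):
    # Hypothetical non-uniform decoding with an exponent gamma, gesturing at
    # the "power exponent" idea; gamma = 1 recovers the uniform grid.
    qmax = 2 ** (bits - 1) - 1
    return np.sign(q) * scale * qmax * (np.abs(q) / qmax) ** gamma
```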
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
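A minimal sketch of an SGLD epoch with without-replacement minibatching, on a linear least-squares loss: the data are shuffled once, each example is visited exactly once, and Gaussian noise scaled by the step size is injected into every update. The loss, step size, and temperature are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)

def sgld_epoch(w, X, y, lr=1e-3, temp=1e-4, batch=32):
    # One epoch on a linear least-squares loss. The permutation gives
    # without-replacement minibatching: each example is visited once.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        noise = np.sqrt(2 * lr * temp) * rng.standard_normal(w.shape)
        w = w - lr * grad + noise     # Langevin step: gradient + injected noise
    return w
```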
- Neural Functional Transformers [99.98750156515437]
This paper uses the attention mechanism to define a novel set of permutation equivariant weight-space layers called neural functional Transformers (NFTs).
NFTs respect weight-space permutation symmetries while incorporating the advantages of attention, which have exhibited remarkable success across multiple domains.
We also leverage NFTs to develop Inr2Array, a novel method for computing permutation invariant representations from the weights of implicit neural representations (INRs).
arXiv Detail & Related papers (2023-05-22T23:38:27Z)
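The attention primitive underlying NFTs is standard scaled dot-product attention, sketched below; the weight-space-specific, permutation-equivariant layers the paper builds from it are not reproduced here.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention with a row-wise softmax.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```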
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- FFNB: Forgetting-Free Neural Blocks for Deep Continual Visual Learning [14.924672048447338]
We devise a dynamic network architecture for continual learning based on a novel forgetting-free neural block (FFNB).
Training FFNB features on new tasks is achieved using a novel procedure that constrains the underlying parameters in the null-space of the previous tasks.
arXiv Detail & Related papers (2021-11-22T17:23:34Z)
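The null-space constraint mentioned in this entry can be sketched as follows: collect the feature directions spanned by previous tasks as the rows of a matrix A, build the orthogonal projector onto null(A) via an SVD, and project every parameter update through it so responses on old tasks are untouched. This generic projector, and the name null_space_projector, are illustrative; the full FFNB procedure is more involved.

```python
import numpy as np

def null_space_projector(A, tol=1e-10):
    # Rows of A span the feature directions used by previous tasks.
    # P projects onto null(A), so A @ (w + P @ dw) == A @ w for any dw.
    _, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol))
    V_null = Vt[rank:].T              # orthonormal basis of null(A)
    return V_null @ V_null.T

# Constrained update: move only inside the null space of old tasks.
# w_new = w - lr * null_space_projector(A) @ grad
```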
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
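One generic way to track a Hessian norm as a diagnostic is power iteration on finite-difference Hessian-vector products, sketched below. This is an illustrative stand-in (the grad_fn argument and the finite-difference scheme are assumptions), not the layer-wise estimation the paper derives.

```python
import numpy as np

def hessian_norm(grad_fn, w, iters=50, eps=1e-4):
    # Power iteration on finite-difference Hessian-vector products:
    # H @ v ~= (grad(w + eps*v) - grad(w - eps*v)) / (2*eps).
    v = np.random.default_rng(7).standard_normal(w.shape)
    v /= np.linalg.norm(v)
    nrm = 0.0
    for _ in range(iters):
        hv = (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)
        nrm = np.linalg.norm(hv)
        if nrm == 0:
            return 0.0
        v = hv / nrm
    return nrm

# Quadratic sanity check: the gradient of 0.5 * w.T @ H @ w is H @ w.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
print(hessian_norm(lambda w: H @ w, np.zeros(2)))  # ~3.618, the top eigenvalue
```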
- Approximation Bounds for Random Neural Networks and Reservoir Systems [8.143750358586072]
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights.
In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well.
arXiv Detail & Related papers (2020-02-14T09:43:28Z)
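The setting analyzed in this entry, in its simplest feedforward form, is a network whose hidden weights are drawn at random and frozen, with only the linear readout trained, as in the sketch below (the width, tanh activation, and ridge penalty are arbitrary choices). The paper's echo-state results concern the analogous construction with a recurrent random layer.

```python
import numpy as np

rng = np.random.default_rng(8)

def random_feature_fit(X, y, width=200, ridge=1e-6):
    # Hidden weights are random and frozen; only the readout is solved for.
    W, b = rng.standard_normal((width, X.shape[1])), rng.standard_normal(width)
    H = np.tanh(X @ W.T + b)
    w_out = np.linalg.solve(H.T @ H + ridge * np.eye(width), H.T @ y)
    return W, b, w_out

X = np.linspace(-2, 2, 200)[:, None]
W, b, w_out = random_feature_fit(X, np.sin(3 * X[:, 0]))
```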
This list is automatically generated from the titles and abstracts of the papers on this site.