Investigation of Proper Orthogonal Decomposition for Echo State Networks
- URL: http://arxiv.org/abs/2211.17179v3
- Date: Fri, 26 May 2023 17:41:25 GMT
- Title: Investigation of Proper Orthogonal Decomposition for Echo State Networks
- Authors: Jean Panaioti Jordanou, Eric Aislan Antonelo, Eduardo Camponogara,
Eduardo Gildin
- Abstract summary: Echo State Networks (ESN) are a type of Recurrent Neural Network that yields promising results in representing time series and nonlinear dynamic systems.
A large number of states not only makes the time-step computation more costly but also may pose robustness issues.
One way to circumvent this complexity issue is through Model Order Reduction strategies such as the Proper Orthogonal Decomposition (POD) and its variants (POD-DEIM).
- Score: 3.645570876484422
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Echo State Networks (ESN) are a type of Recurrent Neural Network that yields
promising results in representing time series and nonlinear dynamic systems.
Although they are equipped with a very efficient training procedure, Reservoir
Computing strategies, such as the ESN, require high-order networks, i.e., many
neurons, resulting in a number of states that is orders of magnitude larger than
the number of model inputs and outputs. A large number of states not only makes
the time-step computation more costly but also may pose robustness issues,
especially when applying ESNs to problems such as Model Predictive Control
(MPC) and other optimal control problems. One way to circumvent this complexity
issue is through Model Order Reduction strategies such as the Proper Orthogonal
Decomposition (POD) and its variants (POD-DEIM), whereby we find an equivalent
lower-order representation of an already trained high-dimensional ESN. To this
end, this work aims to investigate and analyze the performance of POD methods
in Echo State Networks, evaluating their effectiveness through the Memory
Capacity (MC) of the POD-reduced network compared to the original (full-order)
ESN. We also perform experiments on two numerical case studies: a NARMA10
difference equation and an oil platform containing two wells and one riser. The
results show that there is little loss of performance between the original ESN
and its POD-reduced counterpart, and that a POD-reduced ESN tends to outperform
a normal ESN of the same size. Also, the POD-reduced network achieves speedups
of around $80\%$ compared to the original ESN.
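To make the reduction pipeline concrete, here is a minimal sketch (not the authors' code) of projecting a leaky ESN onto a POD basis computed from reservoir-state snapshots. The reservoir here is random rather than trained, and the sizes, leak rate, and 99.99% energy threshold are illustrative assumptions; the Memory Capacity evaluation is not reproduced.

```python
# Minimal POD-reduction sketch for a leaky echo state network.
# All hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, n_in, alpha = 500, 1, 0.5          # reservoir size, inputs, leak rate

# A toy reservoir: random recurrent weights scaled to spectral radius 0.9.
W = rng.normal(size=(n, n)) / np.sqrt(n)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n, n_in))

def full_step(x, u):
    # Leaky-integrator ESN update at full order n.
    return (1 - alpha) * x + alpha * np.tanh(W @ x + W_in @ u)

# 1) Drive the full ESN and collect reservoir-state snapshots.
T = 2000
x = np.zeros(n)
snapshots = np.empty((n, T))
for k in range(T):
    u = rng.uniform(-1, 1, size=n_in)
    x = full_step(x, u)
    snapshots[:, k] = x

# 2) POD basis: leading left singular vectors of the snapshot matrix,
#    keeping enough modes to capture 99.99% of the snapshot energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
Phi = U[:, :r]                         # n x r orthonormal basis

# 3) Reduced dynamics in z = Phi.T @ x. Plain POD still evaluates tanh
#    at full order; POD-DEIM exists to cheapen exactly this term.
W_Phi = W @ Phi
def reduced_step(z, u):
    return (1 - alpha) * z + alpha * (Phi.T @ np.tanh(W_Phi @ z + W_in @ u))

z = np.zeros(r)
for k in range(200):
    u = rng.uniform(-1, 1, size=n_in)
    z = reduced_step(z, u)
print(f"kept r={r} of n={n} modes")
```

A trained readout on the full state, y = W_out x, carries over to the reduced model as y = (W_out Phi) z, so no retraining is needed after the projection.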
Related papers
- Recurrent Stochastic Configuration Networks for Temporal Data Analytics [3.8719670789415925]
This paper develops a recurrent version of stochastic configuration networks (RSCNs) for temporal data analytics.
We build an initial RSCN model in the light of a supervisory mechanism, followed by an online update of the output weights.
Numerical results clearly indicate that the proposed RSCN performs favourably across all of the datasets.
arXiv Detail & Related papers (2024-06-21T03:21:22Z)
- Parallel Spiking Unit for Efficient Training of Spiking Neural Networks [8.912926151352888]
Spiking Neural Networks (SNNs) are used to advance artificial intelligence.
SNNs are hampered by their inherent sequential computational dependency.
This paper introduces the innovative Parallel Spiking Unit (PSU) and its two derivatives, the Input-aware PSU (IPSU) and Reset-aware PSU (RPSU).
These variants skillfully decouple the leaky integration and firing mechanisms in spiking neurons while probabilistically managing the reset process.
arXiv Detail & Related papers (2024-02-01T09:36:26Z)
- Improving the Performance of Echo State Networks Through Feedback [0.0]
Reservoir computing, using nonlinear dynamical systems, offers a cost-effective alternative to neural networks.
A potential drawback of ESNs is that the fixed reservoir may not offer the complexity needed for specific problems.
In this paper, we demonstrate that by feeding some component of the reservoir state back into the network through the input, we can drastically improve upon the performance of a given ESN.
arXiv Detail & Related papers (2023-12-23T02:34:50Z)
- Regularization of polynomial networks for image recognition [78.4786845859205]
Polynomial Networks (PNs) have emerged as an alternative method with promising performance and improved interpretability.
We introduce a class of PNs, which are able to reach the performance of ResNet across a range of six benchmarks.
arXiv Detail & Related papers (2023-03-24T10:05:22Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs often suffer training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Inducing Early Neural Collapse in Deep Neural Networks for Improved Out-of-Distribution Detection [0.9558392439655015]
We propose a simple modification to standard ResNet architectures (L2 regularization over the feature space) that substantially improves out-of-distribution (OoD) performance.
This change also induces early Neural Collapse (NC), which we show is an effect under which better OoD performance is more probable.
arXiv Detail & Related papers (2022-09-17T17:46:06Z)
- Low-bit Shift Network for End-to-End Spoken Language Understanding [7.851607739211987]
We propose the use of power-of-two quantization, which quantizes continuous parameters into low-bit power-of-two values.
This reduces computational complexity by replacing expensive multiplication operations with bit shifts over low-bit weights (a minimal sketch follows this list).
arXiv Detail & Related papers (2022-07-15T14:34:22Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
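As referenced from the Low-bit Shift Network entry above, here is a minimal sketch of power-of-two weight quantization. The 4-bit exponent budget and clipping rule are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal power-of-two quantization sketch; parameters are assumptions.
import numpy as np

def quantize_pow2(w, n_bits=4):
    """Round each weight to the nearest signed power of two.

    Storing only a sign and a small integer exponent lets a multiply
    by w be realized as a bit shift. The exponent range implied by
    n_bits here is an illustrative choice.
    """
    sign = np.sign(w)
    mag = np.abs(w)
    e = np.round(np.log2(np.where(mag > 0, mag, 1.0)))   # nearest exponent
    e_max = e.max()
    e = np.clip(e, e_max - 2**(n_bits - 1) + 1, e_max)   # low-bit budget
    return np.where(mag > 0, sign * 2.0**e, 0.0)         # zeros stay zero

w = np.random.default_rng(0).normal(scale=0.1, size=6)
print(np.round(w, 4))
print(quantize_pow2(w))
```

Because every quantized weight is +/- 2^e, a dot product with such weights needs only shifts and additions, which is the source of the complexity reduction the summary describes.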
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.