Interpretable Design of Reservoir Computing Networks using Realization
Theory
- URL: http://arxiv.org/abs/2112.06891v1
- Date: Mon, 13 Dec 2021 18:49:29 GMT
- Title: Interpretable Design of Reservoir Computing Networks using Realization
Theory
- Authors: Wei Miao, Vignesh Narayanan, Jr-Shin Li
- Abstract summary: Reservoir computing networks (RCNs) have been successfully employed as a tool in learning and complex decision-making tasks.
We develop an algorithm to design RCNs using the realization theory of linear dynamical systems.
- Score: 5.607676459156789
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reservoir computing networks (RCNs) have been successfully employed as a
tool in learning and complex decision-making tasks. Despite their efficiency
and low training cost, practical applications of RCNs rely heavily on empirical
design. In this paper, we develop an algorithm to design RCNs using the
realization theory of linear dynamical systems. In particular, we introduce the
notion of $\alpha$-stable realization, and provide an efficient approach to
prune the size of a linear RCN without deteriorating the training accuracy.
Furthermore, we derive a necessary and sufficient condition for the
irreducibility of the number of hidden nodes in linear RCNs based on the concepts
of controllability and observability matrices. Leveraging the linear RCN
design, we provide a tractable procedure to realize RCNs with nonlinear
activation functions. Finally, we present numerical experiments on forecasting
time-delay systems and chaotic systems to validate the proposed RCN design
methods and demonstrate their efficacy.
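The controllability/observability criterion mentioned above aligns with the classical realization-theory fact that a linear state-space model $(A, B, C)$ with $n$ states is minimal exactly when its controllability matrix $[B, AB, \dots, A^{n-1}B]$ and observability matrix $[C; CA; \dots; CA^{n-1}]$ both have rank $n$. The sketch below is a hedged illustration of that idea for a linear RCN $x_{t+1} = A x_t + B u_t$, $y_t = C x_t$, not the paper's algorithm: it checks the two ranks and, when they are deficient, projects the reservoir onto its reachable and observable subspaces without changing the input/output map. All matrices, sizes, and tolerances are illustrative assumptions.

```python
# A minimal sketch (illustrative, not the paper's algorithm): rank tests on the
# controllability and observability matrices of a linear RCN, plus a projection
# that removes unreachable/unobservable hidden nodes while preserving the
# Markov parameters C A^k B. All matrices and sizes are arbitrary examples.
import numpy as np


def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)


def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)


def restrict(A, B, C, subspace, tol=1e-10):
    """Restrict (A, B, C) to an orthonormal basis of the column space of `subspace`."""
    U, s, _ = np.linalg.svd(subspace, full_matrices=False)
    V = U[:, s > tol * s[0]]                 # orthonormal basis of the subspace
    return V.T @ A @ V, V.T @ B, C @ V


def prune_linear_rcn(A, B, C):
    """Keep only states that are both reachable and observable (same input/output map)."""
    A, B, C = restrict(A, B, C, ctrb(A, B))                # drop unreachable states
    Ad, Bd, Cd = restrict(A.T, C.T, B.T, ctrb(A.T, C.T))   # drop unobservable states via the dual system
    return Ad.T, Cd.T, Bd.T


if __name__ == "__main__":
    # A 10-node linear reservoir whose input/output behaviour only uses 3 states.
    n = 10
    A = np.zeros((n, n))
    A[:3, :3] = [[0.5, 0.1, 0.0], [0.0, 0.4, 0.2], [0.0, 0.0, 0.3]]
    A[3:, 3:] = 0.2 * np.eye(n - 3)          # states never excited by the input nor read out
    B = np.zeros((n, 1)); B[:3, 0] = [1.0, 0.5, -0.2]
    C = np.zeros((1, n)); C[0, :3] = [1.0, -1.0, 0.5]

    rc = np.linalg.matrix_rank(ctrb(A, B))
    ro = np.linalg.matrix_rank(obsv(A, C))
    print(f"controllability rank {rc}, observability rank {ro}, hidden nodes {n}")  # 3, 3, 10

    Ar, Br, Cr = prune_linear_rcn(A, B, C)
    print("pruned reservoir size:", Ar.shape[0])             # 3
    # Sanity check: the first few Markov parameters C A^k B are unchanged.
    for k in range(4):
        assert np.allclose(C @ np.linalg.matrix_power(A, k) @ B,
                           Cr @ np.linalg.matrix_power(Ar, k) @ Br)
```

Because the pruned reservoir reproduces the original Markov parameters, any linear readout trained on the reduced states can match what the larger reservoir could fit; this is the sense in which such a reduction need not deteriorate training accuracy.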
Related papers
- Recurrent Stochastic Configuration Networks with Incremental Blocks [0.0]
Recurrent stochastic configuration networks (RSCNs) have shown promise in modelling nonlinear dynamic systems with order uncertainty.
This paper extends the original RSCNs with block increments, termed block RSCNs (BRSCNs).
BRSCNs can simultaneously add multiple reservoir nodes (subreservoirs) during construction.
arXiv Detail & Related papers (2024-11-18T05:58:47Z) - Deeper Insights into Learning Performance of Stochastic Configuration Networks [3.8719670789415925]
Stochastic Configuration Networks (SCNs) are a class of randomized neural networks that integrate randomized algorithms within an incremental learning framework.
We present a comprehensive analysis of the impact of the supervisory mechanism on the learning performance of SCNs.
We propose a novel method for evaluating the hidden layer's output matrix, supported by a new supervisory mechanism.
arXiv Detail & Related papers (2024-11-13T11:45:39Z) - Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective [125.00228936051657]
We introduce NTK-CL, a novel framework that eliminates task-specific parameter storage while adaptively generating task-relevant features.
By fine-tuning optimizable parameters with appropriate regularization, NTK-CL achieves state-of-the-art performance on established PEFT-CL benchmarks.
arXiv Detail & Related papers (2024-07-24T09:30:04Z) - Universal Approximation of Linear Time-Invariant (LTI) Systems through RNNs: Power of Randomness in Reservoir Computing [19.995241682744567]
Reservoir computing (RC) is a special RNN where the recurrent weights are randomized and left untrained.
We show that RC can universally approximate a general linear time-invariant (LTI) system; a generic echo-state-style sketch in this spirit appears after this list.
arXiv Detail & Related papers (2023-08-04T17:04:13Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Orthogonal Stochastic Configuration Networks with Adaptive Construction
Parameter for Data Analytics [6.940097162264939]
Randomness makes SCNs more likely to generate approximately linearly correlated nodes that are redundant and of low quality.
In light of a fundamental principle in machine learning, namely that a model with fewer parameters tends to generalize better,
this paper proposes orthogonal SCNs, termed OSCN, to filter out low-quality hidden nodes and reduce the network structure.
arXiv Detail & Related papers (2022-05-26T07:07:26Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z) - Nonlinear MPC for Offset-Free Tracking of systems learned by GRU Neural
Networks [0.2578242050187029]
This paper describes how stable Gated Recurrent Units (GRUs) can be trained and employed in an MPC framework to perform offset-free tracking of constant references with guaranteed closed-loop stability.
The proposed approach is tested on a pH neutralization process benchmark, showing remarkable performance.
arXiv Detail & Related papers (2021-03-03T13:14:33Z) - Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z)
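As referenced in the Universal Approximation of LTI Systems entry above, the following is a generic echo-state-style sketch, not taken from any of the listed papers: the recurrent weights are drawn at random, rescaled, and left untrained, and only a linear readout is fit by ridge regression to forecast a simple LTI target one step ahead. The reservoir size, the 0.9 spectral-radius rescaling, and the ridge penalty are arbitrary illustrative choices.

```python
# A generic echo-state-style sketch (illustrative; not taken from any listed paper):
# fixed random recurrent weights, only the linear readout is trained, fit by ridge
# regression to forecast a simple LTI target one step ahead.
import numpy as np

rng = np.random.default_rng(1)
n_res, T = 100, 2000

# Fixed, untrained reservoir; rescale so the spectral radius is below 1
# (a common echo-state-property heuristic).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, 1))

# Target: a scalar LTI system y_{t+1} = 0.8 y_t + 0.2 u_t driven by white noise.
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = 0.8 * y[t] + 0.2 * u[t]

# Collect reservoir states (linear activation here, matching the linear-RCN setting;
# wrap the update in np.tanh(...) for the usual nonlinear reservoir).
x = np.zeros(n_res)
X = np.zeros((T, n_res))
for t in range(T):
    x = W @ x + W_in[:, 0] * u[t]
    X[t] = x

# Train only the readout: one-step-ahead prediction of y by ridge regression.
lam = 1e-6
Phi, target = X[:-1], y[1:]
W_out = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_res), Phi.T @ target)
print("one-step forecast MSE:", np.mean((Phi @ W_out - target) ** 2))
```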