Constraints on parameter choices for successful reservoir computing
- URL: http://arxiv.org/abs/2206.02575v1
- Date: Fri, 3 Jun 2022 12:10:48 GMT
- Title: Constraints on parameter choices for successful reservoir computing
- Authors: L. Storm, K. Gustavsson, B. Mehlig
- Abstract summary: We study what other conditions are necessary for successful time-series prediction.
We identify two key parameters for prediction performance, and conduct a parameter sweep to find regions where prediction is successful.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Echo-state networks are simple models of discrete dynamical systems driven by
a time series. By selecting network parameters such that the dynamics of the
network is contractive, characterized by a negative maximal Lyapunov exponent,
the network may synchronize with the driving signal. Exploiting this
synchronization, the echo-state network may be trained to autonomously
reproduce the input dynamics, enabling time-series prediction. However, while
synchronization is a necessary condition for prediction, it is not sufficient.
Here, we study what other conditions are necessary for successful time-series
prediction. We identify two key parameters for prediction performance, and
conduct a parameter sweep to find regions where prediction is successful. These
regions differ significantly depending on whether full or partial phase space
information about the input is provided to the network during training. We
explain how these regions emerge.
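As a concrete illustration of the pipeline the abstract describes, here is a minimal echo-state-network sketch in Python/NumPy: the reservoir is made contractive by scaling its spectral radius below one (a proxy for a negative maximal Lyapunov exponent), the readout is fit by ridge regression, and prediction runs the trained network in closed loop. All hyperparameters (spectral radius rho, input scaling sigma, ridge penalty beta) and the sine-wave driving signal are illustrative assumptions; the two key parameters the paper identifies are defined in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir and input dimensions (illustrative choices).
N, d = 300, 1
rho, sigma, beta = 0.8, 0.5, 1e-6  # spectral radius, input scaling, ridge penalty

# Random reservoir, rescaled so its spectral radius is rho < 1 (contractive regime).
W = rng.normal(size=(N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_in = sigma * rng.normal(size=(N, d))

def drive(u):
    """Run the driven reservoir x_{t+1} = tanh(W x_t + W_in u_t); return all states."""
    x = np.zeros(N)
    X = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        X[t] = x
    return X

# Training signal: a simple sine wave stands in for the chaotic input here.
u = np.sin(0.1 * np.arange(2000))
X = drive(u)

# Ridge-regression readout mapping the state x_t to the next input u_{t+1}.
X_fit, y_fit = X[200:-1], u[201:]  # discard a transient, predict one step ahead
W_out = np.linalg.solve(X_fit.T @ X_fit + beta * np.eye(N), X_fit.T @ y_fit)

# Closed-loop (autonomous) prediction: the network is fed its own output.
x, u_hat = X[-1], []
for _ in range(200):
    u_next = W_out @ x
    u_hat.append(u_next)
    x = np.tanh(W @ x + W_in @ np.atleast_1d(u_next))
print(np.asarray(u_hat)[:5])
```

In this sketch, synchronization with the driving signal is what makes the state x a usable regression feature; the closed-loop stage is where prediction can still fail even when the dynamics are contractive, which is the regime the paper's parameter sweep maps out.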
Related papers
- Neural Conformal Control for Time Series Forecasting [54.96087475179419]
We introduce a neural network conformal prediction method for time series that enhances adaptivity in non-stationary environments.
Our approach acts as a neural controller designed to achieve desired target coverage, leveraging auxiliary multi-view data with neural network encoders.
We empirically demonstrate significant improvements in coverage and probabilistic accuracy, and find that our method is the only one that combines good calibration with consistency in prediction intervals.
arXiv Detail & Related papers (2024-12-24T03:56:25Z)
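The summary above does not spell out the neural controller; as a point of reference, the following is a minimal sketch of classical adaptive conformal inference (Gibbs and Candès, 2021), which pursues a target coverage by adjusting the miscoverage level online. It is a generic stand-in, not the authors' method; the calibration window and step size gamma are illustrative.

```python
import numpy as np

def adaptive_conformal(y, y_pred, scores_window, alpha=0.1, gamma=0.01):
    """Online adaptive conformal intervals for a 1-D time series.

    y, y_pred: aligned arrays of observations and point forecasts.
    scores_window: initial calibration scores |y - y_pred| from held-out data.
    Returns per-step interval bounds and the realized coverage.
    """
    alpha_t = alpha
    calib = list(scores_window)
    lo, hi, covered = [], [], []
    for obs, pred in zip(y, y_pred):
        # Quantile of past absolute residuals at the current miscoverage level.
        q = np.quantile(calib, min(max(1 - alpha_t, 0.0), 1.0))
        lo.append(pred - q); hi.append(pred + q)
        err = float(not (lo[-1] <= obs <= hi[-1]))  # 1 if the interval missed
        covered.append(1 - err)
        # ACI update: widen after misses, shrink after hits.
        alpha_t += gamma * (alpha - err)
        calib.append(abs(obs - pred))
    return np.array(lo), np.array(hi), np.mean(covered)
```

A typical call seeds scores_window with residuals from a calibration segment and then streams the test period through the loop.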
- Temporal Convolution Derived Multi-Layered Reservoir Computing [5.261277318790788]
We propose a new mapping of input data into the reservoir's state space.
We incorporate this method in two novel network architectures increasing parallelizability, depth and predictive capabilities of the neural network.
For the chaotic time series, we observe an error reduction of up to 85.45% compared to Echo State Networks and 90.72% compared to Gated Recurrent Units.
arXiv Detail & Related papers (2024-07-09T11:40:46Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
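To make the low-rank idea in the entry above concrete, here is a generic sketch (not the authors' CfC architecture) of a vanilla RNN step whose recurrent matrix is parameterized as a rank-r product W = U Vᵀ, cutting parameters from N² to 2Nr and the per-step cost from O(N²) to O(Nr). All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, r = 128, 8, 4  # hidden size, input size, rank (illustrative)

# Full-rank recurrence needs N*N parameters; rank-r needs only 2*N*r.
U = rng.normal(scale=N ** -0.5, size=(N, r))
V = rng.normal(scale=N ** -0.5, size=(N, r))
W_in = rng.normal(scale=d ** -0.5, size=(N, d))

def step(h, u):
    """One vanilla-RNN step with low-rank recurrence W = U V^T."""
    return np.tanh(U @ (V.T @ h) + W_in @ u)  # O(N r) instead of O(N^2)

h = np.zeros(N)
for t in range(100):
    h = step(h, rng.normal(size=d))
print(f"parameters: full-rank {N*N}, rank-{r} {2*N*r}")
```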
- Probabilistic Verification of ReLU Neural Networks via Characteristic Functions [11.489187712465325]
We use ideas from probability theory in the frequency domain to provide probabilistic verification guarantees for ReLU neural networks.
We interpret a (deep) feedforward neural network as a discrete dynamical system over a finite horizon.
We obtain the corresponding cumulative distribution function of the output set, which can be used to check if the network is performing as expected.
arXiv Detail & Related papers (2022-12-03T05:53:57Z)
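The paper above works in the frequency domain with characteristic functions; as a simpler, sampling-based stand-in for the same question, the sketch below estimates the output CDF of a small ReLU network by Monte Carlo and checks a probabilistic specification against it. The network weights, input distribution, and spec are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small fixed ReLU network y = W2 relu(W1 x + b1) + b2 (weights are illustrative).
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

# Push an input distribution (standard Gaussian here) through the network.
X = rng.normal(size=(100_000, 4))
Y = np.maximum(X @ W1.T + b1, 0.0) @ W2.T + b2   # shape (100_000, 1)

# The empirical CDF of the output turns the spec "P(y <= c) >= p" into a direct check.
c, p = 5.0, 0.9
prob = np.mean(Y[:, 0] <= c)
print(f"P(y <= {c}) ~ {prob:.3f}; spec satisfied: {prob >= p}")
```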
- Time-to-Green predictions for fully-actuated signal control systems with supervised learning [56.66331540599836]
This paper proposes a time series prediction framework using aggregated traffic signal and loop detector data.
We utilize state-of-the-art machine learning models to predict future signal phases' duration.
Results based on an empirical data set from a fully-actuated signal control system in Zurich, Switzerland, show that machine learning models outperform conventional prediction methods.
arXiv Detail & Related papers (2022-08-24T07:50:43Z)
- Anticipating synchronization with machine learning [1.0958014189747356]
In applications of dynamical systems, it is often desirable to predict the onset of synchronization.
We develop a prediction framework that is model free and fully data driven.
We demonstrate the machine-learning based framework using representative chaotic models and small network systems.
arXiv Detail & Related papers (2021-03-13T03:51:48Z)
- Synergetic Learning of Heterogeneous Temporal Sequences for Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine the advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte-Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the phase space's properties.
Our approach is either as competitive as or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
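The entry above learns phase-space properties implicitly; the classical explicit construction it relates to is Takens-style delay embedding. A minimal sketch follows, where the embedding dimension m and delay tau are illustrative choices one would normally tune.

```python
import numpy as np

def delay_embed(x, m=3, tau=10):
    """Takens delay embedding: map a scalar series x to vectors
    (x[t], x[t - tau], ..., x[t - (m - 1) tau])."""
    x = np.asarray(x)
    T = len(x) - (m - 1) * tau
    return np.stack(
        [x[(m - 1 - k) * tau : (m - 1 - k) * tau + T] for k in range(m)],
        axis=1,
    )

x = np.sin(0.05 * np.arange(1000))   # stand-in scalar observable
X = delay_embed(x, m=3, tau=10)
print(X.shape)                       # (980, 3): 980 reconstructed phase-space points
```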
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
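For the liquid time-constant idea, the sketch below Euler-integrates the LTC state equation dx/dt = -(1/τ + f(x, u)) x + f(x, u) A, in which the nonlinearity f makes the effective time constant input-dependent while keeping the state bounded. This is a toy single-layer version with illustrative weights, not the full architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 32, 2
tau, A = 1.0, 1.0  # base time constant and bias level (scalars here for simplicity)
W = rng.normal(size=(N, N)) / np.sqrt(N)
W_in, b = rng.normal(size=(N, d)), rng.normal(size=N)

def f(x, u):
    """Input- and state-dependent conductance: a sigmoid-bounded nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(W @ x + W_in @ u + b)))

def ltc_step(x, u, dt=0.05):
    """Euler step of dx/dt = -(1/tau + f) * x + f * A.
    Since f is in (0, 1), the effective time constant varies with the input
    and the fixed points lie between 0 and A, keeping the state bounded."""
    ft = f(x, u)
    return x + dt * (-(1.0 / tau + ft) * x + ft * A)

x = np.zeros(N)
for t in range(500):
    x = ltc_step(x, rng.normal(size=d))
print(x.min(), x.max())  # state remains bounded under random driving
```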
- Link Prediction for Temporally Consistent Networks [6.981204218036187]
Link prediction estimates the next relationship in dynamic networks.
The use of an adjacency matrix to represent dynamically evolving networks limits the ability to learn analytically from heterogeneous, sparse, or forming networks.
We propose a new method of canonically representing heterogeneous time-evolving activities as a temporally parameterized network model.
arXiv Detail & Related papers (2020-06-06T07:28:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.