Limitations of the recall capabilities in delay based reservoir
computing systems
- URL: http://arxiv.org/abs/2010.15562v1
- Date: Wed, 16 Sep 2020 13:54:39 GMT
- Title: Limitations of the recall capabilities in delay based reservoir
computing systems
- Authors: Felix Köster, Dominik Ehlert, Kathy Lüdge
- Abstract summary: We analyze the memory capacity of a delay based reservoir computer with a Hopf normal form as nonlinearity.
A possible physical realisation could be a laser with external cavity, for which the information is fed via electrical injection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyze the memory capacity of a delay based reservoir computer with a
Hopf normal form as nonlinearity and numerically compute the linear as well as
the higher order recall capabilities. A possible physical realisation could be
a laser with external cavity, for which the information is fed via electrical
injection. A task-independent quantification of the computational capability of
the reservoir system is done via a complete orthonormal set of basis functions.
Our results suggest that even for constant readout dimension the total memory
capacity is dependent on the ratio between the information input period, also
called the clock cycle, and the time delay in the system. Optimal performance
is found for a time delay of about 1.6 times the clock cycle.
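As a rough illustration of the setup described in the abstract, the sketch below simulates a single Hopf normal form node with delayed feedback and a time-multiplexed (masked) input, then estimates the linear memory capacity from a ridge-regression readout. All parameter values, the exact feedback and injection terms, and the capacity estimator are simplifying assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed parameters, simplified dynamics) of a delay-based
# reservoir computer with a Hopf normal form nonlinearity, plus a linear
# memory-capacity estimate. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

N_virt = 50                      # virtual nodes (readout dimension)
theta = 0.2                      # virtual node separation
T_clock = N_virt * theta         # information input period ("clock cycle")
tau = 1.6 * T_clock              # delay near the reported optimum tau ~ 1.6 T
dt = 0.01                        # Euler step
lam, omega = -0.15, 1.0          # Hopf parameter and frequency (assumed)
kappa, eta = 0.1, 0.05           # feedback and input strengths (assumed)
mask = rng.uniform(-1, 1, N_virt)

def run_reservoir(u):
    """Euler-integrate dz/dt = (lam + i*omega) z - |z|^2 z + kappa*z(t - tau) + eta*J(t)
    and sample one reservoir state per virtual node."""
    steps = int(round(theta / dt))
    buf = np.zeros(int(round(tau / dt)), dtype=complex)   # ring buffer for z(t - tau)
    z, ptr = 0j, 0
    states = np.zeros((len(u), N_virt))
    for k, uk in enumerate(u):
        for n in range(N_virt):
            J = mask[n] * uk                               # time-multiplexed input
            for _ in range(steps):
                z_tau = buf[ptr]
                z = z + dt * ((lam + 1j * omega) * z - abs(z) ** 2 * z
                              + kappa * z_tau + eta * J)
                buf[ptr] = z
                ptr = (ptr + 1) % len(buf)
            states[k, n] = z.real
    return states

# Linear memory capacity: sum over delays d of the squared correlation between
# the ridge-regression recall of u(k - d) and the true delayed input.
u = rng.uniform(-1, 1, 2000)
X = np.hstack([run_reservoir(u), np.ones((len(u), 1))])
warm, mc = 200, 0.0
for d in range(1, 40):
    y = np.roll(u, d)
    A, b = X[warm:], y[warm:]
    w = np.linalg.solve(A.T @ A + 1e-8 * np.eye(A.shape[1]), A.T @ b)
    mc += np.corrcoef(A @ w, b)[0, 1] ** 2
print(f"approximate linear memory capacity: {mc:.2f}")
```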
Related papers
- Distributed Stochastic Gradient Descent with Staleness: A Stochastic Delay Differential Equation Based Framework [56.82432591933544]
Distributed stochastic gradient descent (SGD) has attracted considerable recent attention due to its potential for scaling computational resources, reducing training time, and helping protect user privacy in machine learning.
This paper presents the run time and staleness of distributed SGD based on stochastic delay differential equations (SDDEs) and the approximation of gradient arrivals.
It is interestingly shown that increasing the number of activated workers does not necessarily accelerate distributed SGD due to staleness.
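The staleness effect noted above can be illustrated with a deliberately simple toy experiment (this is not the paper's SDDE framework): delaying the gradients used by SGD, as happens when many asynchronous workers return stale updates, slows or even destabilizes convergence on a plain quadratic objective.

```python
# Toy illustration of gradient staleness (not the paper's SDDE analysis):
# minimize f(x) = 0.5 * x**2 with gradients that are `staleness` steps old.
def stale_sgd(staleness, lr=0.05, steps=300, x0=5.0):
    history = [x0] * (staleness + 1)   # past iterates, oldest first
    x = x0
    for _ in range(steps):
        g = history[0]                 # grad f at the stale iterate: f'(x) = x
        x -= lr * g
        history = history[1:] + [x]
    return abs(x)

for s in (0, 5, 20, 50):               # larger staleness mimics more, slower workers
    print(f"staleness {s:2d}: |x| after 300 steps = {stale_sgd(s):.3e}")
```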
arXiv Detail & Related papers (2024-06-17T02:56:55Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AI-generated content (AIGC) methods, such as score-based diffusion, are still deficient in terms of speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed biophysical cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
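As a loose, hedged sketch of the general idea behind a leaky memory neuron (this is not the authors' ELM architecture), the snippet below keeps several internal memory traces with different decay timescales and mixes them nonlinearly into a scalar output; all sizes and nonlinearities are illustrative assumptions.

```python
# Hedged sketch of a multi-timescale leaky memory unit; the architecture and
# parameters are illustrative assumptions, not the ELM neuron itself.
import numpy as np

class LeakyMemoryUnit:
    def __init__(self, n_mem=8, n_in=4, seed=0):
        rng = np.random.default_rng(seed)
        self.decay = np.exp(-1.0 / np.logspace(0, 2, n_mem))  # timescales ~1..100 steps
        self.w_in = rng.normal(0.0, 0.5, (n_mem, n_in))
        self.w_out = rng.normal(0.0, 0.5, n_mem)
        self.m = np.zeros(n_mem)                               # memory traces

    def step(self, x):
        drive = np.tanh(self.w_in @ x)                         # synaptic integration
        self.m = self.decay * self.m + (1.0 - self.decay) * drive
        return float(np.tanh(self.w_out @ self.m))             # nonlinear readout

unit = LeakyMemoryUnit()
stream = np.random.default_rng(1).normal(size=(50, 4))
outputs = [unit.step(x) for x in stream]
```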
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Optimization of a Hydrodynamic Computational Reservoir through Evolution [58.720142291102135]
We interface with a model of a hydrodynamic system, under development by a startup, as a computational reservoir.
We optimized the readout times and how inputs are mapped to the wave amplitude or frequency using an evolutionary search algorithm.
Applying evolutionary methods to this reservoir system substantially improved separability on an XNOR task, in comparison to implementations with hand-selected parameters.
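A generic (1+1) evolutionary search of the kind referred to above can be sketched as follows; the fitness function here is a synthetic placeholder standing in for the hydrodynamic simulator's XNOR separability score, and the parameter names are assumptions.

```python
# Generic (1+1) evolutionary search over two reservoir hyperparameters.
# The fitness function is a synthetic placeholder, not the hydrodynamic model.
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Placeholder for 'separability as a function of readout time and input
    amplitude': a smooth synthetic surface with a single optimum."""
    readout_time, amplitude = params
    return -((readout_time - 0.7) ** 2 + (amplitude - 1.3) ** 2)

parent = np.array([0.1, 0.5])                        # hand-selected starting point
best = fitness(parent)
for generation in range(200):
    child = parent + rng.normal(0.0, 0.05, size=2)   # Gaussian mutation
    score = fitness(child)
    if score > best:                                 # greedy (1+1) selection
        parent, best = child, score
print("evolved parameters:", np.round(parent, 3), "fitness:", round(best, 4))
```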
arXiv Detail & Related papers (2023-04-20T19:15:02Z)
- Hitless memory-reconfigurable photonic reservoir computing architecture [1.4479776639062198]
Reservoir computing is an analog bio-inspired computation model for efficiently processing time-dependent signals.
We propose a novel time-delay reservoir computing (TDRC) architecture based on an asymmetric Mach-Zehnder interferometer integrated in a resonant cavity.
We demonstrate this approach on the temporal bitwise XOR task and conclude that this way of memory capacity reconfiguration allows optimal performance to be achieved.
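For reference, the temporal bitwise XOR benchmark mentioned above is commonly defined as predicting the XOR of the current input bit and a bit from d steps earlier; the exact variant used in the paper may differ.

```python
# Common formulation of the delayed bitwise XOR task: y[k] = u[k] XOR u[k - d].
import numpy as np

def xor_task(n_bits=1000, d=1, seed=0):
    u = np.random.default_rng(seed).integers(0, 2, n_bits)
    y = u ^ np.roll(u, d)
    return u[d:], y[d:]        # drop the wrapped-around prefix

inputs, targets = xor_task()
print(inputs[:8], targets[:8])
```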
arXiv Detail & Related papers (2022-07-13T14:43:40Z)
- Master memory function for delay-based reservoir computers with single-variable dynamics [0.0]
We show that many delay-based reservoir computers can be characterized by a universal master memory function (MMF).
Once computed for two independent parameters, this function provides linear memory capacity for any delay-based single-variable reservoir with small inputs.
arXiv Detail & Related papers (2021-08-28T13:17:24Z)
- Natural quantum reservoir computing for temporal information processing [4.785845498722406]
Reservoir computing is a temporal information processing system that exploits artificial or physical dissipative dynamics.
This paper proposes the use of real superconducting quantum computing devices as the reservoir, where the dissipative property is provided by the natural noise acting on the quantum bits.
arXiv Detail & Related papers (2021-07-13T01:58:57Z)
- Fast and differentiable simulation of driven quantum systems [58.720142291102135]
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
arXiv Detail & Related papers (2020-12-16T21:43:38Z)
- Insight into Delay Based Reservoir Computing via Eigenvalue Analysis [0.0]
We show that any dynamical system used as a reservoir can be analyzed via this eigenvalue approach.
Optimal performance is found for a system with the eigenvalues having real parts close to zero and off-resonant imaginary parts.
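For a linear delay system, the eigenvalue spectrum referred to above can be computed explicitly with the Lambert W function; the sketch below does this for dx/dt = a x(t) + b x(t - tau) with purely illustrative parameter values.

```python
# Characteristic roots of the linear delay system dx/dt = a*x(t) + b*x(t - tau):
# lambda = a + b*exp(-lambda*tau)  =>  lambda_k = a + W_k(b*tau*exp(-a*tau)) / tau.
# Parameter values are illustrative only.
import numpy as np
from scipy.special import lambertw

a, b, tau = -0.1, 0.08, 1.6
arg = b * tau * np.exp(-a * tau)
roots = [a + lambertw(arg, k) / tau for k in range(-5, 6)]   # branches k = -5..5
for lam in sorted(roots, key=lambda z: -z.real)[:5]:
    print(f"Re = {lam.real:+.4f}, Im = {lam.imag:+.4f}")
```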
arXiv Detail & Related papers (2020-09-16T20:41:47Z)
- The Computational Capacity of LRC, Memristive and Hybrid Reservoirs [1.657441317977376]
Reservoir computing is a machine learning paradigm that uses a high-dimensional dynamical system, or reservoir, to approximate and predict time series data.
We analyze the feasibility and optimal design of electronic reservoirs that include both linear elements (resistors, inductors, and capacitors) and nonlinear memory elements called memristors.
Our electronic reservoirs can match or exceed the performance of conventional "echo state network" reservoirs in a form that may be directly implemented in hardware.
arXiv Detail & Related papers (2020-08-31T21:24:45Z)
- HiPPO: Recurrent Memory with Optimal Polynomial Projections [93.3537706398653]
We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases.
Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem.
This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale.
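As a conceptual, non-efficient reference for the HiPPO idea summarized above (projecting the whole history onto a polynomial basis under a measure over past time steps), the snippet below simply refits a Legendre expansion to the full history at every step. HiPPO's contribution is precisely that this projection can instead be maintained by a fixed-size recursive update such as HiPPO-LegS, which this sketch does not implement.

```python
# Naive reference for the idea behind HiPPO (not the paper's efficient recurrence):
# at each step, compress the history seen so far into the coefficients of a
# low-order Legendre expansion over the elapsed time window (uniform measure).
import numpy as np
from numpy.polynomial import legendre

def compress_history(f_hist, order=8):
    """Least-squares Legendre coefficients of the history f_hist mapped to [-1, 1]."""
    t = np.linspace(-1.0, 1.0, len(f_hist))
    return legendre.legfit(t, f_hist, order)

def reconstruct(coeffs, n_points):
    t = np.linspace(-1.0, 1.0, n_points)
    return legendre.legval(t, coeffs)

signal = np.sin(np.linspace(0, 6 * np.pi, 500)) + 0.3 * np.sin(np.linspace(0, 40, 500))
coeffs_per_step = [compress_history(signal[: k + 1]) for k in range(16, len(signal))]
recon = reconstruct(coeffs_per_step[-1], len(signal))
print("final reconstruction RMS error:", np.sqrt(np.mean((recon - signal) ** 2)))
```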
arXiv Detail & Related papers (2020-08-17T23:39:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.