Master memory function for delay-based reservoir computers with
single-variable dynamics
- URL: http://arxiv.org/abs/2108.12643v1
- Date: Sat, 28 Aug 2021 13:17:24 GMT
- Title: Master memory function for delay-based reservoir computers with
single-variable dynamics
- Authors: Felix Köster, Serhiy Yanchuk, Kathy Lüdge
- Abstract summary: We show that many delay-based reservoir computers can be characterized by a universal master memory function (MMF).
Once computed for two independent parameters, this function provides linear memory capacity for any delay-based single-variable reservoir with small inputs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that many delay-based reservoir computers considered in the
literature can be characterized by a universal master memory function (MMF).
Once computed for two independent parameters, this function provides linear
memory capacity for any delay-based single-variable reservoir with small
inputs. Moreover, we propose an analytical description of the MMF that enables
its efficient and fast computation.
Our approach can be applied not only to reservoirs governed by known
dynamical rules such as Mackey-Glass or Ikeda-like systems but also to
reservoirs whose dynamical model is not available. We also present results
comparing the performance of the reservoir computer and the memory capacity
given by the MMF.
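The abstract's notion of linear memory capacity can be made concrete with a short numerical sketch. The script below is a minimal illustration, not the paper's method: it simulates a generic Mackey-Glass-type single-variable delay reservoir with time-multiplexed virtual nodes and small input scaling, trains a ridge-regression readout to recall past inputs, and sums the squared recall correlations into a linear memory capacity. All parameter values and the helper `capacity` are hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): N virtual nodes with
# separation theta; the delay equals the clock cycle T = N * theta.
N, theta = 50, 0.2
eta, gamma = 0.9, 0.005   # feedback strength; small input scaling
n_steps = 2000

u = rng.uniform(-1, 1, n_steps)   # scalar input sequence
mask = rng.uniform(-1, 1, N)      # input mask over virtual nodes

# Euler-integrate a Mackey-Glass-type delay system,
#   dx/dt = -x + eta * x_tau / (1 + |x_tau|) + gamma * mask_i * u(n),
# one Euler step of size theta per virtual node.
x = np.zeros((n_steps, N))
delay_line = np.zeros(N)          # states one clock cycle (= delay) ago
for n in range(n_steps):
    for i in range(N):
        x_tau = delay_line[i]
        prev = x[n, i - 1] if i > 0 else (x[n - 1, -1] if n > 0 else 0.0)
        x[n, i] = prev + theta * (-prev + eta * x_tau / (1 + abs(x_tau))
                                  + gamma * mask[i] * u[n])
    delay_line = x[n].copy()

def capacity(states, inp, k, reg=1e-8):
    """Squared correlation of a ridge readout recalling inp(n - k)."""
    S, y = states[k:], inp[:len(inp) - k]
    cut = len(y) // 2             # first half: training, second half: test
    W = np.linalg.solve(S[:cut].T @ S[:cut] + reg * np.eye(S.shape[1]),
                        S[:cut].T @ y[:cut])
    c = np.corrcoef(S[cut:] @ W, y[cut:])[0, 1]
    return max(c, 0.0) ** 2

# Linear memory capacity: sum of recall capacities over past steps.
mc = sum(capacity(x, u, k) for k in range(1, 20))
print(f"linear memory capacity over 19 steps: {mc:.2f}")
```

In the small-input regime assumed here, the reservoir responds approximately linearly, which is the setting in which the paper's MMF predicts the memory capacity from just two independent parameters.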
Related papers
- B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory [91.81390121042192]
We develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within a composable module.
B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens.
arXiv Detail & Related papers (2024-07-08T18:41:01Z) - Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, are still limited in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - Hitless memory-reconfigurable photonic reservoir computing architecture [1.4479776639062198]
Reservoir computing is an analog bio-inspired computation model for efficiently processing time-dependent signals.
We propose a novel TDRC architecture based on an asymmetric Mach-Zehnder interferometer integrated in a resonant cavity.
We demonstrate this approach on the temporal bitwise XOR task and conclude that this way of memory capacity reconfiguration allows optimal performance to be achieved.
arXiv Detail & Related papers (2022-07-13T14:43:40Z) - A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental
Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model that continually learns new classes under a limited memory budget.
We show that when the model size is counted into the total budget and methods are compared at aligned memory sizes, saving models does not consistently help.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
arXiv Detail & Related papers (2022-05-26T08:24:01Z) - Reconfigurable Low-latency Memory System for Sparse Matricized Tensor
Times Khatri-Rao Product on FPGA [3.4870723728779565]
Sparse Matricized Tensor Times Khatri-Rao Product (MTTKRP) is one of the most expensive kernels in tensor computations.
This paper focuses on a multi-faceted memory system, which explores the spatial and temporal locality of the data structures of MTTKRP.
Our system shows 2x and 1.26x speedups compared with cache-only and DMA-only memory systems, respectively.
arXiv Detail & Related papers (2021-09-18T08:19:29Z) - Task Agnostic Metrics for Reservoir Computing [0.0]
Physical reservoir computing is a computational paradigm that enables temporal pattern recognition in physical matter.
The chosen dynamical system must have three desirable properties: non-linearity, complexity, and fading memory.
We show that, in general, systems with lower damping reach higher values in all three performance metrics.
arXiv Detail & Related papers (2021-08-03T13:58:11Z) - Kanerva++: extending The Kanerva Machine with differentiable, locally
block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z) - Memformer: A Memory-Augmented Transformer for Sequence Modeling [55.780849185884996]
We present Memformer, an efficient neural network for sequence modeling.
Our model achieves linear time complexity and constant memory space complexity when processing long sequences.
arXiv Detail & Related papers (2020-10-14T09:03:36Z) - Limitations of the recall capabilities in delay based reservoir
computing systems [0.0]
We analyze the memory capacity of a delay-based reservoir computer with a Hopf normal form as nonlinearity.
A possible physical realisation could be a laser with external cavity, for which the information is fed via electrical injection.
arXiv Detail & Related papers (2020-09-16T13:54:39Z) - Deep Time-Delay Reservoir Computing: Dynamics and Memory Capacity [0.0]
We present how the dynamical properties of a deep Ikeda-based reservoir are related to its memory capacity (MC).
We show how the MC is related to the system's distance to bifurcations and the magnitude of the conditional Lyapunov exponents.
Numerical simulations show resonances between the clock cycle and the delays of the layers in all degrees of the MC.
arXiv Detail & Related papers (2020-06-11T10:40:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.