Master memory function for delay-based reservoir computers with
single-variable dynamics
- URL: http://arxiv.org/abs/2108.12643v1
- Date: Sat, 28 Aug 2021 13:17:24 GMT
- Title: Master memory function for delay-based reservoir computers with
single-variable dynamics
- Authors: Felix K\"oster, Serhiy Yanchuk, Kathy L\"udge
- Abstract summary: We show that many delay-based reservoir computers can be characterized by a universal master memory function (MMF).
Once computed for two independent parameters, this function provides the linear memory capacity for any delay-based single-variable reservoir with small inputs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that many delay-based reservoir computers considered in the
literature can be characterized by a universal master memory function (MMF).
Once computed for two independent parameters, this function provides linear
memory capacity for any delay-based single-variable reservoir with small
inputs. Moreover, we propose an analytical description of the MMF that enables
its efficient and fast computation.
Our approach can be applied not only to reservoirs governed by known
dynamical rules such as Mackey-Glass or Ikeda-like systems but also to
reservoirs whose dynamical model is not available. We also present results
comparing the performance of the reservoir computer and the memory capacity
given by the MMF.
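The linear memory capacity that the MMF predicts can also be measured directly by simulation. Below is a minimal sketch (not the authors' code): a toy linear single-variable delay system with time-multiplexed virtual nodes is driven by small random inputs, a ridge readout is trained to recall u(n-k), and the capacity C_k is the squared correlation between recall and target; summing over k gives the memory capacity. All parameter values (N, theta, p, eta) are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    N, theta = 50, 0.2            # virtual nodes and node separation; clock cycle T = N*theta
    p, eta = 0.9, 0.01            # delayed-feedback gain and small-input scaling (assumed)
    n_steps, washout = 4000, 200

    u = rng.uniform(-1, 1, n_steps)   # i.i.d. scalar input sequence
    mask = rng.uniform(-1, 1, N)      # time-multiplexing input mask

    # Toy linear delay system x'(t) = -x(t) + p*x(t-T) + eta*mask*u,
    # integrated with one Euler step of size theta per virtual node.
    x = np.zeros(n_steps * N)
    for m in range(N, n_steps * N):
        x[m] = x[m-1] + theta * (-x[m-1] + p * x[m-N] + eta * mask[m % N] * u[m // N])

    S = x.reshape(n_steps, N)[washout:]   # one row of virtual-node states per input step

    def capacity(k, lam=1e-8):
        # Linear memory capacity C_k: how well a ridge readout recalls u(n-k).
        X, y = S, u[washout - k : n_steps - k]
        w = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
        return np.corrcoef(X @ w, y)[0, 1] ** 2

    MC = sum(capacity(k) for k in range(1, 40))
    print(f"linear memory capacity ~ {MC:.2f}")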
Related papers
- Reservoir Computing for Fast, Simplified Reinforcement Learning on Memory Tasks [0.0]
Reservoir computing greatly simplifies and speeds up reinforcement learning on memory tasks.
In particular, these findings offer significant benefit to meta-learning that depends primarily on efficient and highly general memory systems.
arXiv Detail & Related papers (2024-12-17T17:02:06Z) - CSR: Achieving 1 Bit Key-Value Cache via Sparse Representation [63.65323577445951]
We propose a novel approach called Cache Sparse Representation (CSR).
CSR transforms the dense Key-Value cache tensor into sparse indexes and weights, offering a more memory-efficient representation during LLM inference.
Our experiments demonstrate CSR achieves performance comparable to state-of-the-art KV cache quantization algorithms.
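The core idea, replacing each dense cached vector with a few dictionary indexes and weights, can be sketched with generic matching pursuit over a fixed random dictionary; the paper's trained dictionaries and encoding procedure differ, and all sizes here are assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    d, n_atoms, k = 64, 256, 4       # head dim, dictionary size, nonzeros kept (assumed)
    D = rng.standard_normal((n_atoms, d))
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm dictionary atoms

    def encode(v, n_nonzero=k):
        # Greedy matching pursuit: approximate v with a few (index, weight) pairs over D.
        residual, idx, w = v.copy(), [], []
        for _ in range(n_nonzero):
            scores = D @ residual
            j = int(np.argmax(np.abs(scores)))
            idx.append(j)
            w.append(scores[j])
            residual = residual - scores[j] * D[j]
        return np.array(idx), np.array(w)

    def decode(idx, w):
        return w @ D[idx]             # reconstruct the dense vector

    v = rng.standard_normal(d)        # stand-in for one cached key/value vector
    idx, w = encode(v)
    err = np.linalg.norm(v - decode(idx, w)) / np.linalg.norm(v)
    print(f"stored {k} of {n_atoms} atoms, relative error {err:.2f}")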
arXiv Detail & Related papers (2024-12-16T13:01:53Z) - Memory Layers at Scale [67.00854080570979]
This work takes memory layers beyond proof-of-concept, proving their utility at contemporary scale.
On downstream tasks, language models augmented with our improved memory layer outperform dense models with more than twice the compute budget, as well as mixture-of-expert models when matched for both compute and parameters.
We provide a fully parallelizable memory layer implementation, demonstrating scaling laws with up to 128B memory parameters, pretrained on 1 trillion tokens, compared against base models with up to 8B parameters.
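The underlying mechanism is a sparse key-value lookup. A minimal sketch of the generic form, omitting the paper's scaling and parallelization details (all sizes are assumed and far below the paper's scale):

    import numpy as np

    rng = np.random.default_rng(2)
    d, n_keys, topk = 128, 4096, 8    # illustrative sizes only
    K = rng.standard_normal((n_keys, d)) / np.sqrt(d)   # trainable keys (random stand-ins)
    V = rng.standard_normal((n_keys, d)) / np.sqrt(d)   # trainable values

    def memory_layer(q):
        # Sparse lookup: softmax over the top-k key scores, weighted sum of their values.
        scores = K @ q
        top = np.argpartition(scores, -topk)[-topk:]
        p = np.exp(scores[top] - scores[top].max())
        p /= p.sum()
        return p @ V[top]

    print(memory_layer(rng.standard_normal(d)).shape)   # (128,)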
arXiv Detail & Related papers (2024-12-12T23:56:57Z) - B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory [91.81390121042192]
We develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within a composable module.
B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens.
arXiv Detail & Related papers (2024-07-08T18:41:01Z) - Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
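A generic form of the memory-token idea: a fixed number of learnable tokens joins the sequence so that inputs can attend over them. This sketch is a plain single-head version, not the paper's heterogeneous scheme, and all sizes are assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    d, seq, n_mem = 64, 16, 8                    # embedding dim, sequence length, memory tokens
    M = 0.02 * rng.standard_normal((n_mem, d))   # learnable memory tokens (random stand-ins)
    X = rng.standard_normal((seq, d))            # input token embeddings

    def attend_with_memory(X, M):
        # Single-head attention in which inputs attend over memory tokens plus the sequence.
        H = np.vstack([M, X])                    # keys/values: memory tokens prepended
        A = X @ H.T / np.sqrt(d)                 # queries come from the input tokens only
        A = np.exp(A - A.max(axis=1, keepdims=True))
        A /= A.sum(axis=1, keepdims=True)        # row-wise softmax
        return A @ H

    print(attend_with_memory(X, M).shape)        # (16, 64)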
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - Reconfigurable Low-latency Memory System for Sparse Matricized Tensor
Times Khatri-Rao Product on FPGA [3.4870723728779565]
Sparse Matricized Tensor Times Khatri-Rao Product (MTTKRP) is one of the most expensive kernels in tensor computations.
This paper focuses on a multi-faceted memory system, which explores the spatial and temporal locality of the data structures of MTTKRP.
Our system shows 2x and 1.26x speedups compared with cache-only and DMA-only memory systems, respectively.
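For reference, the MTTKRP kernel itself is short; a minimal COO-format sketch (dimensions illustrative) makes visible the scattered reads of the factor matrices that the proposed memory system is designed to serve:

    import numpy as np

    rng = np.random.default_rng(4)
    I, J, K, R = 6, 7, 8, 4            # tensor dimensions and CP rank (illustrative)
    nnz = 30
    coords = np.column_stack([rng.integers(0, s, nnz) for s in (I, J, K)])
    vals = rng.standard_normal(nnz)    # sparse 3-way tensor in COO format
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))

    # Mode-1 MTTKRP over the nonzeros: M[i,:] += X[i,j,k] * (B[j,:] * C[k,:]).
    # The irregular accesses to B and C are exactly what the memory system must feed.
    M = np.zeros((I, R))
    for (i, j, k), v in zip(coords, vals):
        M[i] += v * B[j] * C[k]
    print(M.shape)                     # (6, 4)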
arXiv Detail & Related papers (2021-09-18T08:19:29Z) - Task Agnostic Metrics for Reservoir Computing [0.0]
Physical reservoir computing is a computational paradigm that enables temporal pattern recognition in physical matter.
The chosen dynamical system must have three desirable properties: non-linearity, complexity, and fading memory.
We show that, in general, systems with lower damping reach higher values in all three performance metrics.
arXiv Detail & Related papers (2021-08-03T13:58:11Z) - Kanerva++: extending The Kanerva Machine with differentiable, locally
block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z) - Limitations of the recall capabilities in delay based reservoir
computing systems [0.0]
We analyze the memory capacity of a delay based reservoir computer with a Hopf normal form as nonlinearity.
A possible physical realisation could be a laser with an external cavity, for which the information is fed in via electrical injection.
arXiv Detail & Related papers (2020-09-16T13:54:39Z) - Deep Time-Delay Reservoir Computing: Dynamics and Memory Capacity [0.0]
We present how the dynamical properties of a deep Ikeda-based reservoir are related to its memory capacity (MC).
We show how the MC is related to the system's distance to bifurcations or the magnitude of the conditional Lyapunov exponents.
Numerical simulations show resonances between the clock cycle and the delays of the layers in all degrees of the MC.
arXiv Detail & Related papers (2020-06-11T10:40:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.