Brain-Inspired Reservoir Computing Using Memristors with Tunable
Dynamics and Short-Term Plasticity
- URL: http://arxiv.org/abs/2310.16331v1
- Date: Wed, 25 Oct 2023 03:27:43 GMT
- Title: Brain-Inspired Reservoir Computing Using Memristors with Tunable
Dynamics and Short-Term Plasticity
- Authors: Nicholas X. Armendarez, Ahmed S. Mohamed, Anurag Dhungel, Md Razuan
Hossain, Md Sakib Hasan, Joseph S. Najem
- Abstract summary: We show that reservoir layers constructed with a small number of distinct memristors exhibit significantly higher predictive and classification accuracies with a single data encoding.
In a neural activity classification task, a reservoir of just three distinct memristors experimentally attained an accuracy of 96.5%.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in reservoir computing research have created a demand for
analog devices with dynamics that can facilitate the physical implementation of
reservoirs, promising faster information processing while consuming less energy
and occupying a smaller area footprint. Studies have demonstrated that dynamic
memristors, with nonlinear and short-term memory dynamics, are excellent
candidates as information-processing devices or reservoirs for temporal
classification and prediction tasks. Previous implementations relied on
nominally identical memristors that applied the same nonlinear transformation
to the input data, which is not enough to achieve a rich state space. To
address this limitation, researchers either diversified the data encoding
across multiple memristors or harnessed the stochastic device-to-device
variability among the memristors. However, these approaches require additional
pre-processing steps and introduce synchronization issues. Instead, it is
preferable to encode the data once and pass it through a reservoir layer
consisting of memristors with distinct dynamics. Here, we demonstrate that
ion-channel-based memristors with voltage-dependent dynamics can be
controllably and predictably tuned, through the applied voltage or the
ion-channel concentration, to exhibit diverse dynamic properties. We show, through
experiments and simulations, that reservoir layers constructed with a small
number of distinct memristors exhibit significantly higher predictive and
classification accuracies with a single data encoding. We found that for a
second-order nonlinear dynamical system prediction task, the varied memristor
reservoir experimentally achieved a normalized mean square error of 0.0015
using only five distinct memristors. Moreover, in a neural activity
classification task, a reservoir of just three distinct memristors
experimentally attained an accuracy of 96.5%.
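To make the varied-memristor recipe concrete, below is a minimal simulation sketch. The first-order state equation, the device parameters, and the benchmark system are illustrative assumptions, not the paper's ion-channel device model; only the overall recipe (one shared input encoding, a few devices with distinct dynamics, a linear ridge-regression readout) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative first-order memristor state model (an assumption, not the
# paper's ion-channel physics): each device low-pass filters a nonlinear
# function of the shared input, with its own time constant and gain.
def memristor_states(u, taus, gains):
    T, n = len(u), len(taus)
    x = np.zeros((T, n))
    for t in range(1, T):
        x[t] = x[t-1] + (-x[t-1] + np.tanh(gains * u[t])) / taus  # Euler, dt=1
    return x

# A standard second-order nonlinear benchmark system (common in RC papers).
T = 2000
u = rng.uniform(-0.5, 0.5, T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.4*y[t-1] + 0.4*y[t-1]*y[t-2] + 0.6*u[t]**3 + 0.1

# Five distinct devices: diversity comes from the device parameters, so the
# input is encoded once and shared across the whole reservoir layer.
taus = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
gains = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = memristor_states(u, taus, gains)

# Linear readout trained by ridge regression, as is standard in RC.
A = np.hstack([X, np.ones((T, 1))])
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ y)
nmse = np.mean((A @ w - y)**2) / np.var(y)
print(f"NMSE: {nmse:.4f}")
```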
Related papers
- Neural Operator-Based Proxy for Reservoir Simulations Considering Varying Well Settings, Locations, and Permeability Fields [0.0]
We present a single Fourier Neural Operator (FNO) surrogate that outperforms traditional reservoir simulators.
For 95% of the pressure and saturation predictions, the maximum mean relative error is less than 5%.
The model can accelerate history matching and reservoir characterization procedures by several orders of magnitude.
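As a rough illustration of the core Fourier Neural Operator building block, the sketch below applies a learned linear map to the lowest Fourier modes of a 1-D field. The toy input, mode count, and single untrained layer are assumptions; the paper's surrogate is a full multi-layer FNO over 3-D reservoir fields.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(x, weights, n_modes):
    """Core FNO idea: go to Fourier space, apply a learned linear map to
    the lowest n_modes frequencies, truncate the rest, transform back."""
    x_ft = np.fft.rfft(x)
    out_ft = np.zeros_like(x_ft)
    out_ft[:n_modes] = weights * x_ft[:n_modes]
    return np.fft.irfft(out_ft, n=len(x))

# Toy "permeability field" input and one untrained spectral layer.
grid = np.linspace(0, 1, 128)
perm = np.exp(np.sin(4 * np.pi * grid))            # stand-in input field
w = rng.normal(size=8) + 1j * rng.normal(size=8)   # complex weights, 8 modes
pressure_feature = spectral_conv_1d(perm, w, n_modes=8)
print(pressure_feature.shape)   # (128,): the operator is resolution-agnostic
```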
arXiv Detail & Related papers (2024-07-13T00:26:14Z) - Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching [56.286064975443026]
We make an interesting and somewhat surprising observation: the computation of a large proportion of layers in the diffusion transformer, through a caching mechanism, can be readily removed even without updating the model parameters.
We introduce a novel scheme, named Learning-to-Cache (L2C), that learns to perform caching dynamically for diffusion transformers.
Experimental results show that L2C largely outperforms samplers such as DDIM and DPM-Solver, as well as prior cache-based methods, at the same inference speed.
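A minimal sketch of the layer-caching idea follows: reuse a layer's output from an earlier diffusion step instead of recomputing it. The stand-in layer function and the hard-coded skip decision are assumptions; L2C learns that decision per layer.

```python
import numpy as np

class CachedLayer:
    """Cache-and-reuse across diffusion steps: if the router decides to
    skip this layer at step t, return the output computed at an earlier
    step instead of recomputing. Here the router is an explicit boolean
    flag; L2C learns it (an assumption made for illustration)."""
    def __init__(self, weight):
        self.weight = weight
        self.cache = None

    def __call__(self, x, use_cache):
        if use_cache and self.cache is not None:
            return self.cache                      # skip compute entirely
        self.cache = np.tanh(x @ self.weight)      # stand-in layer function
        return self.cache

rng = np.random.default_rng(0)
layer = CachedLayer(rng.normal(size=(16, 16)) * 0.1)
x = rng.normal(size=(4, 16))
full = layer(x, use_cache=False)    # step t: compute and fill the cache
reused = layer(x, use_cache=True)   # step t+1: reuse, zero layer FLOPs
print(np.allclose(full, reused))    # True
```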
arXiv Detail & Related papers (2024-06-03T18:49:57Z) - Intrinsic Voltage Offsets in Memcapacitive Bio-Membranes Enable High-Performance Physical Reservoir Computing [0.0]
Reservoir computing is a brain-inspired machine learning framework for processing temporal data by mapping inputs into high-dimensional spaces.
Here, we introduce a novel memcapacitor-based physical reservoir computing (PRC) system that exploits internal voltage offsets to enable both monotonic and non-monotonic input-state correlations.
Our approach and unprecedented performance are a major milestone towards high-performance full in-materia PRCs.
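The sketch below is a toy illustration, not the device physics, of how an internal voltage offset can turn a monotonic input-state map into a non-monotonic one; the quadratic form is purely an assumption.

```python
import numpy as np

def memcapacitor_state(v_in, offset):
    """Toy input-state map (illustrative only): a squared response around
    an internal voltage offset. With offset = 0 the map is monotonic for
    v_in >= 0; a nonzero offset makes it non-monotonic on the same range."""
    return (v_in - offset) ** 2

v = np.linspace(0.0, 1.0, 5)
print(memcapacitor_state(v, offset=0.0))   # monotonically increasing
print(memcapacitor_state(v, offset=0.5))   # falls, then rises
```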
arXiv Detail & Related papers (2024-04-27T05:47:38Z) - Biomembrane-based Memcapacitive Reservoir Computing System for Energy
Efficient Temporal Data Processing [0.0]
Reservoir computing is a highly efficient machine learning framework for processing temporal data.
Here, we leverage volatile biomembrane-based memcapacitors that closely mimic certain short-term synaptic plasticity functions as reservoirs.
Our system achieves a 99.6% accuracy rate for spoken digit classification and a normalized mean square error of 7.81 × 10^-4 in a second-order non-linear regression task.
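For reference, the sketch below shows one common convention for the normalized mean square error reported here (MSE divided by the target variance); normalizations differ across papers, so this is an assumption about the convention used.

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean square error: MSE divided by target variance
    (one common convention; other papers normalize differently)."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

y = np.sin(np.linspace(0, 10, 100))
print(nmse(y, y + 0.01))   # small perturbation -> NMSE near zero
```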
arXiv Detail & Related papers (2023-05-19T22:36:20Z) - Optimization of a Hydrodynamic Computational Reservoir through Evolution [58.720142291102135]
We interface with a model of a hydrodynamic system, under development by a startup, as a computational reservoir.
We optimized the readout times and how inputs are mapped to the wave amplitude or frequency using an evolutionary search algorithm.
Applying evolutionary methods to this reservoir system substantially improved separability on an XNOR task, in comparison to implementations with hand-selected parameters.
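A minimal sketch of this kind of evolutionary search over reservoir parameters follows. The fitness function is a placeholder: the real one would run the hydrodynamic reservoir on XNOR inputs and score class separability.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Placeholder objective with a single peak; the real fitness would
    score XNOR separability of the hydrodynamic reservoir's readout."""
    readout_time, amplitude = params
    return -(readout_time - 0.3)**2 - (amplitude - 0.7)**2

# Simple evolutionary loop: keep the best candidate, mutate around it.
pop = rng.uniform(0, 1, size=(8, 2))    # (readout_time, amplitude) pairs
for gen in range(50):
    scores = np.array([fitness(p) for p in pop])
    best = pop[scores.argmax()]
    pop = best + rng.normal(0, 0.05, size=(8, 2))   # Gaussian mutation
    pop[0] = best                                    # elitism
print("best parameters:", best.round(3))
```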
arXiv Detail & Related papers (2023-04-20T19:15:02Z) - The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in
Transformers [59.87030906486969]
This paper studies a curious phenomenon in machine learning models with Transformer architectures: their activation maps are sparse.
We show that sparsity is a prevalent phenomenon that occurs for both natural language processing and vision tasks.
We discuss how sparsity immediately implies a way to significantly reduce the FLOP count and improve efficiency for Transformers.
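The sketch below illustrates the sparsity observation and why it saves FLOPs: weight-matrix rows that multiply zeroed activations can be skipped outright. The shapes and random activations are assumptions standing in for a real Transformer MLP block.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparsity here means the fraction of zeros after the MLP block's ReLU.
pre_act = rng.normal(size=(32, 4096))      # tokens x hidden units (assumed)
post_act = np.maximum(pre_act, 0.0)        # ReLU zeroes about half at random
sparsity = np.mean(post_act == 0.0)
print(f"fraction of zero activations: {sparsity:.2f}")

# Why sparsity cuts FLOPs: rows of the second MLP weight matrix that
# multiply zero activations contribute nothing and can be skipped.
W2 = rng.normal(size=(4096, 1024))
active = post_act[0] != 0.0
dense_out = post_act[0] @ W2                    # full matmul
sparse_out = post_act[0][active] @ W2[active]   # only the active rows
print(np.allclose(dense_out, sparse_out))       # True, with fewer multiplies
```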
arXiv Detail & Related papers (2022-10-12T15:25:19Z) - Physics-informed machine learning with differentiable programming for
heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing these pressures by controlling injection and extraction rates is challenging because of the complex heterogeneity of the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
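As a hedged illustration of the differentiable-programming recipe, the sketch below runs gradient descent on extraction rates through a toy linear pressure proxy. The linear sensitivity model is an assumption; the paper differentiates through a full-physics simulator instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable proxy: pressure at 4 monitoring points responds
# linearly to 3 wells' extraction rates (assumed, not the full physics).
J = rng.uniform(0.5, 1.5, size=(4, 3))           # sensitivity matrix
p_injection = np.array([12.0, 9.0, 11.0, 10.0])  # pressure with no extraction
p_max = 8.0                                      # over-pressurization limit

rates = np.zeros(3)
for _ in range(500):
    p = p_injection - J @ rates
    violation = np.maximum(p - p_max, 0.0)       # penalize only excess pressure
    # Gradient of 0.5*||violation||^2 + 0.01*||rates||^2 w.r.t. rates:
    grad = -J.T @ violation + 0.02 * rates
    rates -= 0.05 * grad

print("extraction rates:", rates.round(2))
print("pressures:", (p_injection - J @ rates).round(2))
```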
arXiv Detail & Related papers (2022-06-21T20:38:13Z) - Task Agnostic Metrics for Reservoir Computing [0.0]
Physical reservoir computing is a computational paradigm that enables temporal pattern recognition in physical matter.
The chosen dynamical system must have three desirable properties: non-linearity, complexity, and fading memory.
We show that, in general, systems with lower damping reach higher values in all three performance metrics.
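One widely used task-agnostic fading-memory measure is linear memory capacity; a sketch follows, with a leaky echo-state reservoir standing in for the physical system. The reservoir, leak rates, and delay horizon are assumptions, and the paper's metrics and systems may differ.

```python
import numpy as np

def run_reservoir(u, leak):
    """Stand-in dynamical system: a leaky echo-state reservoir. A lower
    leak rate corresponds to lower effective damping (slower decay)."""
    g = np.random.default_rng(1)                 # fixed weights across runs
    W = g.normal(size=(50, 50)) * 0.1
    w_in = g.normal(size=50)
    x = np.zeros((len(u), 50))
    for t in range(1, len(u)):
        x[t] = (1 - leak) * x[t-1] + leak * np.tanh(W @ x[t-1] + w_in * u[t])
    return x

def memory_capacity(u, X, max_delay=20):
    """Sum over delays of the squared correlation between a linear readout
    and the delayed input: a task-agnostic fading-memory score."""
    mc = 0.0
    for d in range(1, max_delay + 1):
        A, y = X[d:], u[:-d]
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        mc += np.corrcoef(A @ w, y)[0, 1] ** 2
    return mc

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 1000)
for leak in (0.9, 0.3):   # higher vs lower effective damping
    print(leak, round(memory_capacity(u, run_reservoir(u, leak)), 2))
```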
arXiv Detail & Related papers (2021-08-03T13:58:11Z) - Two-step penalised logistic regression for multi-omic data with an
application to cardiometabolic syndrome [62.997667081978825]
We implement a two-step approach to multi-omic logistic regression in which variable selection is performed on each layer separately.
Our approach should be preferred if the goal is to select as many relevant predictors as possible.
Our proposed approach allows us to identify features that characterise cardiometabolic syndrome at the molecular level.
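A minimal sketch of the two-step recipe with scikit-learn follows: L1-penalised selection within each omic layer first, then a second fit on the pooled selections. The synthetic data, penalty strengths, and layer sizes are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic "omic layers" (e.g. metabolomics, proteomics), placeholders.
n = 200
layers = [rng.normal(size=(n, 50)), rng.normal(size=(n, 80))]
y = (layers[0][:, 0] + layers[1][:, 3] + rng.normal(0, 0.5, n) > 0).astype(int)

# Step 1: lasso-penalised variable selection on each layer separately.
selected = []
for X in layers:
    sel = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    selected.append(X[:, np.abs(sel.coef_[0]) > 0])

# Step 2: refit on the union of features selected across layers.
X_final = np.hstack(selected)
final = LogisticRegression(max_iter=1000).fit(X_final, y)
print("features kept per layer:", [s.shape[1] for s in selected])
print("training accuracy:", round(final.score(X_final, y), 3))
```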
arXiv Detail & Related papers (2020-08-01T10:36:27Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the model being assessed is itself exploited.
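The sketch below illustrates on-the-fly recall with a tiny linear-softmax model: a replay input is synthesized by ascending the model's own class confidence, with no stored samples. The model, objective, and step sizes are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in model under training: logits = x @ W.
W = rng.normal(size=(8, 3)) * 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recall_sample(target_class, steps=200, lr=0.5):
    """Generate a replay input on the fly: start from noise and ascend the
    model's log-confidence for a chosen class. The 'memory' of past data
    lives implicitly in the weights, so no samples need to be stored."""
    x = rng.normal(size=8) * 0.1
    for _ in range(steps):
        p = softmax(x @ W)
        grad = W[:, target_class] - W @ p   # d log p[target] / dx
        x += lr * grad
    return x

x_recalled = recall_sample(target_class=1)
print(softmax(x_recalled @ W).round(3))   # confidence concentrates on class 1
```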
arXiv Detail & Related papers (2020-06-22T15:07:06Z) - Combining data assimilation and machine learning to emulate a dynamical
model from sparse and noisy observations: a case study with the Lorenz 96
model [0.0]
The method consists of iteratively applying a data assimilation step, here an ensemble Kalman filter, and a neural network.
Data assimilation is used to optimally combine a surrogate model with sparse data.
The output analysis is spatially complete and is used as a training set by the neural network to update the surrogate model.
Numerical experiments were carried out using the chaotic 40-variable Lorenz 96 model, demonstrating both the convergence and the statistical skill of the proposed hybrid approach.
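A compact sketch of the building blocks follows: the Lorenz 96 dynamics, an RK4 integrator, and one stochastic (perturbed-observation) EnKF analysis step on sparse, noisy observations. Ensemble size, noise levels, and the observation pattern are illustrative choices; the spatially complete analysis would then serve as the neural network's training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz96(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step(x, dt=0.05):
    # Classic fourth-order Runge-Kutta integration step.
    k1 = lorenz96(x)
    k2 = lorenz96(x + dt/2 * k1)
    k3 = lorenz96(x + dt/2 * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# One EnKF analysis step with sparse, noisy observations of the truth.
n, n_ens, r = 40, 20, 1.0
truth0 = rng.normal(3.0, 1.0, n)
truth = step(truth0)
H = np.eye(n)[::2]                                # observe every other variable
obs = H @ truth + rng.normal(0, np.sqrt(r), 20)

ens = truth0 + rng.normal(0, 2.0, (n_ens, n))     # uncertain initial ensemble
ens = np.array([step(e) for e in ens])            # forecast with the model
X = ens - ens.mean(axis=0)
P = X.T @ X / (n_ens - 1)                         # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r * np.eye(20))  # Kalman gain
obs_pert = obs + rng.normal(0, np.sqrt(r), (n_ens, 20))    # perturbed obs
analysis = ens + (obs_pert - ens @ H.T) @ K.T
# 'analysis' is spatially complete: ready-made training data for the NN.
print(analysis.shape)                             # (20, 40)
```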
arXiv Detail & Related papers (2020-01-06T12:26:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.