Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification
- URL: http://arxiv.org/abs/2403.01827v2
- Date: Sat, 16 Mar 2024 15:43:04 GMT
- Title: Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification
- Authors: Ankur Singh, Sanghyeon Choi, Gunuk Wang, Maryaradhiya Daimari, Byung-Geun Lee
- Abstract summary: Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing spatiotemporal signals.
A key component in RC hardware is the ability to generate dynamic reservoir states.
This study illuminates the adeptness of memristor-based RC systems in managing intricate temporal challenges.
- Score: 0.6291443816903801
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing spatiotemporal signals. Known for its temporal processing prowess, RC significantly lowers training costs compared to conventional recurrent neural networks. A key component in its hardware deployment is the ability to generate dynamic reservoir states. Our research introduces a novel dual-memory RC system, integrating a short-term memory via a WOx-based memristor, capable of achieving 16 distinct states encoded over 4 bits, and a long-term memory component using a TiOx-based memristor within the readout layer. We thoroughly examine both memristor types and leverage the RC system to process temporal data sets. The performance of the proposed RC system is validated through two benchmark tasks: isolated spoken digit recognition with incomplete inputs and Mackey-Glass time series prediction. The system delivered an impressive 98.84% accuracy in digit recognition and sustained a low normalized root mean square error (NRMSE) of 0.036 in the time series prediction task, underscoring its capability. This study illuminates the adeptness of memristor-based RC systems in managing intricate temporal challenges, laying the groundwork for further innovations in neuromorphic computing.
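As a rough illustration of the pipeline, the sketch below emulates the dual-memory design in software: reservoir states are quantized to 16 levels (4 bits), standing in for the WOx memristor's discrete states, and a ridge-regression readout plays the role of the TiOx layer on a one-step Mackey-Glass prediction task. All sizes, hyperparameters, and the Euler integration are illustrative assumptions, not the authors' hardware setup.
```python
# Minimal software sketch of a reservoir with 4-bit quantized states and a
# linear readout, evaluated on one-step Mackey-Glass prediction (assumed setup).
import numpy as np

rng = np.random.default_rng(0)

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
    """Generate a Mackey-Glass series by Euler integration."""
    x = np.full(n_steps + tau, 1.2)
    for t in range(tau, n_steps + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** n) - gamma * x[t])
    return x[tau:]

N = 100
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale to spectral radius 0.9

def run_reservoir(u):
    states = np.zeros((len(u), N))
    x = np.zeros(N)
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in[:, 0] * ut)
        x = np.round((x + 1) / 2 * 15) / 15 * 2 - 1  # quantize to 16 levels (4 bits)
        states[t] = x
    return states

s = mackey_glass(3000)
u, y = s[:-1], s[1:]                 # one-step-ahead prediction targets
X = run_reservoir(u)
split, ridge = 2000, 1e-6
A = X[:split]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ y[:split])  # ridge readout
pred = X[split:] @ W_out
nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2)) / np.std(y[split:])
print(f"NRMSE: {nrmse:.3f}")
```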
Related papers
- Hardware-Friendly Implementation of Physical Reservoir Computing with CMOS-based Time-domain Analog Spiking Neurons [0.26963330643873434]
This paper introduces a spiking neural network (SNN) for a hardware-friendly physical reservoir computing (RC) on a complementary metal-oxide-semiconductor (CMOS) platform.
We demonstrate RC through short-term memory and exclusive OR tasks, and the spoken digit recognition task with an accuracy of 97.7%.
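For context, a time-domain CMOS reservoir of this kind is built from spiking units; below is a minimal leaky integrate-and-fire (LIF) neuron sketch. The time constant and threshold are illustrative values, not the paper's circuit parameters.
```python
# Minimal LIF neuron: leaky integration of input current, spike on threshold.
import numpy as np

def lif(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Return the spike train produced by a current-driven LIF neuron."""
    v, spikes = 0.0, np.zeros(len(inputs))
    for t, i_t in enumerate(inputs):
        v += dt / tau * (-v + i_t)      # leaky integration of the membrane
        if v >= v_th:                   # threshold crossing -> emit a spike
            spikes[t], v = 1.0, v_reset
    return spikes

rng = np.random.default_rng(1)
spk = lif(rng.uniform(0, 2.5, 200))
print(f"firing rate: {spk.mean():.2f} spikes/step")
```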
arXiv Detail & Related papers (2024-09-18T00:23:00Z)
- Neuromorphic Split Computing with Wake-Up Radios: Architecture and Design via Digital Twinning [97.99077847606624]
This work proposes a novel architecture that integrates a wake-up radio mechanism within a split computing system consisting of remote, wirelessly connected NPUs.
A key challenge in the design of a wake-up radio-based neuromorphic split computing system is the selection of thresholds for sensing, wake-up signal detection, and decision making.
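To make the threshold-selection problem concrete, here is a toy energy-detector sketch of a wake-up decision: the remote unit sleeps until windowed signal energy crosses a threshold. The window length and threshold are arbitrary assumptions, not the paper's design.
```python
# Toy wake-up detector: flag windows whose mean energy exceeds a threshold.
import numpy as np

def wake_up(signal, window=16, threshold=0.5):
    """Return indices of windows whose mean energy exceeds the threshold."""
    energy = np.convolve(signal ** 2, np.ones(window) / window, mode="valid")
    return np.flatnonzero(energy > threshold)

rng = np.random.default_rng(2)
noise = 0.1 * rng.normal(size=300)
noise[150:200] += rng.normal(size=50)          # a burst worth waking up for
print("first wake-up at sample", wake_up(noise)[0])
```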
arXiv Detail & Related papers (2024-04-02T10:19:04Z)
- ETLP: Event-based Three-factor Local Plasticity for online learning with neuromorphic hardware [105.54048699217668]
We show competitive accuracy with a clear advantage in computational complexity for Event-based Three-factor Local Plasticity (ETLP).
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
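As a sketch of the rule family ETLP belongs to, the following three-factor update combines two local factors (a pre/post eligibility trace) with a global learning signal. The trace dynamics and constants are assumptions, not the paper's exact rule.
```python
# Generic three-factor plasticity: eligibility trace (local) gated by a global
# error signal. Constants and trace dynamics are illustrative assumptions.
import numpy as np

def three_factor_update(w, pre_spikes, post_spikes, error, lr=0.01, tau_e=20.0):
    """One pass of eligibility-trace-based weight updates over a spike train."""
    elig = np.zeros_like(w)                           # eligibility per synapse
    for pre, post in zip(pre_spikes, post_spikes):
        elig += -elig / tau_e + np.outer(post, pre)   # local factors 1 and 2
        w += lr * error * elig                        # factor 3: global signal
    return w

rng = np.random.default_rng(3)
pre = rng.integers(0, 2, (50, 3)).astype(float)
post = rng.integers(0, 2, (50, 2)).astype(float)
print(three_factor_update(np.zeros((2, 3)), pre, post, error=0.5))
```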
arXiv Detail & Related papers (2023-01-19T19:45:42Z)
- Selective Memory Recursive Least Squares: Recast Forgetting into Memory in RBF Neural Network Based Real-Time Learning [2.31120983784623]
In radial basis function neural network (RBFNN) based real-time learning tasks, forgetting mechanisms are widely used.
This paper proposes a real-time training method named selective memory recursive least squares (SMRLS) in which the classical forgetting mechanisms are recast into a memory mechanism.
With SMRLS, the input space of the RBFNN is evenly divided into a finite number of partitions and a synthesized objective function is developed using synthesized samples from each partition.
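For reference, the classical forgetting mechanism being recast is exponentially weighted recursive least squares (RLS); a textbook single-step update is sketched below. This is the baseline rule, not SMRLS itself.
```python
# Textbook RLS with forgetting factor lam: the classical mechanism that
# SMRLS recasts into a partition-based memory mechanism.
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One RLS update: theta weights, P inverse covariance, phi the regressor
    (e.g., RBF activations), y the target."""
    k = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + k * (y - phi @ theta)        # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam         # update inverse covariance
    return theta, P

rng = np.random.default_rng(4)
theta, P = np.zeros(3), np.eye(3) * 100.0
w_true = np.array([1.0, -2.0, 0.5])
for _ in range(200):
    phi = rng.normal(size=3)
    theta, P = rls_step(theta, P, phi, phi @ w_true + 0.01 * rng.normal())
print(theta)  # converges toward w_true
```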
arXiv Detail & Related papers (2022-11-15T05:29:58Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still computationally and energy expensive.
We propose a new benchmark for tactile pattern recognition at the edge through Braille letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
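For readers unfamiliar with the training trick mentioned above, a minimal surrogate-gradient spike function in PyTorch is sketched below: the forward pass is a hard threshold, while the backward pass substitutes a smooth derivative. The fast-sigmoid surrogate shape is a common choice assumed here, not necessarily the paper's.
```python
# Surrogate-gradient spike: hard threshold forward, smooth derivative backward.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                          # hard threshold spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2   # fast-sigmoid derivative
        return grad_out * surrogate

v = torch.randn(5, requires_grad=True)
SurrogateSpike.apply(v).sum().backward()
print(v.grad)   # nonzero gradients despite the non-differentiable forward
```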
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- Continual Spatio-Temporal Graph Convolutional Networks [87.86552250152872]
We reformulate the Spatio-Temporal Graph Convolutional Neural Network as a Continual Inference Network.
We observe up to a 109x reduction in time complexity, on-hardware accelerations of 26x, and reductions in maximum allocated memory of 52% during online inference.
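The continual-inference idea can be illustrated with a toy example: rather than recomputing a temporal convolution over a whole clip at every step, keep a ring buffer of the last k frames and emit one output per incoming frame. The kernel and shapes below are assumptions.
```python
# Toy continual temporal convolution: a ring buffer of the last k frames
# yields one output per incoming frame, with no recomputation over the clip.
import numpy as np

class ContinualTemporalConv:
    def __init__(self, kernel):                  # kernel shape: (k, features)
        self.kernel = kernel
        self.buffer = np.zeros_like(kernel)

    def step(self, frame):
        self.buffer = np.roll(self.buffer, -1, axis=0)
        self.buffer[-1] = frame                  # newest frame replaces oldest
        return float((self.buffer * self.kernel).sum())

conv = ContinualTemporalConv(np.ones((3, 4)) / 12)
for t in range(5):
    print(conv.step(np.full(4, t)))              # running temporal average
```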
arXiv Detail & Related papers (2022-03-21T14:23:18Z)
- Temporal Memory Relation Network for Workflow Recognition from Surgical Video [53.20825496640025]
We propose a novel end-to-end temporal memory relation network (TMNet) for relating long-range and multi-scale temporal patterns.
We have extensively validated our approach on two benchmark surgical video datasets.
arXiv Detail & Related papers (2021-03-30T13:20:26Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Model-Size Reduction for Reservoir Computing by Concatenating Internal States Through Time [2.6872737601772956]
Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly.
To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires.
We propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step.
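A minimal sketch of that idea: feed the readout with the current and a delayed reservoir state concatenated, so a size-N reservoir exposes 2N readout features. The delay and array shapes are illustrative assumptions.
```python
# Concatenate current and delayed reservoir states to enlarge the readout
# feature space without enlarging the reservoir itself.
import numpy as np

def concat_states(states, delay=5):
    """Stack x_t with x_{t-delay} along the feature axis."""
    return np.hstack([states[delay:], states[:-delay]])

states = np.random.default_rng(5).normal(size=(100, 20))   # (time, N)
features = concat_states(states)
print(features.shape)   # (95, 40): twice the features from the same reservoir
```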
arXiv Detail & Related papers (2020-06-11T06:11:03Z)
- Sparsity in Reservoir Computing Neural Networks [3.55810827129032]
Reservoir Computing (RC) is a strategy for designing Recurrent Neural Networks characterized by strikingly efficient training.
In this paper, we empirically investigate the role of sparsity in RC network design from the perspective of the richness of the developed temporal representations.
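A common way to instantiate such sparsity, sketched below under assumed density and spectral-radius values: zero out most recurrent weights, then rescale the matrix to a target spectral radius.
```python
# Sparse reservoir construction: random weights, masked to ~10% density,
# rescaled to a target spectral radius. Values are typical assumed choices.
import numpy as np

def sparse_reservoir(n, density=0.1, radius=0.9, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n))
    W *= rng.random((n, n)) < density            # keep ~10% of connections
    return W * (radius / max(abs(np.linalg.eigvals(W))))

W = sparse_reservoir(200)
print(f"nonzero fraction: {(W != 0).mean():.2f}")
```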
arXiv Detail & Related papers (2020-06-04T15:38:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.