Time-Series Forecasting and Sequence Learning Using Memristor-based Reservoir System
- URL: http://arxiv.org/abs/2405.13347v2
- Date: Sun, 15 Sep 2024 15:10:57 GMT
- Title: Time-Series Forecasting and Sequence Learning Using Memristor-based Reservoir System
- Authors: Abdullah M. Zyarah, Dhireesha Kudithipudi
- Abstract summary: We develop a memristor-based echo state network accelerator that features efficient temporal data processing and in-situ online learning.
The proposed design is benchmarked using various datasets involving real-world tasks, such as forecasting the load energy consumption and weather conditions.
It is observed that the system demonstrates reasonable robustness for device failure below 10%, which may occur due to stuck-at faults.
- Score: 2.6473021051027534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pushing the frontiers of time-series information processing in the ever-growing domain of edge devices with stringent resources has been impeded by the systems' ability to process information and learn locally on the device. Local processing and learning of time-series information typically demand intensive computations and massive storage as the process involves retrieving information and tuning hundreds of parameters back in time. In this work, we developed a memristor-based echo state network accelerator that features efficient temporal data processing and in-situ online learning. The proposed design is benchmarked using various datasets involving real-world tasks, such as forecasting the load energy consumption and weather conditions. The experimental results illustrate that the hardware model experiences a marginal degradation in performance as compared to the software counterpart. This is mainly attributed to the limited precision and dynamic range of network parameters when emulated using memristor devices. The proposed system is evaluated for lifespan, robustness, and energy-delay product. It is observed that the system demonstrates reasonable robustness for device failure below 10%, which may occur due to stuck-at faults. Furthermore, 247X reduction in energy consumption is achieved when compared to a custom CMOS digital design implemented at the same technology node.
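The abstract describes an echo state network with on-device (in-situ) online learning. As a minimal sketch of that idea, the snippet below pairs a fixed random reservoir with a normalized-LMS readout update; the network sizes, spectral radius, and learning rate are illustrative assumptions, not the authors' hardware parameters.

```python
import numpy as np

# Minimal echo state network (ESN) sketch with an online (normalized LMS)
# readout update, loosely mirroring the in-situ online learning described
# in the abstract. All parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
n_in, n_res = 1, 50

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))        # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 (echo state property)
w_out = np.zeros(n_res)                           # trainable linear readout
lr = 0.5                                          # NLMS step size (stable for 0 < lr < 2)

series = np.sin(0.2 * np.arange(300))             # toy one-step-ahead forecasting target
x = np.zeros(n_res)
sq_err = []
for t in range(len(series) - 1):
    x = np.tanh(W @ x + W_in @ series[t:t + 1])   # reservoir state update
    e = series[t + 1] - w_out @ x                 # one-step prediction error
    w_out += lr * e * x / (1e-8 + x @ x)          # normalized LMS readout update
    sq_err.append(e * e)

# squared error shrinks as the readout adapts online
print(np.mean(sq_err[:50]), np.mean(sq_err[-50:]))
```

Only the readout vector is trained, which is what makes the scheme cheap enough for online, on-device adaptation; the recurrent weights stay fixed.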
Related papers
- Oscillations enhance time-series prediction in reservoir computing with feedback [3.3686252536891454]
Reservoir computing is a machine learning framework used for modeling the brain.
It is difficult to accurately reproduce the long-term target time series because the reservoir system becomes unstable.
This study proposes oscillation-driven reservoir computing (ODRC) with feedback.
arXiv Detail & Related papers (2024-06-05T02:30:29Z) - Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification [0.6291443816903801]
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing signals.
A key component of RC hardware is the ability to generate dynamic reservoir states.
This study illuminates the adeptness of memristor-based RC systems in managing novel temporal challenges.
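The entry above hinges on generating dynamic reservoir states from physical devices. The sketch below models a single memristor-like node whose internal state integrates past inputs with decay, giving the fading memory that reservoir computing requires; the device equation is a generic illustrative nonlinearity, not the paper's actual device model.

```python
import numpy as np

# Illustrative memristor-like reservoir node: its internal state (conductance)
# integrates past inputs with decay, providing fading memory. The update rule
# is an assumption for illustration, not a fitted device model.

def memristor_node(inputs, g=0.5, decay=0.9, gain=0.3):
    """Return the conductance trace of one device driven by an input sequence."""
    states = []
    for u in inputs:
        # conductance relaxes toward a bounded nonlinear function of input + state
        g = decay * g + (1 - decay) * np.tanh(gain * u + g)
        states.append(g)
    return np.array(states)

u0 = np.zeros(40)          # no input
u1 = u0.copy()
u1[5] = 1.0                # single input pulse at t = 5

base = memristor_node(u0)
pert = memristor_node(u1)
diff = np.abs(pert - base)  # effect of the pulse on the state trajectory

# the pulse perturbs the state, and the perturbation fades over time
print(diff[6], diff[35])
```

Driving many such nodes with differently scaled copies of the input yields a bank of dynamic states that a linear readout can combine, which is the role the reservoir plays in the hardware systems above.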
arXiv Detail & Related papers (2024-03-04T08:22:29Z) - Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, random resistive memory-based deep extreme point learning machine (DEPLM)
Our co-design system achieves substantial energy-efficiency improvements and training-cost reductions compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z) - Quantum reservoir computing with repeated measurements on
superconducting devices [6.868186896932376]
We develop a quantum reservoir (QR) system that exploits repeated measurement to generate a time-series.
We experimentally implement the proposed QRC on an IBM superconducting quantum device and show that it achieves higher accuracy as well as shorter execution time.
arXiv Detail & Related papers (2023-10-10T15:29:24Z) - Evaluating Short-Term Forecasting of Multiple Time Series in IoT
Environments [67.24598072875744]
Internet of Things (IoT) environments are monitored via a large number of IoT-enabled sensing devices.
To alleviate this issue, sensors are often configured to operate at relatively low sampling frequencies.
This can dramatically hamper subsequent decision-making tasks, such as forecasting.
arXiv Detail & Related papers (2022-06-15T19:46:59Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern
Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions is still computationally and energy intensive.
We propose a new benchmark for tactile pattern recognition at the edge through letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in accuracy by 14%; however, the recurrent SNN on Loihi is 237 times more energy-efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Self-timed Reinforcement Learning using Tsetlin Machine [1.104960878651584]
We present a hardware design for the learning datapath of the Tsetlin machine algorithm, along with a latency analysis of the inference datapath.
Results illustrate the advantages of asynchronous design in applications such as personalized healthcare and battery-powered internet of things devices.
arXiv Detail & Related papers (2021-09-02T11:24:23Z) - Energy-Efficient Model Compression and Splitting for Collaborative
Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and $CO_2$ emissions compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - End-to-End Memristive HTM System for Pattern Recognition and Sequence
Prediction [4.932130498861988]
A neuromorphic system that processes spatio-temporal information on the edge is proposed.
The proposed architecture is benchmarked on prediction tasks over real-world streaming data.
The system offers a 3.46X reduction in latency and a 77.02X reduction in power consumption.
arXiv Detail & Related papers (2020-06-22T01:12:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.