Simulation platform for pattern recognition based on reservoir computing with memristor networks
- URL: http://arxiv.org/abs/2112.00248v2
- Date: Sun, 19 Jun 2022 00:45:33 GMT
- Title: Simulation platform for pattern recognition based on reservoir computing with memristor networks
- Authors: Gouhei Tanaka and Ryosho Nakane
- Abstract summary: We develop a simulation platform for reservoir computing (RC) with memristor device networks.
We show that the memristor-network-based RC systems can yield high computational performance comparable to that of state-of-the-art methods in three time series classification tasks.
- Score: 1.5664378826358722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Memristive systems and devices are potentially available for implementing
reservoir computing (RC) systems applied to pattern recognition. However, the
computational ability of memristive RC systems depends on intertwined factors
such as system architectures and physical properties of memristive elements,
which complicates identifying the key factor for system performance. Here we
develop a simulation platform for RC with memristor device networks, which
enables testing different system designs for performance improvement. Numerical
simulations show that the memristor-network-based RC systems can yield high
computational performance comparable to that of state-of-the-art methods in
three time series classification tasks. We demonstrate that excellent and
robust computation under device-to-device variability can be achieved by
appropriately setting network structures, nonlinearity of memristors, and
pre/post-processing, which increases the potential for reliable computation
with unreliable component devices. Our results contribute to establishing a
design guide for memristive reservoirs toward the realization of
energy-efficient machine learning hardware.
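As a rough illustration of the RC principle this paper builds on, the sketch below implements a generic echo state reservoir in plain Python: a fixed random recurrent network whose fading memory turns its state into a reusable nonlinear feature of the input history. This is a minimal stand-in, not the paper's memristor device model; the reservoir size, weight scale, and input sequence are arbitrary assumptions for illustration.

```python
import math
import random

def make_reservoir(n=30, scale=0.5, seed=0):
    # Fixed random recurrent weights, scaled small enough that the
    # dynamics contract (the echo-state / fading-memory property).
    rng = random.Random(seed)
    W = [[scale * rng.uniform(-1.0, 1.0) / math.sqrt(n) for _ in range(n)]
         for _ in range(n)]
    w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return W, w_in

def run_reservoir(W, w_in, inputs, x0):
    # Drive the reservoir with an input sequence; return the final state.
    x = list(x0)
    n = len(x)
    for u in inputs:
        x = [math.tanh(w_in[i] * u + sum(W[i][j] * x[j] for j in range(n)))
             for i in range(n)]
    return x

# Fading memory: two different initial states, driven by the same input,
# converge to nearly the same trajectory, so the state depends on the
# input history rather than on initial conditions.
W, w_in = make_reservoir(seed=1)
rng = random.Random(2)
xa = [rng.uniform(-1.0, 1.0) for _ in range(30)]
xb = [rng.uniform(-1.0, 1.0) for _ in range(30)]
u_seq = [math.sin(0.3 * t) for t in range(100)]
fa = run_reservoir(W, w_in, u_seq, xa)
fb = run_reservoir(W, w_in, u_seq, xb)
```

In a full RC system, only a linear readout on top of such states is trained; the simulation platform described above replaces the abstract tanh nodes with networks of simulated memristor devices.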
Related papers
- KALAM: toolKit for Automating high-Level synthesis of Analog computing systeMs [5.090653251547252]
This paper introduces KALAM, which leverages factor graphs as the foundational paradigm for MP-based analog computing systems.
Using the Python scripting language, the KALAM automation flow translates an input factor graph into its equivalent SPICE-compatible circuit netlist.
We demonstrate KALAM's versatility for tasks such as Bayesian inference, Low-Density Parity Check (LDPC) decoding, and Artificial Neural Networks (ANN).
arXiv Detail & Related papers (2024-10-30T12:04:22Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- Ensemble Method for System Failure Detection Using Large-Scale Telemetry Data [0.0]
This research paper presents an in-depth analysis of extensive system telemetry data, proposing an ensemble methodology for detecting system failures.
The proposed ensemble technique integrates a diverse set of algorithms, including Long Short-Term Memory (LSTM) networks, isolation forests, one-class support vector machines (OCSVM), and local outlier factors (LOF).
Experimental evaluations demonstrate the remarkable efficacy of our models, achieving a notable detection rate in identifying system failures.
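The voting idea behind such an ensemble can be sketched with three simple stdlib detectors (z-score, IQR, and median absolute deviation) standing in for the paper's LSTM/isolation-forest/OCSVM/LOF stack: a telemetry point is flagged only when a majority of detectors agree. All thresholds below are conventional defaults, not values from the paper.

```python
import statistics

def zscore_flags(xs, thresh=3.0):
    # Flag points more than `thresh` standard deviations from the mean.
    mu = statistics.mean(xs)
    sd = statistics.pstdev(xs) or 1.0
    return [abs(x - mu) / sd > thresh for x in xs]

def iqr_flags(xs, k=1.5):
    # Flag points outside the Tukey fences Q1 - k*IQR and Q3 + k*IQR.
    qs = statistics.quantiles(xs, n=4)
    q1, q3 = qs[0], qs[2]
    iqr = q3 - q1
    return [x < q1 - k * iqr or x > q3 + k * iqr for x in xs]

def mad_flags(xs, thresh=3.5):
    # Flag points with a large modified z-score (median absolute deviation).
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs) or 1.0
    return [0.6745 * abs(x - med) / mad > thresh for x in xs]

def ensemble_flags(xs, min_votes=2):
    # Majority vote: an anomaly must be flagged by at least `min_votes`
    # of the three base detectors.
    votes = zip(zscore_flags(xs), iqr_flags(xs), mad_flags(xs))
    return [sum(v) >= min_votes for v in votes]

# A hypothetical telemetry trace with one obvious failure spike.
telemetry = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0, 55.0]
flags = ensemble_flags(telemetry)
```

The voting step is what makes the ensemble robust: no single detector's false positive is enough to raise an alarm.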
arXiv Detail & Related papers (2024-06-07T06:35:17Z)
- Physical Reservoir Computing Enabled by Solitary Waves and Biologically-Inspired Nonlinear Transformation of Input Data [0.0]
Reservoir computing (RC) systems can efficiently forecast chaotic time series using nonlinear dynamical properties of an artificial neural network of random connections.
Inspired by the nonlinear processes in a living biological brain, in this paper we experimentally validate a physical RC system that substitutes the effect of randomness for a nonlinear transformation of input data.
arXiv Detail & Related papers (2024-01-03T06:22:36Z)
- Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
Adapter-ALBERT is an efficient model optimization that maximizes data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z)
- PCBDet: An Efficient Deep Neural Network Object Detection Architecture for Automatic PCB Component Detection on the Edge [48.7576911714538]
PCBDet is an attention condenser network design that provides state-of-the-art inference throughput.
It achieves superior PCB component detection performance compared to other state-of-the-art efficient architecture designs.
arXiv Detail & Related papers (2023-01-23T04:34:25Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods rarely deliver true inference acceleration.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- The Computational Capacity of LRC, Memristive and Hybrid Reservoirs [1.657441317977376]
Reservoir computing is a machine learning paradigm that uses a high-dimensional dynamical system, or "reservoir", to approximate and predict time series data.
We analyze the feasibility and optimal design of electronic reservoirs that include both linear elements (resistors, inductors, and capacitors) and nonlinear memory elements called memristors.
Our electronic reservoirs can match or exceed the performance of conventional "echo state network" reservoirs in a form that may be directly implemented in hardware.
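The nonlinear memory element these reservoirs rely on can be sketched with a generic voltage-controlled memristor model: an internal state w drifts with the applied voltage, a w*(1-w) window keeps it bounded, and the conductance interpolates between off and on values. This is an illustrative textbook-style model with made-up parameters, not the device model used in either paper.

```python
def simulate_memristor(voltages, dt=1e-3, w0=0.5, mu=10.0,
                       g_on=1.0, g_off=0.01):
    # Euler-integrate a simple memristor: state w in [0, 1] drifts with
    # the applied voltage, and the w*(1-w) window slows the drift near
    # the boundaries so the state saturates smoothly.
    w = w0
    currents = []
    for v in voltages:
        g = g_on * w + g_off * (1.0 - w)   # conductance between off and on
        currents.append(g * v)             # Ohmic read at the current state
        w += dt * mu * v * w * (1.0 - w)   # state drift (windowed ODE step)
        w = min(1.0, max(0.0, w))          # clamp for numerical safety
    return w, currents

# Positive bias potentiates the device (conductance grows); negative
# bias depresses it. This history dependence is the "memory" a
# memristive reservoir exploits.
w_pos, i_pos = simulate_memristor([1.0] * 1000)
w_neg, _ = simulate_memristor([-1.0] * 1000)
```

The state-dependent, saturating response is exactly the kind of nonlinearity the reservoir papers above tune for computational performance.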
arXiv Detail & Related papers (2020-08-31T21:24:45Z)
- Near-Optimal Hardware Design for Convolutional Neural Networks [0.0]
This study proposes a novel, special-purpose, and high-efficiency hardware architecture for convolutional neural networks.
The proposed architecture maximizes the utilization of multipliers by designing the computational circuit with the same structure as that of the computational flow of the model.
An implementation based on the proposed hardware architecture has been applied in commercial AI products.
arXiv Detail & Related papers (2020-02-06T09:15:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.