A Single-Cycle MLP Classifier Using Analog MRAM-based Neurons and
Synapses
- URL: http://arxiv.org/abs/2012.02695v1
- Date: Fri, 4 Dec 2020 16:04:32 GMT
- Title: A Single-Cycle MLP Classifier Using Analog MRAM-based Neurons and
Synapses
- Authors: Ramtin Zand
- Abstract summary: MRAM devices are leveraged to realize sigmoidal neurons and binarized synapses for a single-cycle analog in-memory computing architecture.
An analog SOT-MRAM-based neuron bitcell is proposed which achieves a 12x reduction in power-area-product.
The analog IMC architecture achieves at least two and four orders of magnitude performance improvement compared to a mixed-signal analog/digital IMC architecture and a digital GPU implementation, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, spin-orbit torque (SOT) magnetoresistive random-access memory
(MRAM) devices are leveraged to realize sigmoidal neurons and binarized
synapses for a single-cycle analog in-memory computing (IMC) architecture.
First, an analog SOT-MRAM-based neuron bitcell is proposed which achieves a 12x
reduction in power-area-product compared to the previous most power- and
area-efficient analog sigmoidal neuron design. Next, the proposed neuron and
synapse bitcells are used within memory subarrays to form an analog IMC-based
multilayer perceptron (MLP) architecture for the MNIST pattern recognition
application. The architecture-level results show that our analog IMC
architecture achieves at least two and four orders of magnitude performance
improvement compared to a mixed-signal analog/digital IMC architecture and a
digital GPU implementation, respectively, while realizing comparable
classification accuracy.
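To make the described dataflow concrete, below is a minimal NumPy sketch of a purely functional (software) model of the architecture: binarized (+/-1) synapses and sigmoidal neurons stacked into an MLP forward pass. The layer sizes (784-128-10), function names, and software-level binarization are illustrative assumptions, not taken from the paper; in the actual design these operations are realized by analog SOT-MRAM bitcells inside memory subarrays, not by matrix routines.

```python
# Behavioral sketch (not the paper's circuit): a forward pass through an MLP
# whose synapses are binarized to +/-1 and whose neurons apply a sigmoid,
# mirroring the roles of the SOT-MRAM synapse and neuron bitcells described above.
# The 784-128-10 topology is an assumption for illustration only.
import numpy as np

def binarize(w):
    """Map real-valued weights to the +/-1 states a binarized MRAM synapse stores."""
    return np.where(w >= 0, 1.0, -1.0)

def sigmoid(x):
    """Sigmoidal transfer function played by the analog neuron bitcell."""
    return 1.0 / (1.0 + np.exp(-x))

def imc_mlp_forward(x, weights):
    """One inference pass: each layer is a matrix-vector multiply (done
    in-memory in the real hardware) followed by the sigmoidal neuron."""
    a = x
    for w in weights:
        a = sigmoid(binarize(w) @ a)
    return a

rng = np.random.default_rng(0)
layer_shapes = [(128, 784), (10, 128)]            # assumed hidden and output layers
weights = [rng.standard_normal(s) for s in layer_shapes]
image = rng.random(784)                           # stand-in for a flattened MNIST digit
print(imc_mlp_forward(image, weights).argmax())   # predicted class index
```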
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z) - EKGNet: A 10.96 µW Fully Analog Neural Network for Intra-Patient
Arrhythmia Classification [79.7946379395238]
We present an integrated approach by combining analog computing and deep learning for electrocardiogram (ECG) arrhythmia classification.
We propose EKGNet, a hardware-efficient and fully analog arrhythmia classification architecture that achieves high accuracy with low power consumption.
arXiv Detail & Related papers (2023-10-24T02:37:49Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - CIMulator: A Comprehensive Simulation Platform for Computing-In-Memory
Circuit Macros with Low Bit-Width and Real Memory Materials [0.5325753548715747]
This paper presents a simulation platform, namely CIMulator, for quantifying the efficacy of various synaptic devices in neuromorphic accelerators.
Non-volatile memory devices, such as resistive random-access memory and ferroelectric field-effect transistors, as well as volatile static random-access memory devices, can be selected as synaptic devices.
A multilayer perceptron and convolutional neural networks (CNNs), such as LeNet-5, VGG-16, and a custom CNN named C4W-1, are simulated to evaluate the effects of these synaptic devices on the training and inference outcomes.
arXiv Detail & Related papers (2023-06-26T12:36:07Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - MRAM-based Analog Sigmoid Function for In-memory Computing [0.0]
We propose an analog implementation of the transcendental activation function leveraging two spin-orbit torque magnetoresistive random-access memory (SOT-MRAM) devices and a CMOS inverter.
The proposed analog neuron circuit consumes 1.8-27x less power, and occupies 2.5-4931x smaller area, compared to the state-of-the-art analog and digital implementations.
arXiv Detail & Related papers (2022-04-21T07:13:54Z) - An In-Memory Analog Computing Co-Processor for Energy-Efficient CNN
Inference on Mobile Devices [4.117012092777604]
We develop an in-memory analog computing (IMAC) architecture realizing both synaptic behavior and activation functions within non-volatile memory arrays.
Spin-orbit torque magnetoresistive random-access memory (SOT-MRAM) devices are leveraged to realize sigmoidal neurons as well as binarized synapses.
A heterogeneous mixed-signal and mixed-precision CPU-IMAC architecture is proposed for convolutional neural network (CNN) inference on mobile processors.
arXiv Detail & Related papers (2021-05-24T23:01:36Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - SOT-MRAM based Sigmoidal Neuron for Neuromorphic Architectures [0.0]
In this paper, the intrinsic physical characteristics of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) devices are leveraged to realize sigmoidal neurons in neuromorphic architectures.
Performance comparisons with previous power- and area-efficient sigmoidal neuron circuits show 74x and 12x reductions in power-area-product for the proposed SOT-MRAM-based neuron.
arXiv Detail & Related papers (2020-06-01T20:18:14Z) - Accelerated Analog Neuromorphic Computing [0.0]
This paper presents the concepts behind the BrainScaleS (BSS) accelerated analog neuromorphic computing architecture.
It describes the second-generation BrainScaleS-2 (BSS-2) version and its most recent in-silico realization, the HICANN-X Application Specific Integrated Circuit (ASIC).
The presented architecture is based upon a continuous-time, analog, physical model implementation of neurons and synapses.
arXiv Detail & Related papers (2020-03-26T16:00:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.