SOT-MRAM based Sigmoidal Neuron for Neuromorphic Architectures
- URL: http://arxiv.org/abs/2006.01238v1
- Date: Mon, 1 Jun 2020 20:18:14 GMT
- Title: SOT-MRAM based Sigmoidal Neuron for Neuromorphic Architectures
- Authors: Brendan Reidy and Ramtin Zand
- Abstract summary: In this paper, the intrinsic physical characteristics of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) devices are leveraged to realize sigmoidal neurons in neuromorphic architectures.
Performance comparisons with previous power- and area-efficient sigmoidal neuron circuits show 74x and 12x reductions in power-area-product for the proposed SOT-MRAM based neuron.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, the intrinsic physical characteristics of spin-orbit torque
(SOT) magnetoresistive random-access memory (MRAM) devices are leveraged to
realize sigmoidal neurons in neuromorphic architectures. Performance
comparisons with previous power- and area-efficient sigmoidal neuron circuits
show 74x and 12x reductions in power-area-product for the proposed SOT-MRAM
based neuron. To verify the functionality of the proposed neuron within
larger-scale designs, we have implemented a circuit realization of a 784x16x10
SOT-MRAM based multilayer perceptron (MLP) for MNIST pattern recognition using
a SPICE circuit simulator. The results show that the proposed SOT-MRAM based
MLP can achieve accuracies comparable to an ideal binarized MLP architecture
implemented on GPU, while realizing orders-of-magnitude higher processing
speed.
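For orientation, the sketch below shows what the software-level reference described in the abstract might look like: a 784x16x10 MLP with sigmoidal hidden neurons and binarized synapses. The +/-1 weight binarization, the plain logistic sigmoid, and the random untrained weights are illustrative assumptions; the paper's actual design is a SPICE-level circuit in which SOT-MRAM devices realize these functions.

```python
# Minimal NumPy sketch of an "ideal" 784x16x10 binarized MLP with sigmoidal
# hidden neurons, the kind of software reference the abstract compares against.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binarize(w):
    # Map real-valued weights to +1/-1, as in a binarized synapse array (assumed scheme).
    return np.where(w >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
W1 = binarize(rng.standard_normal((784, 16)))   # input -> hidden synapses
W2 = binarize(rng.standard_normal((16, 10)))    # hidden -> output synapses

def forward(x):
    """x: a flattened 28x28 MNIST image scaled to [0, 1]."""
    h = sigmoid(x @ W1)        # sigmoidal hidden neurons (realized by SOT-MRAM in hardware)
    logits = h @ W2            # output layer
    return int(np.argmax(logits))

# Example: classify one random stand-in "image" (weights are untrained here).
print(forward(rng.random(784)))
```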
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that the resulting Scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z) - Neuromorphic Circuit Simulation with Memristors: Design and Evaluation Using MemTorch for MNIST and CIFAR [0.4077787659104315]
This study evaluates the feasibility of using memristors for in-memory processing by constructing and training three digital convolutional neural networks.
Conversion of these networks into memristive systems was performed using MemTorch.
The simulations, conducted under ideal conditions, revealed minimal precision losses of nearly 1% during inference.
arXiv Detail & Related papers (2024-07-18T11:30:33Z) - EPIM: Efficient Processing-In-Memory Accelerators based on Epitome [78.79382890789607]
We introduce the Epitome, a lightweight neural operator offering convolution-like functionality.
On the software side, we evaluate epitomes' latency and energy on PIM accelerators.
We introduce a PIM-aware layer-wise design method to enhance their hardware efficiency.
arXiv Detail & Related papers (2023-11-12T17:56:39Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
Tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs).
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal one particular protocol, involving sequential growth and optimization of the quantum circuit, to outperform all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z) - MRAM-based Analog Sigmoid Function for In-memory Computing [0.0]
We propose an analog implementation of the transcendental activation function leveraging two spin-orbit torque magnetoresistive random-access memory (SOT-MRAM) devices and a CMOS inverter.
The proposed analog neuron circuit consumes 1.8-27x less power and occupies 2.5-4931x smaller area than state-of-the-art analog and digital implementations (a behavioral sketch of this two-device-plus-inverter idea appears after this list).
arXiv Detail & Related papers (2022-04-21T07:13:54Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - An In-Memory Analog Computing Co-Processor for Energy-Efficient CNN Inference on Mobile Devices [4.117012092777604]
We develop an in-memory analog computing (IMAC) architecture realizing both synaptic behavior and activation functions within non-volatile memory arrays.
Spin-orbit torque magnetoresistive random-access memory (SOT-MRAM) devices are leveraged to realize sigmoidal neurons as well as binarized synapses.
A heterogeneous mixed-signal and mixed-precision CPU-IMAC architecture is proposed for convolutional neural networks (CNNs) inference on mobile processors.
arXiv Detail & Related papers (2021-05-24T23:01:36Z) - A Single-Cycle MLP Classifier Using Analog MRAM-based Neurons and Synapses [0.0]
MRAM devices are leveraged to realize sigmoidal neurons and binarized synapses for a single-cycle analog in-memory computing architecture.
An analog SOT-MRAM-based neuron bitcell is proposed which achieves a 12x reduction in power-area-product.
An analog IMC architecture achieves at least two to four orders of magnitude performance improvement over a mixed-signal analog/digital IMC architecture.
arXiv Detail & Related papers (2020-12-04T16:04:32Z) - Modular Simulation Framework for Process Variation Analysis of MRAM-based Deep Belief Networks [2.0222827433041535]
Magnetic Random-Access Memory (MRAM) based p-bit neuromorphic computing devices are garnering increasing interest as a means to compactly and efficiently realize machine learning operations in Restricted Boltzmann Machines (RBMs).
The stochasticity of activation depends on the energy barrier of the MRAM device, so it is essential to assess the impact of process variation on the voltage-dependent behavior of the sigmoid function.
Here, transportable Python scripts are developed to analyze how variation in device dimensions affects the neuron output and, in turn, the accuracy of machine learning applications (a minimal Monte Carlo sketch in this spirit appears after this list).
arXiv Detail & Related papers (2020-02-03T17:20:21Z)
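The "MRAM-based Analog Sigmoid Function" entry above describes a sigmoid built from two SOT-MRAM devices driving a CMOS inverter, closely related to the SOT-MRAM neuron proposed in this paper. The behavioral (non-SPICE) sketch below illustrates one way such a circuit can produce an S-shaped transfer curve; the resistance values, the smooth resistance-vs-current law, the complementary second device, and the tanh-shaped inverter model are all illustrative assumptions, not the device models used in the papers.

```python
# Behavioral sketch: two magnetoresistive states forming a voltage divider,
# with a CMOS-inverter-like stage restoring a full-swing sigmoidal output.
import numpy as np

VDD = 1.0               # supply voltage (assumed)
R_P, R_AP = 3e3, 6e3    # parallel / antiparallel resistances in ohms (assumed)

def mram_resistance(i_in, i_c=50e-6):
    # Assumed smooth interpolation between R_P and R_AP as the input current
    # sweeps through an assumed critical current i_c.
    frac = 1.0 / (1.0 + np.exp(-(i_in - i_c) / (0.2 * i_c)))
    return R_P + frac * (R_AP - R_P)

def inverter(v_in, v_m=VDD / 2, gain=10.0):
    # Idealized inverter transfer curve (assumed tanh shape).
    return 0.5 * VDD * (1.0 - np.tanh(gain * (v_in - v_m)))

def neuron(i_in):
    r_up = mram_resistance(i_in)              # device modulated by the input current
    r_down = (R_P + R_AP) - r_up              # assumed complementary second device
    v_mid = VDD * r_down / (r_up + r_down)    # voltage-divider midpoint
    return inverter(v_mid)                    # inverter yields a full-swing sigmoid

# Sweep the input current and print the S-shaped transfer curve.
currents = np.linspace(0.0, 100e-6, 11)
print([round(float(neuron(i)), 3) for i in currents])
```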
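The last entry above analyzes how process variation in MRAM device dimensions propagates to the sigmoidal activation and, ultimately, to application accuracy. A minimal Monte Carlo sketch in that spirit follows; the Gaussian variation model, the 5% one-sigma spread, and the logistic form of the transfer curve are assumptions made for illustration, not the scripts or device models of that paper.

```python
# Monte Carlo sketch: perturb assumed activation parameters and measure how far
# the resulting transfer curves deviate from the nominal sigmoid.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid_transfer(v, v_mid, slope):
    return 1.0 / (1.0 + np.exp(-(v - v_mid) / slope))

v = np.linspace(0.0, 1.0, 101)
nominal = sigmoid_transfer(v, v_mid=0.5, slope=0.05)

# Sample 1000 "devices" with ~5% (1-sigma) variation in midpoint and slope (assumed).
mids = rng.normal(0.5, 0.025, size=1000)
slopes = rng.normal(0.05, 0.0025, size=1000)
curves = sigmoid_transfer(v[None, :], mids[:, None], slopes[:, None])

# Report worst-case deviation from the nominal transfer curve per device.
max_dev = np.max(np.abs(curves - nominal[None, :]), axis=1)
print(f"mean worst-case deviation: {max_dev.mean():.3f}")
print(f"95th-percentile deviation: {np.percentile(max_dev, 95):.3f}")
```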
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.