Mem-elements based Neuromorphic Hardware for Neural Network Application
- URL: http://arxiv.org/abs/2403.03002v1
- Date: Tue, 5 Mar 2024 14:28:40 GMT
- Title: Mem-elements based Neuromorphic Hardware for Neural Network Application
- Authors: Ankur Singh
- Abstract summary: The thesis investigates the utilization of memristive and memcapacitive crossbar arrays in low-power machine learning accelerators, offering a comprehensive co-design framework for deep neural networks (DNN).
The model, implemented through a hybrid Python and PyTorch approach, accounts for various non-idealities, achieving exceptional training accuracies of 90.02% and 91.03% for the CIFAR-10 dataset with memristive and memcapacitive crossbar arrays on an 8-layer VGG network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The thesis investigates the utilization of memristive and memcapacitive
crossbar arrays in low-power machine learning accelerators, offering a
comprehensive co-design framework for deep neural networks (DNN). The model,
implemented through a hybrid Python and PyTorch approach, accounts for various
non-idealities, achieving exceptional training accuracies of 90.02% and 91.03%
for the CIFAR-10 dataset with memristive and memcapacitive crossbar arrays on
an 8-layer VGG network. Additionally, the thesis introduces a novel approach to
emulate meminductor devices using Operational Transconductance Amplifiers (OTA)
and capacitors, showcasing adjustable behavior. Transistor-level simulations in
180 nm CMOS technology, operating at 60 MHz, demonstrate the proposed
meminductor emulator's viability with a power consumption of 0.337 mW. The
design is further validated in neuromorphic circuits and CNN accelerators,
achieving training and testing accuracies of 91.04% and 88.82%, respectively.
Notably, the exclusive use of MOS transistors ensures the feasibility of
monolithic IC fabrication. This research significantly contributes to the
exploration of advanced hardware solutions for efficient and high-performance
machine-learning applications.
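The abstract describes but does not show the co-design model; a minimal PyTorch sketch of its central ingredient, mapping signed weights onto differential pairs of device conductances and injecting non-idealities into the forward pass, might look as follows. The class name, conductance window, level count, and variation model are all illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn

class CrossbarLinear(nn.Module):
    """Linear layer realized as a differential pair of conductance
    matrices (G+ and G-), with simple non-ideality models. A sketch only:
    the conductance window, number of programmable levels, and variation
    model are assumptions, not values from the thesis."""

    def __init__(self, in_features, out_features,
                 g_min=1e-6, g_max=1e-4, levels=64, sigma=0.02):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.g_min, self.g_max = g_min, g_max
        self.levels = levels   # finite programming resolution
        self.sigma = sigma     # relative device-to-device variation

    def _quantize(self, g):
        # Snap conductances to the finite set of programmable states,
        # keeping gradients with a straight-through estimator.
        step = (self.g_max - self.g_min) / (self.levels - 1)
        q = torch.round((g - self.g_min) / step) * step + self.g_min
        return g + (q - g).detach()

    def forward(self, x):
        w = self.weight
        w_max = w.abs().max().clamp(min=1e-12)
        scale = (self.g_max - self.g_min) / w_max
        # Positive and negative weight parts map to separate devices.
        g_pos = self._quantize(self.g_min + w.clamp(min=0) * scale)
        g_neg = self._quantize(self.g_min + (-w).clamp(min=0) * scale)
        if self.training:  # device-to-device variation during training
            g_pos = g_pos * (1 + self.sigma * torch.randn_like(g_pos))
            g_neg = g_neg * (1 + self.sigma * torch.randn_like(g_neg))
        # Differential readout recovers an effective signed weight.
        w_eff = (g_pos - g_neg) / scale
        return x @ w_eff.t()
```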
Related papers
- Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces [42.72938925647165]
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia.
Current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring.
We are investigating hybrid spiking neural networks for embedded neural decoding in wireless iBMIs.
arXiv Detail & Related papers (2024-09-06T17:48:44Z)
- On-Chip Learning with Memristor-Based Neural Networks: Assessing Accuracy and Efficiency Under Device Variations, Conductance Errors, and Input Noise [0.0]
This paper presents a memristor-based compute-in-memory hardware accelerator for on-chip training and inference.
The hardware, consisting of 30 memristors and 4 neurons, uses three different M-SDC structures with tungsten, chromium, and carbon media to perform binary image classification tasks; a sketch of such perturbation injection follows this entry.
arXiv Detail & Related papers (2024-08-26T23:10:01Z)
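The robustness factors named in this entry's title, device variations, conductance errors, and input noise, are typically assessed by injecting them into a software model of the array. A small sketch under assumed noise models and magnitudes (none of these values come from the paper):

```python
import torch

def perturb(weights, inputs, var_sigma=0.05, cond_err=0.02, in_sigma=0.01):
    """Inject the three perturbation classes listed in the entry above.
    The noise models and magnitudes are illustrative assumptions."""
    # Device-to-device variation: multiplicative Gaussian per device.
    w = weights * (1 + var_sigma * torch.randn_like(weights))
    # Conductance (programming) error: additive, relative to full scale.
    w = w + cond_err * weights.abs().max() * torch.randn_like(weights)
    # Input noise: additive Gaussian on the applied input voltages.
    x = inputs + in_sigma * torch.randn_like(inputs)
    return w, x
```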
- Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors [0.8426358786287627]
In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu m^2$, reaching a maximum spiking frequency of 300 kHz at a 250 mV supply; the underlying LIF dynamics are sketched after this entry.
arXiv Detail & Related papers (2024-08-14T17:51:20Z)
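The dynamics such an analog neuron implements can be written as one Euler step per timestep. A minimal discrete-time LIF sketch; every constant here is illustrative rather than taken from the fabricated design:

```python
import numpy as np

def lif_step(v, i_in, v_th=0.25, v_rest=0.0, tau=20e-6, dt=1e-6, r=1.0):
    """One Euler step of a leaky integrate-and-fire neuron:
    membrane leak toward v_rest, integration of input current,
    then spike and reset on threshold crossing."""
    v = v + (dt / tau) * (v_rest - v + r * i_in)  # leaky integration
    spike = v >= v_th                             # threshold comparison
    v = np.where(spike, v_rest, v)                # reset after a spike
    return v, spike
```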
- Few-Shot Transfer Learning for Individualized Braking Intent Detection on Neuromorphic Hardware [0.21847754147782888]
This work explores the use of a few-shot transfer learning method to train and implement a convolutional spiking neural network (CSNN) on BrainChip neuromorphic hardware.
Results show the energy efficiency of the neuromorphic hardware through a power reduction of over 97% with only a 1.3× increase in latency.
arXiv Detail & Related papers (2024-07-21T15:35:46Z)
- Neuromorphic Circuit Simulation with Memristors: Design and Evaluation Using MemTorch for MNIST and CIFAR [0.4077787659104315]
This study evaluates the feasibility of using memristors for in-memory processing by constructing and training three digital convolutional neural networks.
These networks were converted into memristive systems using MemTorch.
The simulations, conducted under ideal conditions, revealed precision losses of only about 1% during inference; a simplified sketch of the conversion step follows this entry.
arXiv Detail & Related papers (2024-07-18T11:30:33Z)
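MemTorch automates the conversion of trained PyTorch layers into memristive crossbar models. The generic sketch below shows only the simplest ingredient of such a conversion, snapping trained weights to a finite set of conductance-like states before re-evaluating accuracy; it is not the MemTorch API, which models far more device physics:

```python
import copy
import torch

@torch.no_grad()
def quantize_to_levels(model, levels=128):
    """Snap every weight to `levels` uniformly spaced states, mimicking
    finite memristor conductance resolution. Generic sketch only."""
    q_model = copy.deepcopy(model)
    for p in q_model.parameters():
        w_max = p.abs().max().clamp(min=1e-12)
        step = 2 * w_max / (levels - 1)
        p.copy_(torch.round(p / step) * step)
    return q_model

# Comparing accuracy of `model` and `quantize_to_levels(model)` on a
# held-out set yields the kind of ~1% precision-loss figure reported above.
```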
- Stochastic Spiking Attention: Accelerating Attention with Stochastic Computing in Spiking Networks [33.51445486269896]
Spiking Neural Networks (SNNs) have recently been integrated into Transformer architectures due to their potential to reduce computational demands and improve power efficiency.
We propose a novel framework leveraging stochastic computing (SC) to effectively execute the dot-product attention for SNN-based Transformers; a minimal SC dot-product sketch follows this entry.
arXiv Detail & Related papers (2024-02-14T11:47:19Z)
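Stochastic computing represents a value in [0, 1] as the density of ones in a random bitstream, so multiplication reduces to a single AND gate. A minimal SC dot-product sketch; the stream length and unipolar encoding are generic SC choices, not the paper's design:

```python
import numpy as np

def sc_dot(a, b, stream_len=4096, seed=0):
    """Estimate dot(a, b) for a, b in [0, 1]^n with unipolar stochastic
    computing: Bernoulli bitstreams, AND-gate multiplication, and a
    scaled popcount for accumulation. Longer streams reduce variance."""
    rng = np.random.default_rng(seed)
    n = len(a)
    sa = rng.random((n, stream_len)) < np.asarray(a)[:, None]  # encode a
    sb = rng.random((n, stream_len)) < np.asarray(b)[:, None]  # encode b
    prod = sa & sb                 # AND gate realizes the multiply
    return prod.mean() * n         # scaled average realizes the sum

a, b = [0.2, 0.7, 0.5], [0.9, 0.3, 0.4]
print(sc_dot(a, b), np.dot(a, b))  # stochastic estimate vs. exact 0.59
```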
- EKGNet: A 10.96 $\mu$W Fully Analog Neural Network for Intra-Patient Arrhythmia Classification [79.7946379395238]
We present an integrated approach combining analog computing and deep learning for electrocardiogram (ECG) arrhythmia classification.
We propose EKGNet, a hardware-efficient and fully analog arrhythmia classification architecture that achieves high accuracy with low power consumption.
arXiv Detail & Related papers (2023-10-24T02:37:49Z)
- SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices [44.440915387556544]
Adiabatic quantum-flux-parametron (AQFP) devices serve as excellent carriers for binary neural network (BNN) computations.
We propose SupeRBNN, an AQFP-based randomized BNN acceleration framework.
We show that our design achieves an energy efficiency approximately $7.8\times10^4$ times higher than that of the ReRAM-based BNN framework; the XNOR-popcount identity that BNN hardware exploits is sketched after this entry.
arXiv Detail & Related papers (2023-09-21T16:14:42Z)
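BNNs suit compact superconducting logic because a {-1, +1} multiply-accumulate reduces to XNOR plus popcount. A minimal sketch of that identity, generic BNN arithmetic rather than SupeRBNN's randomized scheme:

```python
import numpy as np

def binary_dot(x_bits, w_bits):
    """Dot product of {-1,+1} vectors stored as {0,1} bits:
    XNOR replaces multiply, popcount replaces accumulate,
    via dot = 2 * popcount(XNOR(x, w)) - n."""
    n = len(x_bits)
    xnor = ~(x_bits ^ w_bits) & 1
    return 2 * int(xnor.sum()) - n

x = np.array([1, 0, 1, 1], dtype=np.uint8)  # encodes [+1, -1, +1, +1]
w = np.array([1, 1, 0, 1], dtype=np.uint8)  # encodes [+1, +1, -1, +1]
print(binary_dot(x, w))                     # exact signed dot product: 0
```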
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy-efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset with low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- A 28-nm Convolutional Neuromorphic Processor Enabling Online Learning with Spike-Based Retinas [1.4425878137951236]
We propose SPOON, a 28-nm event-driven CNN (eCNN) for adaptive edge computing and vision applications.
It embeds online learning with only 16.8% power and 11.8% area overheads using the biologically plausible direct random target projection (DRTP) algorithm; a sketch of the DRTP update follows this entry.
With an energy per classification of 313 nJ at 0.6 V and a 0.32 mm$^2$ area for accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST, we demonstrate that SPOON reaches the efficiency of conventional machine learning accelerators.
arXiv Detail & Related papers (2020-05-13T13:47:44Z)
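DRTP sidesteps backpropagation by deriving each hidden layer's learning signal from a fixed random projection of the one-hot target, which keeps on-chip learning local and cheap. A rough sketch of the published DRTP-style update, not SPOON's circuit implementation; the tanh activation and all shapes here are assumptions:

```python
import numpy as np

def drtp_update(h_prev, a, target_onehot, B, lr=0.01):
    """One DRTP-style update for a hidden layer with tanh activation.
    The backpropagated error is replaced by B @ target_onehot, where B
    is a fixed random matrix drawn once at initialization."""
    delta = (B @ target_onehot) * (1.0 - np.tanh(a) ** 2)  # local signal
    return -lr * np.outer(delta, h_prev)                   # weight update

# Example shapes: hidden width 4, previous layer width 3, 10 classes.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 10))   # fixed random projection
h_prev = rng.random(3)             # previous-layer activations
a = rng.standard_normal(4)         # pre-activations of this layer
y = np.eye(10)[7]                  # one-hot target
dW = drtp_update(h_prev, a, y, B)  # update of shape (4, 3)
```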