Mem-elements based Neuromorphic Hardware for Neural Network Application
- URL: http://arxiv.org/abs/2403.03002v1
- Date: Tue, 5 Mar 2024 14:28:40 GMT
- Title: Mem-elements based Neuromorphic Hardware for Neural Network Application
- Authors: Ankur Singh
- Abstract summary: The thesis investigates the utilization of memristive and memcapacitive crossbar arrays in low-power machine learning accelerators, offering a comprehensive co-design framework for deep neural networks (DNNs).
The model, implemented through a hybrid Python and PyTorch approach, accounts for various non-idealities, achieving exceptional training accuracies of 90.02% and 91.03% for the CIFAR-10 dataset with memristive and memcapacitive crossbar arrays on an 8-layer VGG network.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The thesis investigates the utilization of memristive and memcapacitive
crossbar arrays in low-power machine learning accelerators, offering a
comprehensive co-design framework for deep neural networks (DNNs). The model,
implemented through a hybrid Python and PyTorch approach, accounts for various
non-idealities, achieving exceptional training accuracies of 90.02% and 91.03%
for the CIFAR-10 dataset with memristive and memcapacitive crossbar arrays on
an 8-layer VGG network. Additionally, the thesis introduces a novel approach to
emulate meminductor devices using Operational Transconductance Amplifiers (OTA)
and capacitors, showcasing adjustable behavior. Transistor-level simulations in
180 nm CMOS technology, operating at 60 MHz, demonstrate the proposed
meminductor emulator's viability with a power consumption of 0.337 mW. The
design is further validated in neuromorphic circuits and CNN accelerators,
achieving training and testing accuracies of 91.04% and 88.82%, respectively.
Notably, the exclusive use of MOS transistors ensures the feasibility of
monolithic IC fabrication. This research significantly contributes to the
exploration of advanced hardware solutions for efficient and high-performance
machine-learning applications.
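The hybrid Python/PyTorch model mentioned above folds device non-idealities into DNN evaluation. As a rough illustration of the idea only (not the thesis's actual model; the class name, conductance range, and noise scale below are all assumptions), a layer can route its weights through a simulated differential conductance pair with programming noise:

```python
import torch
import torch.nn as nn

class NoisyCrossbarLinear(nn.Linear):
    """Linear layer evaluated through a simulated mem-element crossbar.

    Each weight maps to a differential pair of device conductances
    (G+, G-) in [g_min, g_max]; multiplicative lognormal programming
    noise stands in for device non-idealities. Illustrative sketch only.
    """

    def __init__(self, in_features, out_features,
                 g_min=1e-6, g_max=1e-4, noise_sigma=0.05):
        super().__init__(in_features, out_features)
        self.g_min, self.g_max, self.noise_sigma = g_min, g_max, noise_sigma

    def forward(self, x):
        w = self.weight
        scale = w.abs().max().clamp(min=1e-12)
        span = self.g_max - self.g_min
        # Positive and negative weight parts go onto separate devices.
        g_pos = self.g_min + w.clamp(min=0) / scale * span
        g_neg = self.g_min + (-w).clamp(min=0) / scale * span
        # Lognormal programming noise on every device conductance.
        g_pos = g_pos * torch.exp(self.noise_sigma * torch.randn_like(g_pos))
        g_neg = g_neg * torch.exp(self.noise_sigma * torch.randn_like(g_neg))
        # Differential readout, rescaled back to the weight domain.
        w_eff = (g_pos - g_neg) * scale / span
        return nn.functional.linear(x, w_eff, self.bias)
```

For the OTA-based meminductor emulator, the underlying principle is presumably the classic OTA-C gyrator simulation of an inductor: two transconductors of gains gm1 and gm2 loaded by a capacitor C present an effective inductance L = C/(gm1·gm2), and letting a transconductance depend on an internal state variable yields the memory effect.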
Related papers
- Analysis of a Memcapacitor-Based Neural Network Accelerator Framework
We introduce a novel CMOS-based memcapacitor circuit that is validated using the Cadence tool.
We modeled the device in Python to facilitate the design of a memcapacitor-based accelerator.
This study demonstrates the potential of memcapacitor-based neural network systems in handling classification tasks.
arXiv Detail & Related papers (2025-01-21T18:02:30Z)
- Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing
We propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs.
We develop "BayesMulti", a training strategy that uses Bayesian-optimization-guided noise injection to improve the robustness of analog DNNs to memristor imperfections (a generic sketch follows this entry).
Our integrated approach enables the use of analog computing in much deeper and wider networks, achieving up to 100-fold improvements.
arXiv Detail & Related papers (2024-12-03T19:20:08Z)
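The noise-injection idea behind BayesMulti can be sketched generically: perturb the weights on every training forward pass so that gradient descent settles in regions that tolerate device imperfections. The Bayesian-optimization-guided noise schedule is the paper's contribution and is not reproduced here; the fixed sigma and class name below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer injecting multiplicative Gaussian weight noise on each
    training forward pass (noise-aware training). In BayesMulti the noise
    scale is chosen by Bayesian optimization; a fixed sigma is used here."""

    def __init__(self, in_features, out_features, sigma=0.1):
        super().__init__(in_features, out_features)
        self.sigma = sigma

    def forward(self, x):
        w = self.weight
        if self.training:
            # Gradients flow through the perturbed weights, so the
            # optimizer is rewarded for imperfection-tolerant solutions.
            w = w * (1.0 + self.sigma * torch.randn_like(w))
        return nn.functional.linear(x, w, self.bias)
```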
- A High Energy-Efficiency Multi-core Neuromorphic Architecture for Deep SNN Training
We develop a multi-core neuromorphic architecture supporting direct SNN training; a sketch of the surrogate-gradient mechanism follows this entry.
We obtain a high energy efficiency of 1.05 TFLOPS/W at FP16 in 28 nm, and a 55-85% reduction in DRAM accesses compared to an A100 GPU for SNN training.
arXiv Detail & Related papers (2024-11-26T09:41:26Z)
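"Direct SNN training" refers to backpropagating through the spiking dynamics with a surrogate gradient rather than converting a pre-trained ANN. A minimal sketch of that standard mechanism follows; the architecture's actual neuron model and surrogate are not given in the summary, so the rectangular surrogate and parameters here are assumptions.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient
    in the backward pass; this trick makes direct training possible."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold.
        return grad_out * (v.abs() < 0.5).float()

def lif_forward(currents, beta=0.9, v_th=1.0):
    """Unroll a leaky integrate-and-fire neuron over time.
    currents: (T, batch, n) input currents; returns (T, batch, n) spikes."""
    v = torch.zeros_like(currents[0])
    spikes = []
    for i_t in currents:
        v = beta * v + i_t             # leaky integration
        s = SpikeFn.apply(v - v_th)    # threshold crossing
        v = v - s * v_th               # soft reset by subtraction
        spikes.append(s)
    return torch.stack(spikes)
```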
- Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia.
Current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring.
We are investigating hybrid spiking neural networks for embedded neural decoding in wireless iBMIs.
arXiv Detail & Related papers (2024-09-06T17:48:44Z)
- On-Chip Learning with Memristor-Based Neural Networks: Assessing Accuracy and Efficiency Under Device Variations, Conductance Errors, and Input Noise
This paper presents a memristor-based compute-in-memory hardware accelerator for on-chip training and inference.
The hardware, consisting of 30 memristors and 4 neurons, utilizes three different M-SDC structures with tungsten, chromium, and carbon media to perform binary image classification tasks.
arXiv Detail & Related papers (2024-08-26T23:10:01Z)
- Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors
In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 µm², achieving a maximum spiking frequency of 300 kHz at a 250 mV power supply.
arXiv Detail & Related papers (2024-08-14T17:51:20Z)
- Neuromorphic Circuit Simulation with Memristors: Design and Evaluation Using MemTorch for MNIST and CIFAR
This study evaluates the feasibility of using memristors for in-memory processing by constructing and training three digital convolutional neural networks.
These networks were converted into memristive systems using MemTorch; a usage sketch follows this entry.
The simulations, conducted under ideal conditions, revealed minimal precision losses of nearly 1% during inference.
arXiv Detail & Related papers (2024-07-18T11:30:33Z)
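MemTorch's documented workflow patches the Conv2d/Linear layers of a trained network with memristive equivalents via patch_model. The sketch below follows the pattern in MemTorch's examples, but keyword arguments vary between versions, so treat the exact call as an assumption rather than a verified recipe; `trained_model` is assumed to be a CNN already trained in PyTorch.

```python
import copy
import torch
import memtorch
from memtorch.mn.Module import patch_model
from memtorch.map.Parameter import naive_map

reference_memristor = memtorch.bh.memristor.VTEAM  # bundled device model
patched_model = patch_model(
    copy.deepcopy(trained_model),
    memristor_model=reference_memristor,
    memristor_model_params={"time_series_resolution": 1e-10},
    module_parameters_to_patch=[torch.nn.Conv2d, torch.nn.Linear],
    mapping_routine=naive_map,   # weights -> conductance pairs
    transistor=True,             # ideal 1T1R selectors, no sneak paths
    programming_routine=None,    # assume devices program exactly
    tile_shape=(128, 128),
    max_input_voltage=0.3,
)
patched_model.tune_()  # per-layer rescaling against the digital model
```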
- EKGNet: A 10.96 µW Fully Analog Neural Network for Intra-Patient Arrhythmia Classification
We present an integrated approach by combining analog computing and deep learning for electrocardiogram (ECG) arrhythmia classification.
We propose EKGNet, a hardware-efficient and fully analog arrhythmia classification architecture that achieves high accuracy with low power consumption.
arXiv Detail & Related papers (2023-10-24T02:37:49Z)
- SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices
Adiabatic quantum-flux-parametron (AQFP) devices serve as excellent carriers for binary neural network (BNN) computations; a generic binarization sketch follows this entry.
We propose SupeRBNN, an AQFP-based randomized BNN acceleration framework.
We show that our design achieves an energy efficiency approximately 7.8×10⁴ times higher than that of the ReRAM-based BNN framework.
arXiv Detail & Related papers (2023-09-21T16:14:42Z)
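The BNN arithmetic that AQFP devices carry reduces to +/-1 weights, trained with a straight-through estimator (STE). The generic sketch below shows plain binarization only; SupeRBNN's AQFP-specific randomization and acceleration machinery are not reproduced.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through estimator in the
    backward pass (gradient passed where |x| <= 1, clipped elsewhere)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1.0).float()

class BinaryLinear(nn.Linear):
    """Linear layer whose weights are binarized to +/-1 at use time,
    the arithmetic that maps naturally onto two-state devices."""

    def forward(self, x):
        return nn.functional.linear(x, BinarizeSTE.apply(self.weight), self.bias)
```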
- Training Spiking Neural Networks with Local Tandem Learning
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- A 28-nm Convolutional Neuromorphic Processor Enabling Online Learning with Spike-Based Retinas
We propose SPOON, a 28-nm event-driven CNN (eCNN) for adaptive edge computing and vision applications.
Using the biologically plausible direct random target projection (DRTP) algorithm (sketched after this entry), it embeds online learning with only 16.8% power and 11.8% area overheads.
With an energy per classification of 313 nJ at 0.6 V and a 0.32 mm² area, for accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST, we demonstrate that SPOON reaches the efficiency of conventional machine learning accelerators.
arXiv Detail & Related papers (2020-05-13T13:47:44Z)
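For reference, DRTP (used by SPOON above) trains hidden layers with fixed random projections of the one-hot targets instead of backpropagated errors, so no exact gradients need to be stored or transported across layers on chip. A minimal two-layer sketch follows; shapes, learning rate, and activation are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
n_in, n_hid, n_out, lr = 784, 256, 10, 1e-3

# Trainable weights, plus a FIXED random matrix that projects one-hot
# targets onto the hidden layer; B1 is never updated.
W1 = torch.randn(n_hid, n_in) * 0.01
W2 = torch.randn(n_out, n_hid) * 0.01
B1 = torch.randn(n_hid, n_out)

def drtp_step(x, y_onehot):
    """One DRTP update. The hidden layer never sees a backpropagated
    error: its teaching signal is a random projection of the target."""
    global W1, W2
    h = torch.tanh(x @ W1.t())                      # hidden activation
    y_hat = torch.softmax(h @ W2.t(), dim=1)        # output probabilities

    e_out = y_hat - y_onehot                        # true error, output layer only
    delta_hid = (y_onehot @ B1.t()) * (1 - h ** 2)  # DRTP modulatory signal

    W2 -= lr * e_out.t() @ h                        # standard delta rule
    W1 -= lr * delta_hid.t() @ x                    # local, feedback-free update
```

Only the output layer uses the true error; each hidden update depends on nothing downstream of its own activation, which is what makes the rule cheap to embed for online learning.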
This list is automatically generated from the titles and abstracts of the papers on this site.