An Investigation into Neuromorphic ICs using Memristor-CMOS Hybrid
Circuits
- URL: http://arxiv.org/abs/2210.15593v1
- Date: Fri, 19 Aug 2022 18:04:03 GMT
- Title: An Investigation into Neuromorphic ICs using Memristor-CMOS Hybrid
Circuits
- Authors: Udit Kumar Agarwal, Shikhar Makhija, Varun Tripathi and Kunwar Singh
- Abstract summary: CMOS-Memristor based neural network accelerators provide a method of speeding up neural networks.
Various memristor programming circuits and basic neuromorphic circuits have been simulated.
The next phase of our project revolved around designing basic building blocks which can be used to construct neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The memristance of a memristor depends on the amount of charge
that has flowed through it, and when current stops flowing the device retains
its state (a behavioral model is sketched after this abstract). Memristors are
therefore well suited to implementing memory units. They find great
application in neuromorphic circuits because memory and processing can be
coupled, in contrast to traditional von Neumann digital architectures where
memory and processing are separate. Neural networks have a layered structure
in which information passes from one layer to the next, and each layer offers
a high degree of parallelism. CMOS-memristor neural network accelerators
speed up neural networks by exploiting this parallelism together with analog
computation. In this project we conducted an initial investigation into the
current state of the art in memristor programming circuits; various memristor
programming circuits and basic neuromorphic circuits have been simulated. The
next phase of the project involved designing basic building blocks from which
neural networks can be assembled. A memristor-bridge synaptic weighting block
and an operational transconductance amplifier (OTA) based summing block were
designed first (both are sketched below). We then designed activation-function
blocks that introduce controlled non-linearity: a basic rectified linear unit
(ReLU) and a novel implementation of the hyperbolic tangent (tanh) function
have been proposed. An artificial neural network was designed from these
blocks to validate and test their performance (see the neuron-level sketch
below). We also used these fundamental blocks to design the basic layers of
convolutional neural networks (CNNs), which are heavily used in
image-processing applications. The core convolutional block has been designed
and exercised as an image-processing kernel to test its performance (a
behavioral sketch follows).
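As a concrete reading of the charge-controlled memristance mentioned at the start of the abstract, here is a minimal behavioral sketch using the linear ion-drift (HP-style) model; the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Linear ion-drift memristor model: resistance is a charge-controlled
# mix of the fully-on and fully-off resistances.
R_ON, R_OFF = 100.0, 16e3   # resistance limits (ohms), assumed values
D, MU_V = 10e-9, 1e-14      # film thickness (m), dopant mobility (m^2/(V*s))

def memristance(q):
    """Resistance after a net charge q (coulombs) has flowed through."""
    w = np.clip(MU_V * R_ON * q / D, 0.0, D)      # doped-region width
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)

# The state persists when current stops: memristance(q) depends only on
# the accumulated charge, not on the instantaneous current.
print(memristance(0.0), memristance(1e-4))
```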
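The two analog building blocks named in the abstract can be modeled behaviorally as follows. The bridge relation is the standard memristor-bridge (Wheatstone) synapse formula; the transconductance value is an assumed placeholder.

```python
# Memristor-bridge synapse: two voltage dividers give a signed weight
# w = Vout/Vin in (-1, 1), set by programming the four memristances.
def bridge_weight(m1, m2, m3, m4):
    return m2 / (m1 + m2) - m4 / (m3 + m4)

# OTA-based summing: each transconductor converts its weighted voltage
# to a current i = gm * v, and currents tied to one node add by KCL.
GM = 1e-4  # transconductance (siemens), assumed value

def summed_current(weighted_voltages):
    return sum(GM * v for v in weighted_voltages)

# Example: a weight of +0.25 applied to a 0.4 V input.
v_out = bridge_weight(1e3, 3e3, 3e3, 3e3) * 0.4
```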
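Putting the pieces together at the behavioral level, one neuron is a weighted sum followed by an activation block; the ReLU and tanh below stand in for the circuit blocks the abstract proposes.

```python
import numpy as np

def relu(v):
    return np.maximum(0.0, v)   # rectified linear unit block

def neuron(v_in, weights, activation=np.tanh):
    # bridge synapses give signed weights, the OTA stage sums,
    # and the activation block adds the controlled non-linearity
    return activation(np.dot(weights, v_in))

out = neuron(np.array([0.4, -0.1, 0.3]), np.array([0.25, -0.6, 0.9]))
```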
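Finally, the convolutional block reduces to sliding multiply-accumulate operations, which is exactly what a memristor crossbar can evaluate in the analog domain; the edge-detection kernel is a conventional image-processing example, not one reported in the paper.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution: one multiply-accumulate per output pixel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])  # classic edge-detection kernel
result = conv2d(np.random.rand(8, 8), edge_kernel)
```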
Related papers
- Lipschitz constant estimation for general neural network architectures using control tools [0.05120567378386613]
This paper is devoted to the estimation of the Lipschitz constant of neural networks using semidefinite programming.
For this purpose, we interpret neural networks as time-varying dynamical systems, where the $k$-th layer corresponds to the dynamics at time $k$.
arXiv Detail & Related papers (2024-05-02T09:38:16Z)
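A minimal statement of the dynamical-systems reading used in that paper: the network is a time-varying system whose state at time $k$ is the $k$-th layer's activation (the notation below is generic, not taken verbatim from the paper).

```latex
% Feed-forward network read as a time-varying dynamical system:
\[
  x_{k+1} = \sigma\left(W_k x_k + b_k\right), \qquad k = 0, 1, \ldots, \ell - 1,
\]
% and a Lipschitz constant L of the input-output map f(x_0) = x_ell
% is any L with ||f(x) - f(y)|| <= L ||x - y|| for all inputs x, y.
```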
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
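For readers who want to try snnTorch, a leaky integrate-and-fire layer can be driven roughly as follows (a minimal sketch based on the package's public API; exact signatures may differ between versions).

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)        # LIF neuron, beta = membrane decay rate
mem = lif.init_leaky()           # initialize the membrane potential

spikes = []
for _ in range(10):              # 10 simulation time steps
    cur = torch.rand(1, 4)       # toy input current
    spk, mem = lif(cur, mem)     # spike output and updated membrane state
    spikes.append(spk)
```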
- Construction of a spike-based memory using neural-like logic gates based on Spiking Neural Networks on SpiNNaker [0.0]
This work presents a spiking implementation of a memory, which is one of the most important components in the computer architecture.
The tests were carried out on the SpiNNaker neuromorphic platform and validate the approach used to construct the presented blocks.
arXiv Detail & Related papers (2022-06-08T15:22:41Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- Monolithic Silicon Photonic Architecture for Training Deep Neural Networks with Direct Feedback Alignment [0.6501025489527172]
We propose on-chip training of neural networks enabled by a CMOS-compatible silicon photonic architecture.
Our scheme employs the direct feedback alignment training algorithm, which trains neural networks using error feedback rather than error backpropagation.
We experimentally demonstrate training a deep neural network with the MNIST dataset using on-chip MAC operation results.
arXiv Detail & Related papers (2021-11-12T18:31:51Z)
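A minimal numpy sketch of direct feedback alignment as summarized above: the output error reaches the hidden layer through a fixed random matrix B instead of the transpose of the forward weights (the toy sizes and data are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 8, 16, 4, 0.01
W1 = rng.normal(scale=0.3, size=(n_hid, n_in))
W2 = rng.normal(scale=0.3, size=(n_out, n_hid))
B = rng.normal(scale=0.3, size=(n_hid, n_out))  # fixed feedback weights

x = rng.normal(size=n_in)
y = rng.normal(size=n_out)                      # toy regression target

for _ in range(200):
    a1 = np.tanh(W1 @ x)                        # hidden activation
    y_hat = W2 @ a1                             # linear output
    e = y_hat - y                               # output error
    # DFA: random projection of the error replaces backprop's W2.T @ e
    d1 = (B @ e) * (1.0 - a1 ** 2)              # tanh'(z) = 1 - tanh(z)^2
    W2 -= lr * np.outer(e, a1)
    W1 -= lr * np.outer(d1, x)
```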
- On-Chip Error-triggered Learning of Multi-layer Memristive Spiking Neural Networks [1.7958576850695402]
We propose a local, gradient-based, error-triggered learning algorithm with online ternary weight updates.
The proposed algorithm enables online training of multi-layer SNNs with memristive neuromorphic hardware.
arXiv Detail & Related papers (2020-11-21T19:44:19Z)
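The error-triggered ternary rule can be caricatured in a few lines: a weight moves only when its local gradient magnitude crosses a threshold, and then only by a fixed step in a {-1, 0, +1} direction (the threshold and step size are illustrative assumptions).

```python
import numpy as np

THETA, STEP = 0.1, 0.01   # error threshold and weight step, assumed values

def ternary_update(W, pre, err):
    """Update W only where the local error signal crosses the threshold."""
    grad = np.outer(err, pre)                        # local gradient estimate
    direction = np.where(np.abs(grad) > THETA, np.sign(grad), 0.0)
    return W - STEP * direction                      # ternary {-1, 0, +1} update
```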
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
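One way to read the multi-scale construction above, as a sketch: the hidden state is split into modules and module i is updated only every 2**i steps, so later modules integrate over progressively longer horizons (the update schedule is an assumption for illustration, not the paper's exact scheme).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_mod = 4, 8, 3      # hidden state split into 3 modules
W_in = rng.normal(scale=0.3, size=(n_mod, n_hid, n_in))
W_rec = rng.normal(scale=0.3, size=(n_mod, n_hid, n_hid))
h = np.zeros((n_mod, n_hid))

def step(x, t, h):
    for i in range(n_mod):
        if t % (2 ** i) == 0:     # module i updates every 2**i steps
            h[i] = np.tanh(W_in[i] @ x + W_rec[i] @ h[i])
    return h

for t in range(16):
    h = step(rng.normal(size=n_in), t, h)
```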
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
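Equilibrium propagation, in caricature: relax an energy-based network to a fixed point, nudge the outputs toward the target with strength beta, relax again, and update the weights from the difference of the two equilibria (a minimal Hopfield-style sketch with assumed sizes and constants).

```python
import numpy as np

rng = np.random.default_rng(0)
N, BETA, EPS, STEPS, LR = 5, 0.5, 0.05, 100, 0.1
W = rng.normal(scale=0.1, size=(N, N))
W = (W + W.T) / 2.0                           # symmetric weights
np.fill_diagonal(W, 0.0)
x_idx, y_idx = [0, 1], [4]                    # clamped inputs, output unit
x, target = np.array([0.2, 0.9]), np.array([1.0])

def rho(s):
    return np.clip(s, 0.0, 1.0)               # hard-sigmoid activation

def relax(s, beta):
    """Gradient-descend the energy E(s) = ||s||^2/2 - rho(s).W.rho(s)/2."""
    for _ in range(STEPS):
        r, dr = rho(s), ((s > 0) & (s < 1)).astype(float)
        grad = s - dr * (W @ r)
        grad[y_idx] += beta * dr[y_idx] * (r[y_idx] - target)  # nudge outputs
        s = s - EPS * grad
        s[x_idx] = x                           # inputs stay clamped
    return s

s_free = relax(np.zeros(N), 0.0)               # free phase
s_nudge = relax(s_free.copy(), BETA)           # weakly clamped phase
r0, rb = rho(s_free), rho(s_nudge)
W += LR / BETA * (np.outer(rb, rb) - np.outer(r0, r0))  # contrastive update
np.fill_diagonal(W, 0.0)
```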
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
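The physics behind the one-step claim is that a crosspoint array computes a matrix-vector product directly: Ohm's law multiplies, Kirchhoff's current law sums. A two-by-two illustration with assumed conductances:

```python
import numpy as np

# Each output current is a sum of conductance * voltage products, so the
# whole product I = G @ V appears in a single read operation.
G = np.array([[1.0e-4, 2.0e-4],
              [3.0e-4, 4.0e-4]])   # memristor conductances (siemens)
V = np.array([0.5, -0.2])          # applied row voltages (volts)
I = G @ V                          # output currents (amperes), "one step"
```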
- Exposing Hardware Building Blocks to Machine Learning Frameworks [4.56877715768796]
We focus on how to design topologies that complement a view of neurons as unique functions.
We develop a library that supports training a neural network with custom sparsity and quantization.
arXiv Detail & Related papers (2020-04-10T14:26:00Z)
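To make "custom sparsity and quantization" concrete, a hand-rolled layer might apply a fixed connectivity mask and uniform weight quantization before the matrix product (this is a sketch of the idea, not the paper's library):

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform symmetric quantization to 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    scale = max(np.max(np.abs(w)), 1e-8)
    return np.round(w / scale * levels) / levels * scale

class SparseQuantLinear:
    def __init__(self, n_in, n_out, density=0.25, bits=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.3, size=(n_out, n_in))
        self.mask = rng.random((n_out, n_in)) < density  # fixed sparsity
        self.bits = bits

    def __call__(self, x):
        return (quantize(self.W, self.bits) * self.mask) @ x

y = SparseQuantLinear(8, 4)(np.ones(8))
```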
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.