Biologically Plausible Learning on Neuromorphic Hardware Architectures
- URL: http://arxiv.org/abs/2212.14337v2
- Date: Tue, 11 Apr 2023 10:50:10 GMT
- Title: Biologically Plausible Learning on Neuromorphic Hardware Architectures
- Authors: Christopher Wolters, Brady Taylor, Edward Hanson, Xiaoxuan Yang, Ulf
Schlichtmann and Yiran Chen
- Abstract summary: Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa.
- Score: 27.138481022472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With an ever-growing number of parameters defining increasingly complex
networks, Deep Learning has led to several breakthroughs surpassing human
performance. As a result, data movement for these millions of model parameters
causes a growing imbalance known as the memory wall. Neuromorphic computing is
an emerging paradigm that confronts this imbalance by performing computations
directly in analog memories. On the software side, the sequential
Backpropagation algorithm prevents efficient parallelization and thus fast
convergence. A novel method, Direct Feedback Alignment, resolves inherent layer
dependencies by directly passing the error from the output to each layer. At
the intersection of hardware/software co-design, there is a demand for
developing algorithms that are tolerant of hardware nonidealities. Therefore,
this work explores the interrelationship of implementing bio-plausible learning
in-situ on neuromorphic hardware, emphasizing energy, area, and latency
constraints. Using the benchmarking framework DNN+NeuroSim, we investigate the
impact of hardware nonidealities and quantization on algorithm performance, as
well as how network topologies and algorithm-level design choices can scale
latency, energy and area consumption of a chip. To the best of our knowledge,
this work is the first to compare the impact of different learning algorithms
on Compute-In-Memory-based hardware and vice versa. The best results achieved
for accuracy remain Backpropagation-based, notably when facing hardware
imperfections. Direct Feedback Alignment, on the other hand, allows for
significant speedup due to parallelization, reducing training time by a factor
approaching N for N-layered networks.
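To make the mechanism concrete, below is a minimal NumPy sketch of a Direct Feedback Alignment training step, assuming a small two-hidden-layer sigmoid MLP with a linear readout; the layer sizes, learning rate, and all variable names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: input -> hidden1 -> hidden2 -> output
n_in, n_h1, n_h2, n_out = 784, 256, 256, 10

# Trainable forward weights
W1 = rng.normal(0.0, 0.05, (n_h1, n_in))
W2 = rng.normal(0.0, 0.05, (n_h2, n_h1))
W3 = rng.normal(0.0, 0.05, (n_out, n_h2))

# DFA replaces Backpropagation's transposed weight matrices with
# fixed random feedback matrices that project the output error
# directly to each hidden layer.
B1 = rng.normal(0.0, 0.05, (n_h1, n_out))
B2 = rng.normal(0.0, 0.05, (n_h2, n_out))

def dfa_step(x, y, lr=0.01):
    global W1, W2, W3
    # Forward pass
    h1 = sigmoid(W1 @ x)
    h2 = sigmoid(W2 @ h1)
    y_hat = W3 @ h2          # linear readout
    e = y_hat - y            # output error

    # DFA deltas: every layer receives the error directly, so the
    # per-layer updates are mutually independent.
    d2 = (B2 @ e) * h2 * (1.0 - h2)
    d1 = (B1 @ e) * h1 * (1.0 - h1)

    W3 = W3 - lr * np.outer(e, h2)
    W2 = W2 - lr * np.outer(d2, h1)
    W1 = W1 - lr * np.outer(d1, x)

# One illustrative step on random data
x = rng.normal(size=n_in)
y = np.eye(n_out)[3]         # one-hot target
dfa_step(x, y)
```

Because d1 and d2 are both computed from the same output error e, nothing forces the layer updates to run sequentially; for an N-layered network, this independence is the source of the factor-approaching-N training speedup quoted above.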
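The hardware side of the study can be pictured with a similar sketch: mapping floating-point weights onto a small set of analog conductance levels with device-to-device variation. The 4-bit level count, conductance window, and noise magnitude below are placeholder assumptions, not parameters taken from the DNN+NeuroSim framework:

```python
import numpy as np

def quantize_to_conductances(w, n_bits=4, g_min=0.1, g_max=1.0,
                             device_sigma=0.02, rng=None):
    """Map floating-point weights onto discrete conductance levels.

    2**n_bits states per weight magnitude plus multiplicative
    device-to-device noise -- a stand-in for the quantization and
    nonideality effects that DNN+NeuroSim models in detail.
    """
    rng = rng or np.random.default_rng()
    levels = 2 ** n_bits
    w_max = np.max(np.abs(w)) + 1e-12
    mag = np.abs(w) / w_max                      # magnitudes in [0, 1]
    idx = np.round(mag * (levels - 1))           # snap to a state index
    g = g_min + (g_max - g_min) * idx / (levels - 1)
    # Device variation: multiplicative Gaussian perturbation per cell;
    # note that zero weights land on the minimum conductance g_min.
    g = g * (1.0 + device_sigma * rng.standard_normal(w.shape))
    return np.sign(w) * g * w_max                # back to weight units

# Example: quantize one layer and measure the induced weight error
w = np.random.default_rng(1).normal(0.0, 0.1, (256, 784))
w_q = quantize_to_conductances(w)
print("mean squared weight error:", float(np.mean((w - w_q) ** 2)))
```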
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware [0.0]
We develop an algorithm to approximate a set of multiplicative weights.
These weights aim to represent the original network's weights with minimal loss in performance.
Our results demonstrate that, when paired with an on-chip pulse generator, our self-correcting neural network performs comparably to those trained with analog-aware algorithms.
arXiv Detail & Related papers (2023-09-30T10:47:25Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- From DNNs to GANs: Review of efficient hardware architectures for deep learning [0.0]
Neural networks and deep learning have begun to impact the present research paradigm.
Conventional DSP processors are incapable of performing neural network, activation function, convolutional neural network, and generative adversarial network operations.
Different algorithms have been adapted to design DSP processors that perform these operations quickly.
arXiv Detail & Related papers (2021-06-06T13:23:06Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- Hard-ODT: Hardware-Friendly Online Decision Tree Learning Algorithm and System [17.55491405857204]
In the era of big data, traditional decision tree induction algorithms are not suitable for learning large-scale datasets.
We introduce a new quantile-based algorithm to improve the induction of the Hoeffding tree, one of the state-of-the-art online learning models (the Hoeffding split bound it builds on is sketched after this list).
We present Hard-ODT, a high-performance, hardware-efficient and scalable online decision tree learning system on a field-programmable gate array (FPGA) with system-level optimization techniques.
arXiv Detail & Related papers (2020-12-11T12:06:44Z)
- Robust error bounds for quantised and pruned neural networks [1.8083503268672914]
Machine learning algorithms are moving towards decentralisation with the data and algorithms stored, and even trained, locally on devices.
The device hardware becomes the main bottleneck for model capability in this set-up, creating a need for slimmed down, more efficient neural networks.
A semi-definite program is introduced to bound the worst-case error caused by pruning or quantising a neural network.
It is hoped that the computed bounds will give confidence in the performance of these algorithms when deployed on safety-critical systems.
arXiv Detail & Related papers (2020-11-30T22:19:44Z)
- Surrogate gradients for analog neuromorphic computing [2.6475944316982942]
We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on vision and speech benchmarks (a minimal sketch of the surrogate-gradient idea appears after this list).
Our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware.
arXiv Detail & Related papers (2020-06-12T14:45:12Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
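The Hard-ODT entry above builds on the Hoeffding tree, whose split test rests on the Hoeffding bound; a minimal sketch follows, with placeholder values for the statistic's range, the confidence parameter, and the sample count:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that the observed mean of n i.i.d. samples lies
    within epsilon of the true mean with probability 1 - delta."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

# A Hoeffding tree splits a leaf once the gain gap between its two
# best attributes exceeds epsilon (placeholder values shown).
eps = hoeffding_bound(value_range=1.0, delta=1e-7, n=500)
print("split threshold epsilon:", eps)
```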
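For the surrogate-gradient entry, here is a minimal PyTorch sketch of the general technique: a hard spiking threshold in the forward pass paired with a smooth fast-sigmoid derivative in the backward pass. The slope constant and class name are assumptions, not details from that paper:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth fast-sigmoid
    derivative in the backward pass, so gradients can flow through
    the otherwise non-differentiable threshold."""

    slope = 25.0  # surrogate sharpness -- an assumed, common default

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                 # binary spike output

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Derivative of the fast sigmoid: 1 / (1 + slope * |v|)^2
        surrogate = 1.0 / (1.0 + SurrogateSpike.slope * v.abs()) ** 2
        return grad_out * surrogate

spike = SurrogateSpike.apply

# Gradients exist despite the hard threshold:
v = torch.randn(8, requires_grad=True)      # membrane potentials
spike(v).sum().backward()
print(v.grad)
```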