Scalable Superconductor Neuron with Ternary Synaptic Connections for
Ultra-Fast SNN Hardware
- URL: http://arxiv.org/abs/2402.16384v2
- Date: Tue, 27 Feb 2024 07:06:00 GMT
- Title: Scalable Superconductor Neuron with Ternary Synaptic Connections for
Ultra-Fast SNN Hardware
- Authors: Mustafa Altay Karamuftuoglu, Beyza Zeynep Ucpinar, Arash Fayyazi,
Sasan Razmkhah, Mehdi Kamal, Massoud Pedram
- Abstract summary: A novel high-fan-in differential superconductor neuron structure is designed for ultra-high-performance Spiking Neural Network (SNN) accelerators.
The proposed neuron design is based on superconductor electronics fabric, incorporating multiple superconducting loops, each with two Josephson Junctions.
The network exhibits a remarkable throughput of 8.92 GHz while consuming only 1.5 nJ per inference, including the energy consumption associated with cooling to 4K.
- Score: 4.216765320139095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel high-fan-in differential superconductor neuron structure designed for
ultra-high-performance Spiking Neural Network (SNN) accelerators is presented.
Utilizing a high-fan-in neuron structure allows us to design SNN accelerators
with more synaptic connections, enhancing the overall network capabilities. The
proposed neuron design is based on superconductor electronics fabric,
incorporating multiple superconducting loops, each with two Josephson
Junctions. This arrangement enables each input data branch to have positive and
negative inductive coupling, supporting excitatory and inhibitory synaptic
data. Compatibility with synaptic devices and thresholding operation is
achieved using a single flux quantum (SFQ) pulse-based logic style. The neuron
design, along with ternary synaptic connections, forms the foundation for a
superconductor-based SNN inference. To demonstrate the capabilities of our
design, we train the SNN using snnTorch, augmenting the PyTorch framework.
After pruning, the demonstrated SNN inference achieves an impressive 96.1%
accuracy on MNIST images. Notably, the network exhibits a remarkable throughput
of 8.92 GHz while consuming only 1.5 nJ per inference, including the energy
consumption associated with cooling to 4K. These results underscore the
potential of superconductor electronics in developing high-performance and
ultra-energy-efficient neural network accelerator architectures.
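As an illustrative companion to the abstract, the sketch below shows how an SNN with ternary {-1, 0, +1} synaptic weights might be trained in snnTorch on top of PyTorch. The layer sizes, ternarization threshold, and straight-through estimator are assumptions made for this sketch, not the authors' actual training pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import snntorch as snn

class TernarySNN(nn.Module):
    """Two-layer LIF network whose effective weights are ternary {-1, 0, +1},
    mirroring excitatory / absent / inhibitory synaptic couplings."""

    def __init__(self, n_in=784, n_hidden=128, n_out=10, beta=0.9, tau=0.05):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden, bias=False)
        self.lif1 = snn.Leaky(beta=beta)
        self.fc2 = nn.Linear(n_hidden, n_out, bias=False)
        self.lif2 = snn.Leaky(beta=beta)
        self.tau = tau  # ternarization threshold (hypothetical value)

    def _ternary(self, w):
        # Weights below tau in magnitude are pruned to 0; the rest become +/-1.
        # The straight-through estimator keeps gradients flowing to the
        # underlying full-precision weights during training.
        w_t = torch.where(w.abs() < self.tau, torch.zeros_like(w), torch.sign(w))
        return w + (w_t - w).detach()

    def forward(self, x, num_steps=25):
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        out = []
        for _ in range(num_steps):
            spk1, mem1 = self.lif1(F.linear(x, self._ternary(self.fc1.weight)), mem1)
            spk2, mem2 = self.lif2(F.linear(spk1, self._ternary(self.fc2.weight)), mem2)
            out.append(spk2)
        return torch.stack(out)  # [num_steps, batch, n_out]
```

Summed output spikes (`out.sum(0)`) can then drive a standard spike-count or cross-entropy loss; the zero entries of the ternary mask play the role of pruned synapses.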
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
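As a rough sketch of the "few bits of payload per spike" idea, the function below quantizes a graded spike amplitude onto 2^bits levels; the paper's actual payload encoding may differ.

```python
import torch

def multilevel_spikes(amplitudes: torch.Tensor, bits: int = 2) -> torch.Tensor:
    """Quantize graded spike amplitudes in [0, 1] onto 2**bits discrete levels,
    so each emitted spike can carry `bits` bits of payload (illustrative only)."""
    levels = 2 ** bits - 1
    return torch.round(amplitudes.clamp(0.0, 1.0) * levels) / levels

print(multilevel_spikes(torch.tensor([0.10, 0.45, 0.90])))
# tensor([0.0000, 0.3333, 1.0000])
```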
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Spiker+: a framework for the generation of efficient Spiking Neural Networks FPGA accelerators for inference at the edge [49.42371633618761]
Spiker+ is a framework for generating efficient, low-power, and low-area customized Spiking Neural Network (SNN) accelerators on FPGAs for inference at the edge.
Spiker+ is tested on two benchmark datasets: MNIST and the Spiking Heidelberg Digits (SHD).
arXiv Detail & Related papers (2024-01-02T10:42:42Z)
- Deep Pulse-Coupled Neural Networks [31.65350290424234]
Spiking Neural Networks (SNNs) capture the information processing mechanism of the brain by taking advantage of spiking neurons.
In this work, we leverage a more biologically plausible neural model with complex dynamics, i.e., a pulse-coupled neural network (PCNN).
We construct deep pulse-coupled neural networks (DPCNNs) by replacing commonly used LIF neurons in SNNs with PCNN neurons.
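For reference, a minimal discrete-time version of the LIF update that DPCNNs replace (soft reset chosen here for illustration):

```python
def lif_step(mem: float, current: float, beta: float = 0.9, threshold: float = 1.0):
    """One discrete-time LIF update: leak, integrate the input current, then
    fire and soft-reset when the membrane potential crosses the threshold."""
    mem = beta * mem + current           # leaky integration
    spike = 1.0 if mem >= threshold else 0.0
    mem -= spike * threshold             # soft reset on firing
    return spike, mem
```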
arXiv Detail & Related papers (2023-12-24T08:26:00Z)
- Hybrid Synaptic Structure for Spiking Neural Network Realization [0.0]
This paper introduces a compact SFQ-based synapse design that applies positive and negative weighted inputs to the JJ-Soma.
The JJ-Synapse can operate at ultra-high frequencies, exhibits orders of magnitude lower power consumption than CMOS counterparts, and can be conveniently fabricated using commercial Nb processes.
arXiv Detail & Related papers (2023-11-13T22:42:07Z)
- Speed Limits for Deep Learning [67.69149326107103]
Recent advancement in thermodynamics allows bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- On-Sensor Data Filtering using Neuromorphic Computing for High Energy Physics Experiments [1.554920942634392]
We present our approach for developing a compact neuromorphic model that filters out the sensor data based on the particle's transverse momentum.
The incoming charge waveforms are converted to streams of binary-valued events, which are then processed by the SNN.
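The summary does not specify the conversion scheme; a simple rising-edge threshold encoder like the hypothetical one below conveys the general idea of turning charge waveforms into binary event streams.

```python
def waveform_to_events(samples, threshold):
    """Emit a binary event whenever the charge waveform rises through the
    threshold (one plausible scheme; the paper's converter may differ)."""
    events, above = [], False
    for v in samples:
        fired = v >= threshold and not above   # rising-edge crossings only
        events.append(1 if fired else 0)
        above = v >= threshold
    return events

print(waveform_to_events([0.1, 0.6, 0.8, 0.3, 0.7], threshold=0.5))  # [0, 1, 0, 0, 1]
```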
arXiv Detail & Related papers (2023-07-20T21:25:25Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
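For context, the characteristic NNs-Hp building block is an elementwise (Hadamard) product of linear projections of the input; a toy degree-2 polynomial-net block follows (shapes are arbitrary):

```python
import torch

# Degree-2 polynomial-net block: Hadamard (elementwise) product of two
# linear transforms of the same input, the structure the NTK analysis covers.
x = torch.randn(4, 16)               # batch of 4 inputs
W1 = torch.randn(32, 16)
W2 = torch.randn(32, 16)
h = (x @ W1.T) * (x @ W2.T)          # Hadamard product of the projections
print(h.shape)                       # torch.Size([4, 32])
```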
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1,000 time steps) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
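Such long spike trains typically stem from conventional rate coding, where each input value is represented by a spike count accumulated over many time steps; a toy Bernoulli rate encoder illustrates why accuracy improves with train length:

```python
import torch

def rate_encode(intensity: float, num_steps: int = 1000) -> torch.Tensor:
    """Bernoulli rate coding: at each time step a spike fires with probability
    equal to the normalized input intensity in [0, 1]."""
    return torch.bernoulli(torch.full((num_steps,), intensity))

spikes = rate_encode(0.7)
print(int(spikes.sum()))  # roughly 700 spikes over 1000 steps
```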
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
- Resonant tunnelling diode nano-optoelectronic spiking nodes for neuromorphic information processing [0.0]
We introduce an optoelectronic artificial neuron capable of operating at ultrafast rates and with low energy consumption.
The proposed system combines an excitable resonant tunnelling diode (RTD) element with a nanoscale light source.
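The listing gives no model for the RTD node; purely as a generic stand-in, the FitzHugh-Nagumo sketch below (illustrative parameters, not from the paper) shows what "excitable" means dynamically: a brief sub-threshold input decays quietly, while a super-threshold input triggers one full spike before the node returns to rest.

```python
# Generic excitable dynamics (FitzHugh-Nagumo) as a stand-in for the RTD node;
# all parameters are illustrative, not taken from the paper.
def simulate(pulse_amplitude, steps=4000, dt=0.01, a=0.7, b=0.8, eps=0.08):
    v, w, trace = -1.2, -0.62, []          # start near the resting state
    for t in range(steps):
        I = pulse_amplitude if 100 <= t < 200 else 0.0  # brief input pulse
        v += dt * (v - v**3 / 3 - w + I)   # fast (voltage-like) variable
        w += dt * eps * (v + a - b * w)    # slow recovery variable
        trace.append(v)
    return trace

# Sub-threshold input decays quietly; super-threshold input fires a full spike.
print(max(simulate(0.2)), max(simulate(1.0)))  # ~ -0.9 vs ~ +1.9
```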
arXiv Detail & Related papers (2021-07-14T14:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.