An On-Chip Trainable Neuron Circuit for SFQ-Based Spiking Neural
Networks
- URL: http://arxiv.org/abs/2310.07824v1
- Date: Wed, 11 Oct 2023 19:04:33 GMT
- Title: An On-Chip Trainable Neuron Circuit for SFQ-Based Spiking Neural
Networks
- Authors: Beyza Zeynep Ucpinar, Mustafa Altay Karamuftuoglu, Sasan Razmkhah,
Massoud Pedram
- Abstract summary: We present an on-chip trainable neuron circuit for spiking neural networks (SNNs).
Our proposed circuit suits bio-inspired, spike-based, time-dependent data computation for training SNNs.
The circuits are designed and optimized for the MIT LL SFQ5ee fabrication process.
- Score: 4.825037489691159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an on-chip trainable neuron circuit. Our proposed circuit suits
bio-inspired spike-based time-dependent data computation for training spiking
neural networks (SNN). The thresholds of neurons can be increased or decreased
depending on the desired application-specific spike generation rate. This
mechanism provides us with a flexible design and scalable circuit structure. We
demonstrate the trainable neuron structure under different operating scenarios.
The circuits are designed and optimized for the MIT LL SFQ5ee fabrication
process. Margin values for all parameters are above 25% with a 3 GHz throughput
for a 16-input neuron.
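The abstract describes a neuron whose firing threshold can be raised or lowered until the spike generation rate matches an application-specific target. The actual design is a superconducting single-flux-quantum (SFQ) circuit; the sketch below is only a behavioral Python model of that threshold-adjustment idea. The class name, the homeostatic step rule, and all parameter values are illustrative assumptions, not the authors' circuit.

```python
import numpy as np

class TrainableThresholdNeuron:
    """Behavioral sketch of a spiking neuron with a tunable firing threshold.

    Not the SFQ circuit from the paper: it only illustrates nudging the
    threshold up or down until the spike rate matches an
    application-specific target, as the abstract describes.
    """

    def __init__(self, n_inputs=16, threshold=8.0, step=0.25):
        self.weights = np.ones(n_inputs)  # 16-input neuron, as in the paper
        self.threshold = threshold        # adjustable firing threshold
        self.step = step                  # adjustment step size (assumed)

    def forward(self, spikes):
        """spikes: binary vector of length n_inputs for one time step."""
        return self.weights @ spikes >= self.threshold

    def adjust_threshold(self, observed_rate, target_rate):
        """Homeostatic rule (assumed): raise the threshold if the neuron
        fires too often, lower it if it fires too rarely."""
        if observed_rate > target_rate:
            self.threshold += self.step
        elif observed_rate < target_rate:
            self.threshold -= self.step

# Drive the neuron with random spike volleys and tune its firing rate.
rng = np.random.default_rng(0)
neuron = TrainableThresholdNeuron()
for epoch in range(100):
    fired = [neuron.forward(rng.integers(0, 2, 16)) for _ in range(200)]
    neuron.adjust_threshold(np.mean(fired), target_rate=0.2)
print(f"tuned threshold: {neuron.threshold:.2f}")
```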
Related papers
- Language Model Circuits Are Sparse in the Neuron Basis [50.460651620833055]
We show that MLP neurons are as sparse a feature basis as SAEs. This work advances automated interpretability of language models without additional training costs.
arXiv Detail & Related papers (2026-01-30T05:41:19Z) - Catwalk: Unary Top-K for Efficient Ramp-No-Leak Neuron Design for Temporal Neural Networks [3.0670569650183928]
We propose a Catwalk neuron implementation by relocating spikes in a spike volley as a sorted subset cluster via unary top-k. Catwalk is 1.39x and 1.86x better in area and power, respectively, compared to existing RNL neurons.
arXiv Detail & Related papers (2025-08-28T23:50:36Z) - Multiplication-Free Parallelizable Spiking Neurons with Efficient Spatio-Temporal Dynamics [40.43988645674521]
Spiking Neural Networks (SNNs) are distinguished from Artificial Neural Networks (ANNs) by their complex neuronal dynamics and sparse binary activations (spikes) inspired by the biological neural system.
Traditional neuron models use iterative step-by-step dynamics, resulting in serial computation and slow training speed for SNNs.
Recently, parallelizable spiking neuron models have been proposed to fully utilize the massive parallel computing ability of graphics processing units to accelerate the training of SNNs (a no-reset parallel formulation is sketched after this list).
arXiv Detail & Related papers (2025-01-24T13:44:08Z) - Gated Parametric Neuron for Spike-based Audio Recognition [26.124844943674407]
Spiking neural networks (SNNs) aim to simulate real neural networks in the human brain with biologically plausible neurons.
This paper proposes a gated parametric neuron (GPN) to process spatio-temporal information effectively with a gating mechanism.
arXiv Detail & Related papers (2024-12-02T03:46:26Z) - Time-independent Spiking Neuron via Membrane Potential Estimation for Efficient Spiking Neural Networks [4.142699381024752]
The computational inefficiency of spiking neural networks (SNNs) is primarily due to the sequential updates of the membrane potential.
We propose Membrane Potential Estimation Parallel Spiking Neurons (MPE-PSN), a parallel computation method for spiking neurons.
Our approach exhibits promise for enhancing computational efficiency, particularly under conditions of elevated neuron density.
arXiv Detail & Related papers (2024-09-08T05:14:22Z) - Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z) - Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons [1.7056768055368383]
Spiking neural networks (SNN) are able to learn features while using less energy, especially on neuromorphic hardware.
The most widely used spiking neuron in deep learning is the Leaky Integrate-and-Fire (LIF) neuron; a minimal discrete-time LIF update is sketched after this list.
arXiv Detail & Related papers (2023-06-22T04:25:27Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic
Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z) - Training of Quantized Deep Neural Networks using a Magnetic Tunnel
Junction-Based Synapse [23.08163992580639]
Quantized neural networks (QNNs) are being actively researched as a solution for the computational complexity and memory intensity of deep neural networks.
We show how magnetic tunnel junction (MTJ) devices can be used to support QNN training.
We introduce a novel synapse circuit that uses the MTJ behavior to support the quantized update.
arXiv Detail & Related papers (2019-12-29T11:36:32Z) - Structural plasticity on an accelerated analog neuromorphic hardware
system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
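Several entries above build on the Leaky Integrate-and-Fire (LIF) neuron. As a reference point, here is a minimal textbook-style discrete-time LIF update in Python; it is a standard formulation, not taken from any one of the listed papers:

```python
import numpy as np

def lif_sequential(inputs, decay=0.9, threshold=1.0):
    """Discrete-time LIF neuron: v[t] = decay * v[t-1] + i[t],
    spike when v crosses the threshold, then reset to zero.
    The serial dependence on v[t-1] is what makes SNN training slow."""
    v = 0.0
    spikes = []
    for i_t in inputs:
        v = decay * v + i_t
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # hard reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

print(lif_sequential(np.array([0.5, 0.5, 0.5, 0.0, 0.9])))  # [0 0 1 0 0]
```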
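The parallelizable-neuron entries above (the GPU-parallel neurons and MPE-PSN) attack exactly this serial dependence. A common simplification, sketched here as an assumption rather than as any one paper's method, is that without the reset the membrane potential is a linear recurrence with the closed form v[t] = sum over k <= t of decay^(t-k) * i[k], which can be evaluated for all time steps at once:

```python
import numpy as np

def membrane_parallel(inputs, decay=0.9):
    """Closed-form membrane potential with the reset ignored:
    v[t] = sum_{k <= t} decay**(t-k) * i[k].
    All time steps are computed in one matrix product instead of a serial
    scan; how spikes and resets are layered back on top is where the
    cited methods differ."""
    T = len(inputs)
    t = np.arange(T)
    # Lower-triangular kernel of decay**(t-k); zero above the diagonal.
    kernel = np.tril(decay ** (t[:, None] - t[None, :]))
    return kernel @ inputs

inputs = np.array([0.5, 0.5, 0.5, 0.0, 0.9])
# Matches lif_sequential's membrane trace up to the first reset.
print(membrane_parallel(inputs))
```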
This list is automatically generated from the titles and abstracts of the papers on this site.