Hybrid Synaptic Structure for Spiking Neural Network Realization
- URL: http://arxiv.org/abs/2311.07787v1
- Date: Mon, 13 Nov 2023 22:42:07 GMT
- Title: Hybrid Synaptic Structure for Spiking Neural Network Realization
- Authors: Sasan Razmkhah, Mustafa Altay Karamuftuoglu and Ali Bozbey
- Abstract summary: This paper introduces a compact SFQ-based synapse design that applies positive and negative weighted inputs to the JJ-Soma.
The JJ-Synapse can operate at ultra-high frequencies, exhibits orders of magnitude lower power consumption than CMOS counterparts, and can be conveniently fabricated using commercial Nb processes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks and neuromorphic computing play pivotal roles in deep
learning and machine vision. Due to their dissipative nature and inherent
limitations, traditional semiconductor-based circuits face challenges in
realizing ultra-fast and low-power neural networks. However, the spiking
behavior characteristic of single flux quantum (SFQ) circuits positions them as
promising candidates for spiking neural networks (SNNs). Our previous work
showcased a JJ-Soma design capable of operating at tens of gigahertz while
consuming only a fraction of the power compared to traditional circuits, as
documented in [1]. This paper introduces a compact SFQ-based synapse design
that applies positive and negative weighted inputs to the JJ-Soma. Using an
RSFQ synapse empowers us to replicate the functionality of a biological neuron,
a crucial step in realizing a complete SNN. The JJ-Synapse can operate at
ultra-high frequencies, exhibits orders of magnitude lower power consumption
than CMOS counterparts, and can be conveniently fabricated using commercial Nb
processes. Furthermore, the network's flexibility enables modifications by
incorporating cryo-CMOS circuits for weight value adjustments. In our endeavor,
we have successfully designed, fabricated, and partially tested the JJ-Synapse
within our cryocooler system. Integration with the JJ-Soma further facilitates
the realization of a high-speed inference SNN.
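The paper provides no software model, but the behavior it describes, a synapse applying positive and negative weighted inputs to a soma that integrates and fires, maps naturally onto a leaky integrate-and-fire neuron. The Python sketch below is a rough software analogue under that reading; the function name, threshold, and leak values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lif_neuron(spike_trains, weights, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron driven by signed synaptic weights.

    spike_trains: (T, N) 0/1 array of input spikes from N synapses.
    weights: (N,) array; positive entries act as excitatory synapses and
    negative entries as inhibitory ones, analogous to the positive and
    negative weighted inputs the JJ-Synapse applies to the JJ-Soma.
    All parameter values here are illustrative, not from the paper.
    """
    potential = 0.0
    out = np.zeros(spike_trains.shape[0], dtype=int)
    for t, spikes in enumerate(spike_trains):
        potential = leak * potential + spikes @ weights  # integrate with leak
        if potential >= threshold:  # the soma fires and resets
            out[t] = 1
            potential = 0.0
    return out

# Toy usage: two excitatory synapses and one inhibitory synapse.
rng = np.random.default_rng(0)
inputs = (rng.random((100, 3)) < 0.3).astype(int)
print(lif_neuron(inputs, np.array([0.6, 0.4, -0.5])).sum(), "output spikes")
```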
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Hardware-Friendly Implementation of Physical Reservoir Computing with CMOS-based Time-domain Analog Spiking Neurons [0.26963330643873434]
This paper introduces a spiking neural network (SNN) for a hardware-friendly physical reservoir computing (RC) on a complementary metal-oxide-semiconductor (CMOS) platform.
We demonstrate RC through short-term memory and exclusive OR tasks, and the spoken digit recognition task with an accuracy of 97.7%.
arXiv Detail & Related papers (2024-09-18T00:23:00Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces [42.72938925647165]
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia.
Current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring.
We are investigating hybrid spiking neural networks for embedded neural decoding in wireless iBMIs.
arXiv Detail & Related papers (2024-09-06T17:48:44Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Scalable Superconductor Neuron with Ternary Synaptic Connections for Ultra-Fast SNN Hardware [4.216765320139095]
A novel high-fan-in differential superconductor neuron structure is designed for ultra-high-performance Spiking Neural Network (SNN) accelerators.
The proposed neuron design is based on superconductor electronics fabric, incorporating multiple superconducting loops, each with two Josephson Junctions.
The network exhibits a remarkable throughput of 8.92 GHz while consuming only 1.5 nJ per inference, including the energy consumption associated with cooling to 4 K.
arXiv Detail & Related papers (2024-02-26T08:10:14Z)
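The superconducting-loop and Josephson junction details in the entry above are hardware-specific, but the ternary weighting itself is easy to state in software. The following hypothetical Python sketch shows a high-fan-in neuron whose synaptic weights are restricted to {-1, 0, +1}; the fan-in and threshold are invented values, not the paper's.

```python
import numpy as np

def ternary_neuron(spikes, weights, threshold):
    """Fire iff the ternary-weighted sum of input spikes reaches threshold.

    weights are restricted to {-1, 0, +1}: inhibitory, disconnected, or
    excitatory, mirroring the ternary synaptic connections described above.
    The threshold and fan-in below are made-up illustrative values.
    """
    assert set(np.unique(weights)) <= {-1, 0, 1}
    return int(spikes @ weights >= threshold)

# Hypothetical high-fan-in neuron: 1024 inputs with random ternary weights.
rng = np.random.default_rng(1)
w = rng.choice([-1, 0, 1], size=1024)
x = (rng.random(1024) < 0.5).astype(int)
print("fires:", ternary_neuron(x, w, threshold=8))
```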
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
However, SNNs require very long spike trains (up to 1,000 time steps) to reach an accuracy similar to that of their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
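For context on the spike-train lengths cited in the entry above: under plain rate coding, an activation in [0, 1] is represented by the firing rate of a T-step spike train, so representable precision grows with T, and matching ANN accuracy can push T toward 1,000 steps. A minimal sketch, assuming a simple Bernoulli encoder (not the paper's encoding):

```python
import numpy as np

def rate_encode(activation, T, rng):
    """Encode an activation in [0, 1] as a T-step Bernoulli spike train."""
    return (rng.random(T) < activation).astype(int)

# The decoded rate converges to the activation roughly as 1/sqrt(T),
# which is why short trains cost accuracy and long trains cost latency.
rng = np.random.default_rng(2)
for T in (10, 100, 1000):
    train = rate_encode(0.437, T, rng)
    print(f"T={T:4d}: decoded rate = {train.mean():.3f}")
```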
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built on the MP model, which typically formulates a neuron as an activation function applied to the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications [6.9276012494882835]
We present a novel method for designing neural network topologies that directly map to a highly efficient FPGA implementation.
We show that the combination of sparsity and low-bit activation quantization results in high-speed circuits with small logic depth and low LUT cost.
arXiv Detail & Related papers (2020-04-06T22:15:41Z)
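To make the co-design idea in the LogicNets entry above concrete: a neuron with small fan-in (from sparsity) and few-bit inputs and outputs (from quantization) is a small Boolean function, so it can be exhaustively enumerated and mapped directly to FPGA LUTs. The sketch below illustrates the enumeration only; the names, the clipped-rounding quantizer, and all values are assumptions rather than the paper's actual flow.

```python
import itertools
import numpy as np

def neuron_truth_table(weights, n_bits=2):
    """Tabulate a quantized neuron as a truth table, i.e. a LUT.

    With fan-in F and n_bits per input, the table has 2**(F * n_bits)
    rows, so sparsity (small F) plus low-bit activations is what keeps
    each neuron small enough to map onto FPGA LUTs.
    """
    levels = range(2 ** n_bits)
    table = {}
    for xs in itertools.product(levels, repeat=len(weights)):
        acc = float(np.dot(xs, weights))
        # Quantize the output back to n_bits with a clipped rounding step.
        table[xs] = int(np.clip(round(acc), 0, 2 ** n_bits - 1))
    return table

# Fan-in 3 with 2-bit activations -> a 64-row table, trivially LUT-sized.
lut = neuron_truth_table([0.5, 1.0, -0.25])
print(len(lut), "entries; f(3, 2, 1) =", lut[(3, 2, 1)])
```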
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.