High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural
Network
- URL: http://arxiv.org/abs/2107.02238v2
- Date: Sat, 1 Oct 2022 00:15:02 GMT
- Title: High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural
Network
- Authors: Pranav O. Mathews and Christian B. Duffee and Abel Thayil and Ty E.
Stovall and Christopher H. Bennett and Felipe Garcia-Sanchez and Matthew J.
Marinella and Jean Anne C. Incorvia and Naimul Hassan and Xuan Hu and Joseph
S. Friedman
- Abstract summary: Neuromorphic computing systems overcome the limitations of traditional von Neumann computing architectures.
Recent research has demonstrated that memristors and spintronic devices can boost efficiency and speed in various neural network designs.
This paper presents a biologically inspired fully spintronic neuron used in a fully spintronic Hopfield RNN.
- Score: 1.1965429476528429
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic computing systems overcome the limitations of traditional von
Neumann computing architectures. These computing systems can be further
improved upon by using emerging technologies that are more efficient than CMOS
for neural computation. Recent research has demonstrated that memristors and
spintronic devices can boost efficiency and speed in various neural network
designs. This paper presents a biologically inspired fully spintronic neuron used
in a fully spintronic Hopfield RNN. The network is used to solve tasks, and the
results are compared against those of current Hopfield neuromorphic
architectures which use emerging technologies.
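The spintronic device physics is the paper's contribution; for reference, the classical Hopfield model it implements (Hebbian weight storage, asynchronous single-neuron updates) can be sketched as follows. All names and constants here are illustrative, not taken from the paper:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / n

def recall(w, state, steps=100, rng=None):
    """Asynchronous recall: one randomly chosen neuron updates at a time,
    which is the update discipline an asynchronous RNN realizes in hardware."""
    rng = rng if rng is not None else np.random.default_rng(0)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if w[i] @ state >= 0 else -1
    return state
```

Storing a pattern and probing with a corrupted copy lets the network relax back to the stored attractor, which is how such networks "solve tasks" like associative recall.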
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Astrocyte-Integrated Dynamic Function Exchange in Spiking Neural Networks [0.0]
This paper presents an innovative methodology for improving the robustness and computational efficiency of Spiking Neural Networks (SNNs).
The proposed approach integrates astrocytes, a type of glial cell prevalent in the human brain, into SNNs, creating astrocyte-augmented networks.
Notably, our astrocyte-augmented SNN displays near-zero latency and theoretically infinite throughput, implying exceptional computational efficiency.
arXiv Detail & Related papers (2023-09-15T08:02:29Z)
- Computational and Storage Efficient Quadratic Neurons for Deep Neural Networks [10.379191500493503]
This work introduces an efficient quadratic neuron architecture distinguished by its enhanced utilization of second-order computational information.
Experimental results have demonstrated that the proposed quadratic neuron structure exhibits superior computational and storage efficiency across various tasks.
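This summary does not give the authors' exact parameterization; as an illustration only, a common way to make a quadratic neuron storage-efficient is to replace the full second-order form x^T W x (which needs O(n^2) weights) with a rank-1 factorization. The function and weight names below are assumptions for the sketch:

```python
import numpy as np

def quadratic_neuron(x, w1, w2, w3, b):
    """Factorized quadratic pre-activation: (w1.x)(w2.x) + w3.x + b.
    The rank-1 product term captures second-order interactions
    with O(n) parameters instead of the O(n^2) of a full x^T W x."""
    return (w1 @ x) * (w2 @ x) + w3 @ x + b
```

A linear neuron is the special case where either w1 or w2 is zero, so the quadratic form strictly extends it at modest extra cost.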
arXiv Detail & Related papers (2023-06-10T11:25:31Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in a 180 nm process technology with two hierarchically organized populations.
The proposed approach enables the developments of biomimetic neuromorphic system and various low-power, and low-latency inference processing applications.
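The neuron model is named but not detailed in this summary; a minimal discrete-time integer quadratic integrate-and-fire update (an illustrative sketch with assumed constants, not the POPPINS implementation) looks like:

```python
def qif_step(v, i_in, v_thresh=64, v_reset=0, shift=4):
    """One integer quadratic integrate-and-fire step.
    The membrane potential grows with a v^2 term, scaled by a
    right-shift so the arithmetic stays integer-only (digital-friendly).
    Returns (new_v, spiked). All constants are illustrative."""
    v = v + (v * v >> shift) + i_in
    if v >= v_thresh:
        return v_reset, True
    return v, False
```

The quadratic term makes the potential accelerate toward threshold, giving the sharp spike upswing that plain leaky integrate-and-fire neurons lack, while shifts and adds keep the hardware cost low.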
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.