Domain Wall Leaky Integrate-and-Fire Neurons with Shape-Based
Configurable Activation Functions
- URL: http://arxiv.org/abs/2011.06075v1
- Date: Wed, 11 Nov 2020 21:07:02 GMT
- Title: Domain Wall Leaky Integrate-and-Fire Neurons with Shape-Based
Configurable Activation Functions
- Authors: Wesley H. Brigner, Naimul Hassan, Xuan Hu, Christopher H. Bennett,
Felipe Garcia-Sanchez, Can Cui, Alvaro Velasquez, Matthew J. Marinella, Jean
Anne C. Incorvia, Joseph S. Friedman
- Abstract summary: Spintronic devices exhibit both non-volatile and analog features, which are well-suited to neuromorphic computing.
These novel devices are at the forefront of beyond-CMOS artificial intelligence applications.
This work proposes modifications to these spintronic neurons that enable configuration of the activation functions.
- Score: 9.975010985493256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complementary metal oxide semiconductor (CMOS) devices display volatile
characteristics, and are not well suited for analog applications such as
neuromorphic computing. Spintronic devices, on the other hand, exhibit both
non-volatile and analog features, which are well-suited to neuromorphic
computing. Consequently, these novel devices are at the forefront of
beyond-CMOS artificial intelligence applications. However, a large quantity of
these artificial neuromorphic devices still require the use of CMOS, which
decreases the efficiency of the system. To resolve this, we have previously
proposed a number of artificial neurons and synapses that do not require CMOS
for operation. Although these devices are a significant improvement over
previous renditions, their ability to enable neural network learning and
recognition is limited by their intrinsic activation functions. This work
proposes modifications to these spintronic neurons that enable configuration of
the activation functions through control of the shape of a magnetic domain wall
track. Linear and sigmoidal activation functions are demonstrated in this work,
which can be extended through a similar approach to enable a wide variety of
activation functions.
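The abstract's configurable activation can be illustrated behaviorally. The sketch below is not a device-level model of the domain wall track; it is a minimal, hypothetical leaky integrate-and-fire simulation in which two software transfer functions (linear and sigmoidal) stand in for the track shapes the paper realizes in hardware. All parameter values are illustrative.

```python
import numpy as np

def lif_rate(input_current, leak=0.1, threshold=1.0, dt=0.01, t_max=50.0):
    """Simulate a leaky integrate-and-fire neuron and return its mean firing rate."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += (input_current - leak * v) * dt  # leaky integration
        if v >= threshold:                    # fire and reset
            spikes += 1
            v = 0.0
    return spikes / t_max

# Hypothetical "track shapes" mapped onto neuron transfer functions:
def linear(x):
    return np.clip(x, 0.0, 1.0)

def sigmoidal(x):
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))
```

With a sub-threshold input the neuron never fires; above threshold, the firing rate grows with the input current, and the chosen transfer function shapes the mapping from input to activation.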
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive
Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- A perspective on physical reservoir computing with nanomagnetic
devices [1.9007022664972197]
We focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices.
We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
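As a point of reference for the reservoir computing paradigm mentioned above, the following is a minimal echo state network sketch in software (not a spintronic implementation, and not the paper's method): the recurrent weights stay fixed and random, and only the linear readout is trained, which is what makes the scheme attractive for hard-to-program physical substrates. The task, reservoir size, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; only the linear readout is trained.
N, T = 100, 500
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

u = rng.uniform(-1, 1, size=(T, 1))   # random input sequence
target = np.roll(u[:, 0], 1)          # task: recall the previous input
target[0] = 0.0

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])  # reservoir update
    states[t] = x

# Ridge regression for the readout -- the only trained parameters.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
pred = states @ W_out
```

After a short washout, the trained readout reconstructs the delayed input closely, despite the recurrent weights never being touched by training.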
arXiv Detail & Related papers (2022-12-09T13:43:21Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- Parametrized constant-depth quantum neuron [56.51261027148046]
We propose a framework that builds quantum neurons based on kernel machines.
We present here a neuron that applies a tensor-product feature mapping to an exponentially larger space.
It turns out that parametrization allows the proposed neuron to optimally fit underlying patterns that existing neurons cannot fit.
arXiv Detail & Related papers (2022-02-25T04:57:41Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Growing Cosine Unit: A Novel Oscillatory Activation Function That Can
Speedup Training and Reduce Parameters in Convolutional Neural Networks [0.1529342790344802]
Convolutional neural networks have been successful in solving many socially important and economically significant problems.
A key discovery that made training deep networks feasible was the adoption of the Rectified Linear Unit (ReLU) activation function.
The new activation function C(z) = z cos(z) outperforms sigmoid, Swish, Mish, and ReLU on a variety of architectures.
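The stated formula is simple to reproduce. A minimal implementation of the Growing Cosine Unit, with ReLU for comparison, shows its oscillatory character: unlike ReLU, it is odd-symmetric, takes negative values, and has infinitely many zero crossings.

```python
import numpy as np

def gcu(z):
    """Growing Cosine Unit: C(z) = z * cos(z), an oscillatory activation."""
    return z * np.cos(z)

def relu(z):
    """Rectified Linear Unit, for comparison."""
    return np.maximum(z, 0.0)

z = np.linspace(-10.0, 10.0, 201)
print(gcu(z).min() < 0.0)   # GCU produces negative outputs
print(relu(z).min() == 0.0) # ReLU clips all negatives to zero
```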
arXiv Detail & Related papers (2021-08-30T01:07:05Z)
- Controllable reset behavior in domain wall-magnetic tunnel junction
artificial neurons for task-adaptable computation [1.4505273244528207]
Domain wall-magnetic tunnel junction (DW-MTJ) devices have been shown to be able to intrinsically capture biological neuron behavior.
We show that edgy-relaxed behavior can be implemented in DW-MTJ artificial neurons via three alternative mechanisms.
arXiv Detail & Related papers (2021-01-08T16:50:29Z)
- Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
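For context on the unsupervised STDP baseline the entry re-examines, the standard pairwise exponential STDP window can be sketched as follows. This is the textbook pairwise rule, not the paper's proposed multi-spike learning rules; the amplitudes and time constant are illustrative.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pairwise STDP weight change for a spike-time difference
    dt_ms = t_post - t_pre (milliseconds).

    Pre-before-post (dt_ms > 0) potentiates the synapse;
    post-before-pre (dt_ms < 0) depresses it. The magnitude of
    the change decays exponentially with the timing difference."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)
```

A slightly larger depression amplitude than potentiation amplitude is a common choice to keep unsupervised learning stable.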
arXiv Detail & Related papers (2020-05-02T06:41:20Z) - CMOS-Free Multilayer Perceptron Enabled by Four-Terminal MTJ Device [1.3381026029365257]
Neuromorphic computing promises revolutionary improvements over conventional systems for applications that process unstructured information.
This work proposes a new spintronic neuron that enables purely spintronic multilayer perceptrons.
arXiv Detail & Related papers (2020-02-03T16:14:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.