Controllable reset behavior in domain wall-magnetic tunnel junction
artificial neurons for task-adaptable computation
- URL: http://arxiv.org/abs/2101.03095v1
- Date: Fri, 8 Jan 2021 16:50:29 GMT
- Title: Controllable reset behavior in domain wall-magnetic tunnel junction
artificial neurons for task-adaptable computation
- Authors: Samuel Liu, Christopher H. Bennett, Joseph S. Friedman, Matthew J.
Marinella, David Paydarfar, Jean Anne C. Incorvia
- Abstract summary: Domain wall-magnetic tunnel junction (DW-MTJ) devices have been shown to intrinsically capture biological neuron behavior.
We show that edgy-relaxed behavior can be implemented in DW-MTJ artificial neurons via three alternative mechanisms.
- Score: 1.4505273244528207
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic computing with spintronic devices has been of interest due to
the limitations of CMOS-driven von Neumann computing. Domain wall-magnetic
tunnel junction (DW-MTJ) devices have been shown to intrinsically capture
biological neuron behavior. Edgy-relaxed behavior, where a frequently
firing neuron experiences a lower action potential threshold, may provide
additional artificial neuronal functionality when executing repeated tasks. In
this study, we demonstrate that this behavior can be implemented in DW-MTJ
artificial neurons via three alternative mechanisms: shape anisotropy, magnetic
field, and current-driven soft reset. Using micromagnetics and analytical
device modeling to classify the Optdigits handwritten digit dataset, we show
that edgy-relaxed behavior improves both classification accuracy and
classification rate for ordered datasets while sacrificing little to no
accuracy for a randomized dataset. This work establishes methods by which
artificial spintronic neurons can be flexibly adapted to datasets.
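To make the edgy-relaxed mechanism concrete, the following is a minimal
behavioral sketch of a leaky integrate-and-fire neuron whose firing threshold
drops with each spike and relaxes back between spikes. This is an illustrative
abstraction, not the paper's micromagnetic DW-MTJ model; all parameter names
and values are assumptions chosen for the example.

```python
def edgy_relaxed_lif(inputs, v_rest=0.0, v_th0=1.0, tau_v=20.0,
                     tau_th=100.0, dv_th=0.2, v_th_min=0.4, dt=1.0):
    """Leaky integrate-and-fire neuron with an 'edgy-relaxed' threshold.

    Hypothetical sketch: each spike lowers the threshold by dv_th (down to
    v_th_min), and the threshold relaxes back toward v_th0 between spikes,
    so a frequently firing neuron becomes easier to fire.
    """
    v, v_th = v_rest, v_th0
    spikes = []
    for x in inputs:
        v += dt * (-(v - v_rest) / tau_v + x)   # leaky integration
        v_th += dt * (v_th0 - v_th) / tau_th    # threshold relaxes upward
        if v >= v_th:
            spikes.append(1)
            v = v_rest                          # membrane reset
            v_th = max(v_th - dv_th, v_th_min)  # 'edgy' threshold drop
        else:
            spikes.append(0)
    return spikes

# A constant drive fires progressively faster as the threshold drops.
print(sum(edgy_relaxed_lif([0.06] * 500)))
```

In the paper, this threshold modulation is realized physically (shape
anisotropy, an applied magnetic field, or a current-driven soft reset of the
domain wall) rather than as an explicit software variable.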
Related papers
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it accurately predicted simulated neuronal circuit activity and inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- WaLiN-GUI: a graphical and auditory tool for neuron-based encoding [73.88751967207419]
Neuromorphic computing relies on spike-based, energy-efficient communication.
We develop a tool to identify suitable configurations for neuron-based encoding of sample-based data into spike trains.
The WaLiN-GUI is provided open source and with documentation.
arXiv Detail & Related papers (2023-10-25T20:34:08Z)
- Neuromorphic Hebbian learning with magnetic tunnel junction synapses [41.92764939721262]
We propose and experimentally demonstrate neuromorphic networks that provide high-accuracy inference thanks to the binary resistance states of magnetic tunnel junctions (MTJs).
We performed the first demonstration of a neuromorphic network directly implemented with MTJ synapses, for both inference and spike-timing-dependent plasticity learning.
We also demonstrated through simulation that the proposed system for unsupervised Hebbian learning with STT-MTJ synapses can achieve competitive accuracies for MNIST handwritten digit recognition.
arXiv Detail & Related papers (2023-08-21T19:58:44Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
The ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of spontaneous behaviors produced by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Shape-Dependent Multi-Weight Magnetic Artificial Synapses for Neuromorphic Computing [4.567086462167893]
In neuromorphic computing, artificial synapses provide a multi-weight conductance state that is set based on inputs from neurons, analogous to the brain.
Here, we measure artificial synapses based on magnetic materials that use a magnetic tunnel junction and a magnetic domain wall.
arXiv Detail & Related papers (2021-11-22T20:27:14Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Intrinsic Spike Timing Dependent Plasticity in Stochastic Magnetic Tunnel Junctions Mediated by Heat Dynamics [0.0]
Neuromorphic computing aims to mimic the behavior of biological neurons and synapses using solid-state devices and circuits.
We propose a method to implement the Spike Timing Dependent Plasticity (STDP) behavior of biological synapses in Magnetic Tunnel Junction (MTJ) devices.
arXiv Detail & Related papers (2021-08-28T18:02:01Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Domain Wall Leaky Integrate-and-Fire Neurons with Shape-Based Configurable Activation Functions [9.975010985493256]
Spintronic devices exhibit both non-volatile and analog features, which are well-suited to neuromorphic computing.
These novel devices are at the forefront of beyond-CMOS artificial intelligence applications.
This work proposes modifications to these spintronic neurons that enable configuration of the activation functions.
arXiv Detail & Related papers (2020-11-11T21:07:02Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the McCulloch-Pitts (MP) model, which formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons (see the sketch after this entry).
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
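As a reference point for the Flexible Transmitter summary above, the classical
MP formulation it generalizes can be written as an activation function applied
to a weighted aggregation of inputs. A minimal sketch, in which the weights,
bias, and tanh activation are illustrative assumptions:

```python
import numpy as np

def mp_neuron(x, w, b=0.0, activation=np.tanh):
    """Classical MP-style neuron: an activation function applied to the
    real-valued weighted aggregation of incoming signals (illustrative)."""
    return activation(np.dot(w, x) + b)

# Example with three illustrative inputs and weights.
print(mp_neuron(np.array([0.5, -1.0, 2.0]), np.array([0.3, 0.8, -0.1]), b=0.1))
```

The FT model described in the paper replaces this fixed scalar transmission
with a transmitter exhibiting flexible synaptic plasticity; the sketch above
shows only the MP baseline it departs from.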