Neural Sampling Machine with Stochastic Synapse allows Brain-like
Learning and Inference
- URL: http://arxiv.org/abs/2102.10477v1
- Date: Sat, 20 Feb 2021 23:45:24 GMT
- Title: Neural Sampling Machine with Stochastic Synapse allows Brain-like
Learning and Inference
- Authors: Sourav Dutta, Georgios Detorakis, Abhishek Khanna, Benjamin Grisafe,
Emre Neftci and Suman Datta
- Abstract summary: We introduce a new class of NN called Neural-Sampling-Machine that exploits stochasticity in synaptic connections for approximate Bayesian inference.
We experimentally show that the inherent stochastic switching of the selector element between the insulator and metallic state introduces a multiplicative noise within the synapses of NSM.
We report 98.25% accuracy on a standard image classification task as well as estimation of data uncertainty in rotated samples.
- Score: 6.138129592577736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many real-world mission-critical applications require continual online
learning from noisy data and real-time decision making with a defined
confidence level. Probabilistic models and stochastic neural networks can
explicitly handle uncertainty in data and allow adaptive learning-on-the-fly,
but their implementation in a low-power substrate remains a challenge. Here, we
introduce a novel hardware fabric that implements a new class of stochastic NN
called Neural-Sampling-Machine that exploits stochasticity in synaptic
connections for approximate Bayesian inference. Harnessing the inherent
non-linearities and stochasticity occurring at the atomic level in emerging
materials and devices allows us to capture the synaptic stochasticity occurring
at the molecular level in biological synapses. We experimentally demonstrate
an in-silico hybrid stochastic synapse by pairing a ferroelectric field-effect
transistor (FeFET)-based analog weight cell with a two-terminal stochastic selector
element. Such a stochastic synapse can be integrated within the
well-established crossbar array architecture for compute-in-memory. We
experimentally show that the inherent stochastic switching of the selector
element between the insulator and metallic state introduces a multiplicative
stochastic noise within the synapses of the NSM that samples the conductance states
of the FeFET, both during learning and inference. We perform network-level
simulations to highlight the salient automatic weight normalization feature
introduced by the stochastic synapses of the NSM that paves the way for
continual online learning without any offline Batch Normalization. We also
showcase the Bayesian inferencing capability introduced by the stochastic
synapse during inference mode, thus accounting for uncertainty in data. We
report 98.25% accuracy on a standard image classification task as well as
estimation of data uncertainty in rotated samples.
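To make the synapse-level mechanism concrete, the following is a minimal sketch (in numpy, not the authors' code) of a fully connected layer whose weights are multiplied by fresh Bernoulli samples on every forward pass, mirroring the multiplicative noise that the stochastic selector injects into each FeFET weight cell. The layer sizes, the on-probability p_on, and all identifiers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticSynapseLayer:
    """Fully connected layer with always-on multiplicative Bernoulli noise."""

    def __init__(self, n_in, n_out, p_on=0.5):
        # Analog weights standing in for FeFET conductance states (illustrative init).
        self.w = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        # Probability that a selector sits in its metallic (conducting) state.
        self.p_on = p_on

    def forward(self, x):
        # A fresh noise mask is drawn on every presentation, during both
        # learning and inference, so each pass samples the weight states.
        xi = rng.binomial(1, self.p_on, size=self.w.shape)
        return x @ (self.w * xi)

layer = StochasticSynapseLayer(n_in=784, n_out=10)
x = rng.random((1, 784))
print(layer.forward(x))  # a different pre-activation sample on every call
print(layer.forward(x))
```

In the paper, this always-on multiplicative sampling is what provides the automatic weight-normalization effect that removes the need for offline Batch Normalization; the sketch only illustrates the noise-injection mechanics, not that network-level property.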
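A second sketch shows an assumed inference-time workflow for the Bayesian behaviour described above: because the synapses remain stochastic during inference, repeated forward passes over one input yield a distribution of outputs, whose mean gives the prediction and whose spread serves as a data-uncertainty estimate, e.g. for rotated test samples. The sample count and the entropy-based uncertainty measure are assumptions, not details from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_uncertainty(stochastic_layer, x, n_samples=100):
    # Each forward pass uses an independent draw of the synaptic noise,
    # so averaging approximates a Bayesian model average over weight samples.
    probs = np.stack([softmax(stochastic_layer.forward(x)) for _ in range(n_samples)])
    mean_probs = probs.mean(axis=0)
    # Predictive entropy as a simple scalar uncertainty score per input.
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return mean_probs.argmax(axis=-1), entropy

# Usage with the StochasticSynapseLayer sketched above:
# label, uncertainty = predict_with_uncertainty(layer, x)
# Strongly rotated inputs would be expected to yield higher entropy than upright ones.
```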
Related papers
- Heterogeneous quantization regularizes spiking neural network activity [0.0]
We present a data-blind neuromorphic signal conditioning strategy whereby analog data are normalized and quantized into spike phase representations.
We extend this mechanism by adding a data-aware calibration step whereby the range and density of the quantization weights adapt to accumulated input statistics.
arXiv Detail & Related papers (2024-09-27T02:25:44Z)
- Neuromorphic Hebbian learning with magnetic tunnel junction synapses [41.92764939721262]
We propose and experimentally demonstrate neuromorphic networks that provide high-accuracy inference thanks to the binary resistance states of magnetic tunnel junctions (MTJs).
We performed the first demonstration of a neuromorphic network directly implemented with MTJ synapses, for both inference and spike-timing-dependent plasticity learning.
We also demonstrated through simulation that the proposed system for unsupervised Hebbian learning with STT-MTJ synapses can achieve competitive accuracies for MNIST handwritten digit recognition.
arXiv Detail & Related papers (2023-08-21T19:58:44Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Machine learning using magnetic stochastic synapses [0.9236074230806579]
We present a methodology for exploiting the traditionally detrimental stochastic effects in magnetic domain-wall motion in nanowires.
We demonstrate functional binary synapses alongside a gradient learning rule that allows them to be trained, with applicability to a range of systems.
For single measurements, the rule results in binary synapses with minimal stochasticity, sacrificing potential performance for robustness.
This observation allows us to choose design principles depending on the desired performance and the device's operational speed and energy cost.
arXiv Detail & Related papers (2023-03-03T12:33:29Z)
- Importance sampling for stochastic quantum simulations [68.8204255655161]
The qDrift protocol builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
arXiv Detail & Related papers (2022-12-12T15:06:32Z)
- Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z)
- Shape-Dependent Multi-Weight Magnetic Artificial Synapses for Neuromorphic Computing [4.567086462167893]
In neuromorphic computing, artificial synapses provide a multi-weight conductance state that is set based on inputs from neurons, analogous to the brain.
Here, we measure artificial synapses based on magnetic materials that use a magnetic tunnel junction and a magnetic domain wall.
arXiv Detail & Related papers (2021-11-22T20:27:14Z)
- Spatiotemporal Spike-Pattern Selectivity in Single Mixed-Signal Neurons with Balanced Synapses [0.27998963147546135]
Mixed-signal neuromorphic processors could be used for inference and learning.
We show how inhomogeneous synaptic circuits could be utilized for resource-efficient implementation of network layers.
arXiv Detail & Related papers (2021-06-10T12:04:03Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.