Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks
- URL: http://arxiv.org/abs/2402.16628v1
- Date: Mon, 26 Feb 2024 15:01:54 GMT
- Title: Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks
- Authors: Christoph Weilenmann, Alexandros Ziogas, Till Zellweger, Kevin
Portner, Marko Mladenović, Manasa Kaniselvan, Timoleon Moraitis, Mathieu
Luisier, Alexandros Emboras
- Abstract summary: We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
- Score: 71.79257685917058
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Biological neural networks include not only long-term memory and
weight multiplication capabilities, as commonly assumed in artificial neural
networks, but also more complex functions such as short-term memory, short-term
plasticity, and meta-plasticity, all collocated within each synapse. Here, we
demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all
these synaptic functions. These memristors operate in a non-filamentary, low
conductance regime, which enables stable and energy efficient operation. They
can act as multi-functional hardware synapses in a class of bio-inspired deep
neural networks (DNN) that make use of both long- and short-term synaptic
dynamics and are capable of meta-learning or "learning-to-learn". The resulting
bio-inspired DNN is then trained to play the video game Atari Pong, a complex
reinforcement learning task in a dynamic environment. Our analysis shows that
the energy consumption of the DNN with multi-functional memristive synapses
decreases by about two orders of magnitude as compared to a pure GPU
implementation. Based on this finding, we infer that memristive devices with a
better emulation of synaptic functionalities not only broaden the applicability
of neuromorphic computing, but could also improve the performance and reduce
the energy costs of certain artificial intelligence applications.
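The abstract's central idea, a single synapse that combines a slowly learned long-term weight with fast, decaying short-term dynamics, can be sketched in a few lines. The class name, time constants, and update rule below are illustrative assumptions for intuition only, not the paper's SrTiO3 device model:

```python
import math

class MultiTimescaleSynapse:
    """Toy synapse combining a long-term weight with a decaying
    short-term facilitation trace (illustrative sketch only)."""

    def __init__(self, w_long=1.0, tau_short=50.0, facilitation=0.2):
        self.w_long = w_long   # long-term (slowly learned) weight
        self.short = 0.0       # short-term trace, decays between spikes
        self.tau = tau_short   # short-term decay constant (ms, assumed)
        self.fac = facilitation  # increment added per presynaptic spike

    def step(self, dt, spike):
        # Exponential decay of the short-term trace over the time step
        self.short *= math.exp(-dt / self.tau)
        if spike:
            self.short += self.fac
        # Effective weight seen by the postsynaptic neuron
        return self.w_long + self.short

syn = MultiTimescaleSynapse()
# Presynaptic spike every 10 ms; effective weight facilitates then decays
weights = [syn.step(dt=1.0, spike=(t % 10 == 0)) for t in range(100)]
```

The point of the sketch is that the effective weight is not a single stored number: it transiently rises after each presynaptic spike and relaxes back toward the long-term weight, which is the short-term behavior the memristors emulate in hardware.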
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Learning with Chemical versus Electrical Synapses -- Does it Make a
Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
The ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Sequence learning in a spiking neuronal network with memristive synapses [0.0]
A core concept that lies at the heart of brain computation is sequence learning and prediction.
Neuromorphic hardware emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate.
We study the feasibility of using ReRAM devices as a replacement of the biological synapses in the sequence learning model.
arXiv Detail & Related papers (2022-11-29T21:07:23Z) - Short-Term Plasticity Neurons Learning to Learn and Forget [0.0]
Short-term plasticity (STP) is a mechanism that stores decaying memories in synapses of the cerebral cortex.
Here we present a new type of recurrent neural unit, the Short-Term Plasticity Neuron (STPN), which turns out to be strikingly powerful.
arXiv Detail & Related papers (2022-06-28T14:47:56Z) - Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays [3.5969667977870796]
We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs).
Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes.
We obtain highly competitive accuracy amongst previously reported lightweight dense fully MSNNs on several benchmarks.
arXiv Detail & Related papers (2022-06-26T23:13:34Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Optimality of short-term synaptic plasticity in modelling certain
dynamic environments [0.5371337604556311]
Bayes-optimal prediction and inference of randomly but continuously transforming environments relies on short-term spike-timing-dependent plasticity.
Strikingly, this also introduces a biologically-modelled AI, the first to overcome multiple limitations of deep learning and outperform artificial neural networks in a visual task.
Results link short-term plasticity to high-level cortical function, suggest the optimality of natural intelligence for natural environments, and elevate neuromorphic AI from mere efficiency to computational supremacy.
arXiv Detail & Related papers (2020-09-15T01:04:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.