Optimality of short-term synaptic plasticity in modelling certain
dynamic environments
- URL: http://arxiv.org/abs/2009.06808v2
- Date: Tue, 15 Jun 2021 22:14:34 GMT
- Title: Optimality of short-term synaptic plasticity in modelling certain
dynamic environments
- Authors: Timoleon Moraitis, Abu Sebastian, Evangelos Eleftheriou (IBM Research - Zurich)
- Abstract summary: Bayes-optimal prediction and inference of randomly but continuously transforming environments relies on short-term spike-timing-dependent plasticity.
Strikingly, this also introduces a biologically-modelled AI, the first to overcome multiple limitations of deep learning and outperform artificial neural networks in a visual task.
Results link short-term plasticity to high-level cortical function, suggest optimality of natural intelligence for natural environments, and repurpose neuromorphic AI from mere efficiency to computational supremacy altogether.
- Score: 0.5371337604556311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biological neurons and their in-silico emulations for neuromorphic artificial
intelligence (AI) use extraordinarily energy-efficient mechanisms, such as
spike-based communication and local synaptic plasticity. It remains unclear
whether these neuronal mechanisms only offer efficiency or also underlie the
superiority of biological intelligence. Here, we prove rigorously that, indeed,
the Bayes-optimal prediction and inference of randomly but continuously
transforming environments, a common natural setting, relies on short-term
spike-timing-dependent plasticity, a hallmark of biological synapses. Further,
this dynamic Bayesian inference through plasticity enables circuits of the
cerebral cortex in simulations to recognize previously unseen, highly distorted
dynamic stimuli. Strikingly, this also introduces a biologically-modelled AI,
the first to overcome multiple limitations of deep learning and outperform
artificial neural networks in a visual task. The cortical-like network is
spiking and event-based, trained only with unsupervised and local plasticity,
on a small, narrow, and static training dataset, but achieves recognition of
unseen, transformed, and dynamic data better than deep neural networks with
continuous activations, trained with supervised backpropagation on the
transforming data. These results link short-term plasticity to high-level
cortical function, suggest optimality of natural intelligence for natural
environments, and repurpose neuromorphic AI from mere efficiency to
computational supremacy altogether.
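An illustrative toy sketch of the kind of short-term plasticity the abstract refers to is given below. It is not the authors' model: the update rule, parameter values, and function names are assumptions chosen only to show how a synaptic weight can be transiently boosted by recent presynaptic spikes and then relax back to a baseline, so that the effective weight tracks recent activity rather than the full stimulus history.

```python
import numpy as np

def simulate_stp(spike_train, w_baseline=1.0, boost=0.5, tau=50.0, dt=1.0):
    """Toy short-term plasticity trace: each presynaptic spike transiently
    facilitates the synaptic efficacy, which then decays back to a long-term
    baseline with time constant tau (all values are illustrative assumptions)."""
    efficacy = np.empty(len(spike_train))
    w = w_baseline
    for t, spike in enumerate(spike_train):
        # Euler step of dw/dt = (w_baseline - w) / tau: relax toward the baseline.
        w += (w_baseline - w) * (dt / tau)
        if spike:
            # Transient facilitation triggered by a presynaptic spike.
            w += boost
        efficacy[t] = w
    return efficacy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spikes = rng.random(500) < 0.02  # sparse, Poisson-like presynaptic spike train
    trace = simulate_stp(spikes)
    print(f"mean efficacy: {trace.mean():.3f}, peak efficacy: {trace.max():.3f}")
```

Because the facilitation decays, older spikes contribute exponentially less to the current weight, which is the intuition behind the paper's claim that such short-term dynamics can support Bayes-optimal inference about a continuously drifting stimulus.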
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Evolving Self-Assembling Neural Networks: From Spontaneous Activity to Experience-Dependent Learning [7.479827648985631]
We propose a class of self-organizing neural networks capable of synaptic and structural plasticity in an activity- and reward-dependent manner.
Our results demonstrate the ability of the model to learn from experiences in different control tasks starting from randomly connected or empty networks.
arXiv Detail & Related papers (2024-06-14T07:36:21Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Developmental Plasticity-inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks [11.730984231143108]
Developmental plasticity plays a prominent role in shaping the brain's structure during ongoing learning.
Existing network compression methods for deep artificial neural networks (ANNs) and spiking neural networks (SNNs) draw little inspiration from the brain's developmental plasticity mechanisms.
This paper proposes a developmental plasticity-inspired adaptive pruning (DPAP) method, with inspiration from the adaptive developmental pruning of dendritic spines, synapses, and neurons.
arXiv Detail & Related papers (2022-11-23T05:26:51Z)
- Towards efficient end-to-end speech recognition with biologically-inspired neural networks [10.457580011403289]
We introduce neural connectivity concepts emulating the axo-somatic and the axo-axonic synapses.
We demonstrate, for the first time, that a biologically realistic implementation of a large-scale ASR model can yield competitive performance levels.
arXiv Detail & Related papers (2021-10-04T21:24:10Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.