Learning the Plasticity: Plasticity-Driven Learning Framework in Spiking
Neural Networks
- URL: http://arxiv.org/abs/2308.12063v2
- Date: Thu, 1 Feb 2024 13:28:46 GMT
- Title: Learning the Plasticity: Plasticity-Driven Learning Framework in Spiking
Neural Networks
- Authors: Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Feifei Zhao and Yi
Zeng
- Abstract summary: A new paradigm for Spiking Neural Networks (SNNs), the
Plasticity-Driven Learning Framework (PDLF), redefines the concepts of
functional and Presynaptic-Dependent Plasticity.
- Score: 9.25919593660244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of the human brain has led to the development of complex
synaptic plasticity, enabling dynamic adaptation to a constantly evolving
world. This progress inspires our exploration into a new paradigm for Spiking
Neural Networks (SNNs): a Plasticity-Driven Learning Framework (PDLF). This
paradigm diverges from traditional neural network models that primarily focus
on direct training of synaptic weights, leading to static connections that
limit adaptability in dynamic environments. Instead, our approach delves into
the heart of synaptic behavior, prioritizing the learning of plasticity rules
themselves. This shift in focus from weight adjustment to mastering the
intricacies of synaptic change offers a more flexible and dynamic pathway for
neural networks to evolve and adapt. Our PDLF does not merely adapt existing
concepts of functional and Presynaptic-Dependent Plasticity but redefines them,
aligning closely with the dynamic and adaptive nature of biological learning.
This reorientation enhances key cognitive abilities in artificial intelligence
systems, such as working memory and multitasking capabilities, and demonstrates
superior adaptability in complex, real-world scenarios. Moreover, our framework
sheds light on the intricate relationships between various forms of plasticity
and cognitive functions, thereby contributing to a deeper understanding of the
brain's learning mechanisms. Integrating this groundbreaking plasticity-centric
approach in SNNs marks a significant advancement in the fusion of neuroscience
and artificial intelligence. It paves the way for developing AI systems that
not only learn but also adapt in an ever-changing world, much like the human
brain.
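The core shift described above, optimizing the rules that change the weights rather than the weights themselves, can be illustrated with a small sketch. The paper does not spell out its exact update rule here, so the generalized Hebbian form below (with learnable coefficients A, B, C, D and rate eta, including a presynaptic-dependent term) is an assumption for illustration, not the authors' precise PDLF formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plastic layer: the trainable quantities are the plasticity
# coefficients (eta, A, B, C, D), not the weight matrix. Weights change
# on-line via the local rule
#     dw = eta * (A*pre*post + B*pre + C*post + D)
class PlasticLayer:
    def __init__(self, n_in, n_out, plasticity_params):
        self.w = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.eta, self.A, self.B, self.C, self.D = plasticity_params

    def forward(self, pre):
        post = np.tanh(pre @ self.w)
        # Apply the local plasticity rule after every forward pass, so the
        # connectivity keeps adapting during deployment, not just training.
        dw = self.eta * (
            self.A * np.outer(pre, post)   # Hebbian co-activity term
            + self.B * pre[:, None]        # presynaptic-dependent term
            + self.C * post[None, :]       # postsynaptic-dependent term
            + self.D                       # constant drift term
        )
        self.w += dw
        return post

# An outer optimizer (evolution or gradient descent) would search over
# plasticity_params; here they are fixed by hand for demonstration.
layer = PlasticLayer(4, 3, plasticity_params=(0.01, 1.0, 0.1, 0.0, 0.0))
out1 = layer.forward(np.ones(4))
out2 = layer.forward(np.ones(4))  # same input, different output: weights moved
```

Because the synapses keep updating themselves through the learned rule, repeated presentations of the same input yield different responses, which is exactly the kind of ongoing adaptability static weight matrices cannot provide.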
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Neuroplastic Expansion in Deep Reinforcement Learning [9.297543779239826]
We propose a novel approach, Neuroplastic Expansion (NE), inspired by cortical expansion in cognitive science.
NE maintains learnability and adaptability throughout the entire training process by dynamically growing the network from a smaller initial size to its full dimension.
Our method is designed with three key components: (1) elastic neuron generation based on potential gradients, (2) dormant neuron pruning to optimize network expressivity, and (3) neuron consolidation via experience review.
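The first two of those components can be sketched as a grow-then-prune cycle on a single hidden layer. The function below is a simplified illustration under assumed mechanics (seeding new neurons near high-gradient columns, pruning by an activity threshold); the names, thresholds, and the omission of the experience-review consolidation step are all simplifications, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def expand_and_prune(w_in, w_out, grad_in, activity,
                     grow_k=2, dormant_thresh=1e-3):
    """Grow neurons where gradient signal is large; prune dormant ones."""
    # (1) Elastic neuron generation: add grow_k neurons, each seeded as a
    # noisy copy of the hidden column with the largest incoming-gradient mass.
    scores = np.abs(grad_in).sum(axis=0)
    parents = np.argsort(scores)[-grow_k:]
    new_in = w_in[:, parents] + rng.normal(0, 0.01, size=(w_in.shape[0], grow_k))
    new_out = np.zeros((grow_k, w_out.shape[1]))  # new neurons start silent
    w_in = np.concatenate([w_in, new_in], axis=1)
    w_out = np.concatenate([w_out, new_out], axis=0)
    activity = np.concatenate([activity, np.ones(grow_k)])
    # (2) Dormant neuron pruning: drop neurons whose average activity is tiny.
    # (Step (3), consolidation via experience review, is omitted here.)
    keep = activity > dormant_thresh
    return w_in[:, keep], w_out[keep, :]

w_in = rng.normal(size=(5, 4))        # input -> 4 hidden neurons
w_out = rng.normal(size=(4, 3))       # 4 hidden neurons -> output
grad_in = rng.normal(size=(5, 4))
activity = np.array([0.5, 0.0, 0.2, 0.9])  # second neuron is dormant
w_in2, w_out2 = expand_and_prune(w_in, w_out, grad_in, activity)
# two neurons grown, one dormant neuron pruned: hidden size 4 -> 5
```

Repeating this cycle during training lets the network start small and expand toward its full dimension while shedding units that contribute nothing, which is the balance of learnability and expressivity the method targets.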
arXiv Detail & Related papers (2024-10-10T14:51:14Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Evolving Self-Assembling Neural Networks: From Spontaneous Activity to Experience-Dependent Learning [7.479827648985631]
We propose a class of self-organizing neural networks capable of synaptic and structural plasticity in an activity and reward-dependent manner.
Our results demonstrate the ability of the model to learn from experiences in different control tasks starting from randomly connected or empty networks.
arXiv Detail & Related papers (2024-06-14T07:36:21Z) - Theories of synaptic memory consolidation and intelligent plasticity for continual learning [7.573586022424398]
Synaptic plasticity mechanisms must maintain and evolve an internal state.
Plasticity algorithms must leverage the internal state to intelligently regulate plasticity at individual synapses.
arXiv Detail & Related papers (2024-05-27T08:13:39Z) - Incorporating Neuro-Inspired Adaptability for Continual Learning in
Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
arXiv Detail & Related papers (2023-08-29T02:43:58Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Developmental Plasticity-inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks [11.730984231143108]
Developmental plasticity plays a prominent role in shaping the brain's structure during ongoing learning.
Existing network compression methods for deep artificial neural networks (ANNs) and spiking neural networks (SNNs) draw little inspiration from the brain's developmental plasticity mechanisms.
This paper proposes a developmental plasticity-inspired adaptive pruning (DPAP) method, with inspiration from the adaptive developmental pruning of dendritic spines, synapses, and neurons.
arXiv Detail & Related papers (2022-11-23T05:26:51Z) - Learning to acquire novel cognitive tasks with evolution, plasticity and
meta-meta-learning [3.8073142980733]
In meta-learning, networks are trained with external algorithms to learn tasks that require acquiring, storing and exploiting unpredictable information for each new instance of the task.
Here we evolve neural networks, endowed with plastic connections, over a sizable set of simple meta-learning tasks based on a neuroscience modelling framework.
The resulting evolved network can automatically acquire a novel simple cognitive task, never seen during training, through the spontaneous operation of its evolved neural organization and plasticity structure.
arXiv Detail & Related papers (2021-12-16T03:18:01Z) - Backprop-Free Reinforcement Learning with Active Neural Generative
Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.