Oja's plasticity rule overcomes several challenges of training neural networks under biological constraints
- URL: http://arxiv.org/abs/2408.08408v2
- Date: Fri, 11 Oct 2024 15:25:39 GMT
- Title: Oja's plasticity rule overcomes several challenges of training neural networks under biological constraints
- Authors: Navid Shervani-Tabar, Marzieh Alireza Mirhoseini, Robert Rosenbaum
- Abstract summary: We show that Oja's plasticity rule partly overcomes the need for some engineering tricks.
Specifically, under difficult, but biologically realistic learning scenarios such as online learning, Oja's rule can substantially improve the performance of pure backpropagation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a large literature on the similarities and differences between biological neural circuits and deep artificial neural networks (DNNs). However, modern training of DNNs relies on several engineering tricks such as data batching, normalization, adaptive optimizers, and precise weight initialization. Despite their critical role in training DNNs, these engineering tricks are often overlooked when drawing parallels between biological and artificial networks, potentially due to a lack of evidence for their direct biological implementation. In this study, we show that Oja's plasticity rule partly overcomes the need for some engineering tricks. Specifically, under difficult, but biologically realistic learning scenarios such as online learning, deep architectures, and sub-optimal weight initialization, Oja's rule can substantially improve the performance of pure backpropagation. Our results demonstrate that simple synaptic plasticity rules can overcome challenges to learning that are typically overcome using less biologically plausible approaches when training DNNs.
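To make the plasticity rule concrete, below is a minimal NumPy sketch of Oja's subspace rule applied to a single linear layer in an online setting (one sample at a time). This is only an illustration of the rule itself, not the paper's actual training procedure, which interleaves Oja-style updates with backpropagation on standard benchmarks; the layer sizes, learning rate, and Gaussian input distribution are arbitrary choices for the demo.
```python
import numpy as np

# Oja's subspace rule for a linear layer y = W x:
#     dW = eta * (y x^T - (y y^T) W)
# The Hebbian term (y x^T) grows weights along input directions that drive the
# output; the decay term ((y y^T) W) keeps the rows of W bounded, so no explicit
# weight normalization or carefully scaled initialization is required.

rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W = 0.5 * rng.standard_normal((n_out, n_in))  # arbitrary initial scale
eta = 0.01

for _ in range(5000):
    x = rng.standard_normal(n_in)      # one sample at a time (online learning)
    y = W @ x                          # forward pass
    W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Row norms settle near 1 (and the rows become approximately orthonormal)
# without any explicit normalization step.
print(np.round(np.linalg.norm(W, axis=1), 2))
```
This self-normalizing behavior is the property the abstract appeals to: it can stand in for some of the engineering tricks (normalization layers, precise initialization) that pure backpropagation otherwise depends on.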
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment schemes are compatible with practically any learning condition and can be energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Understanding plasticity in neural networks [41.79540750236036]
Plasticity is the ability of a neural network to quickly change its predictions in response to new information.
Deep neural networks are known to lose plasticity over the course of training even in relatively simple learning problems.
arXiv Detail & Related papers (2023-03-02T18:47:51Z) - Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z) - SpikePropamine: Differentiable Plasticity in Spiking Neural Networks [0.0]
We introduce a framework for learning the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in Spiking Neural Networks (SNNs).
We show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks.
These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task.
arXiv Detail & Related papers (2021-06-04T19:29:07Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Brain-Inspired Learning on Neuromorphic Substrates [5.279475826661643]
This article provides a mathematical framework for the design of practical online learning algorithms for neuromorphic substrates.
Specifically, we show a direct connection between Real-Time Recurrent Learning (RTRL) and biologically plausible learning rules for training Spiking Neural Networks (SNNs).
We motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity.
arXiv Detail & Related papers (2020-10-22T17:56:59Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Synaptic Metaplasticity in Binarized Neural Networks [4.243926243206826]
Deep neural networks are prone to catastrophic forgetting upon training a new task.
We propose and demonstrate experimentally, in situations of multitask and stream learning, a training technique that reduces catastrophic forgetting without needing previously presented data.
This work bridges computational neuroscience and deep learning, and presents significant assets for future embedded and neuromorphic systems.
arXiv Detail & Related papers (2020-03-07T08:09:34Z)