Learning in Deep Neural Networks Using a Biologically Inspired Optimizer
- URL: http://arxiv.org/abs/2104.11604v1
- Date: Fri, 23 Apr 2021 13:50:30 GMT
- Title: Learning in Deep Neural Networks Using a Biologically Inspired Optimizer
- Authors: Giorgia Dellaferrera, Stanislaw Wozniak, Giacomo Indiveri, Angeliki
Pantazi, Evangelos Eleftheriou
- Abstract summary: We propose GRAPES, a novel biologically inspired optimizer for artificial neural networks (ANNs) and spiking neural networks (SNNs).
GRAPES implements a weight-distribution dependent modulation of the error signal at each node of the neural network.
We show that this biologically inspired mechanism leads to a systematic improvement of the convergence rate of the network, and substantially improves classification accuracy of ANNs and SNNs.
- Score: 5.144809478361604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plasticity circuits in the brain are known to be influenced by the
distribution of the synaptic weights through the mechanisms of synaptic
integration and local regulation of synaptic strength. However, the complex
interplay of stimulation-dependent plasticity with local learning signals is
disregarded by most of the artificial neural network training algorithms
devised so far. Here, we propose a novel biologically inspired optimizer for
artificial neural networks (ANNs) and spiking neural networks (SNNs) that incorporates key
principles of synaptic integration observed in dendrites of cortical neurons:
GRAPES (Group Responsibility for Adjusting the Propagation of Error Signals).
GRAPES implements a weight-distribution dependent modulation of the error
signal at each node of the neural network. We show that this biologically
inspired mechanism leads to a systematic improvement of the convergence rate of
the network, and substantially improves classification accuracy of ANNs and
SNNs with both feedforward and recurrent architectures. Furthermore, we
demonstrate that GRAPES supports performance scalability for models of
increasing complexity and mitigates catastrophic forgetting by enabling
networks to generalize to unseen tasks based on previously acquired knowledge.
The local characteristics of GRAPES minimize the required memory resources,
making it optimally suited for dedicated hardware implementations. Overall, our
work indicates that reconciling neurophysiology insights with machine
intelligence is key to boosting the performance of neural networks.
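To make the abstract's core mechanism concrete — a weight-distribution-dependent modulation of the error signal at each node — here is a minimal NumPy sketch. It is not the paper's implementation: taking the modulation factor as the summed absolute incoming weights of each node normalized by the layer mean, and applying it only at the hidden layer, are illustrative assumptions made here; the paper's exact normalization and application points may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def grapes_factor(W):
    # Per-node importance derived from the incoming-weight distribution:
    # summed magnitude of the weights into each node, normalized by the
    # layer mean (hypothetical normalization; the paper's exact scheme
    # may differ).
    strength = np.abs(W).sum(axis=0)
    return strength / strength.mean()

# Toy two-layer regression problem with random data.
X = rng.normal(size=(32, 20))
y = rng.normal(size=(32, 5))
W1 = rng.normal(scale=0.1, size=(20, 10))
W2 = rng.normal(scale=0.1, size=(10, 5))
lr = 0.01

for step in range(100):
    h = np.tanh(X @ W1)
    out = h @ W2
    err = out - y                      # dL/dout for a mean-squared-error loss

    # Standard backpropagated error at the hidden layer ...
    delta1 = (err @ W2.T) * (1.0 - h**2)
    # ... rescaled per node by the GRAPES modulation of that layer.
    delta1 *= grapes_factor(W1)

    W2 -= lr * (h.T @ err) / len(X)
    W1 -= lr * (X.T @ delta1) / len(X)
```

In this toy form the factor amplifies the error at nodes whose incoming weights carry more total magnitude and attenuates it elsewhere — one way to read the "group responsibility" idea. Because the factor depends only on a layer's own weights, it needs no extra global state, which is consistent with the locality and memory claims above.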
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Exploiting Heterogeneity in Timescales for Sparse Recurrent Spiking Neural Networks for Energy-Efficient Edge Computing [16.60622265961373]
Spiking Neural Networks (SNNs) represent the forefront of neuromorphic computing.
This paper integrates three complementary studies to improve SNN performance for energy-efficient edge computing.
arXiv Detail & Related papers (2024-07-08T23:33:12Z) - Expressivity of Neural Networks with Random Weights and Learned Biases [44.02417750529102]
Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can similarly be learned by tuning smaller subsets of parameters.
We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only; a minimal sketch of this setup follows the list below.
Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights.
arXiv Detail & Related papers (2024-07-01T04:25:49Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - A Spiking Binary Neuron -- Detector of Causal Links [0.0]
Causal relationship recognition is a fundamental operation in neural networks aimed at learning behavior, action planning, and inferring external world dynamics.
This research paper presents a novel approach to realize causal relationship recognition using a simple spiking binary neuron.
arXiv Detail & Related papers (2023-09-15T15:34:17Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - A Synapse-Threshold Synergistic Learning Approach for Spiking Neural Networks [1.8556712517882232]
Spiking neural networks (SNNs) have demonstrated excellent capabilities in various intelligent scenarios.
In this study, we develop a novel synergistic learning approach that involves simultaneously training synaptic weights and spike thresholds in SNNs.
arXiv Detail & Related papers (2022-06-10T06:41:36Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Structural plasticity on an accelerated analog neuromorphic hardware system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
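As referenced above, here is a compact illustration of the biases-only result ("Expressivity of Neural Networks with Random Weights and Learned Biases"): a minimal NumPy sketch, not taken from that paper, in which the weight matrices are frozen at their random initialization and gradient updates touch only the bias vectors. The layer sizes and mean-squared-error loss are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random weights: drawn once and never updated.
W1 = rng.normal(scale=1.0 / np.sqrt(20), size=(20, 64))
W2 = rng.normal(scale=1.0 / np.sqrt(64), size=(64, 5))

# Biases are the only trainable parameters.
b1 = np.zeros(64)
b2 = np.zeros(5)

X = rng.normal(size=(128, 20))
y = rng.normal(size=(128, 5))
lr = 0.05

for step in range(200):
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                      # dL/dout for a mean-squared-error loss

    # Gradients with respect to the biases only; W1 and W2 stay frozen.
    b2 -= lr * err.mean(axis=0)
    b1 -= lr * ((err @ W2.T) * (1.0 - h**2)).mean(axis=0)
```

Only b1 and b2 change across steps; the expressivity claim is that tuning these alone can already fit tasks when the frozen random features are wide enough, which is why the result is relevant to dynamics changing without synaptic-weight modification.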
This list is automatically generated from the titles and abstracts of the papers in this site.