Interpreting learning in biological neural networks as zero-order
optimization method
- URL: http://arxiv.org/abs/2301.11777v2
- Date: Thu, 23 Mar 2023 13:28:58 GMT
- Title: Interpreting learning in biological neural networks as zero-order
optimization method
- Authors: Johannes Schmidt-Hieber
- Abstract summary: In this work, we look at the brain as a statistical method for supervised learning.
The main contribution is to relate the local updating rule of the connection parameters in BNNs to a zero-order optimization method.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, significant progress has been made regarding the statistical
understanding of artificial neural networks (ANNs). ANNs are motivated by the
functioning of the brain, but differ in several crucial aspects. In particular,
the locality in the updating rule of the connection parameters in biological
neural networks (BNNs) makes it biologically implausible that the learning of
the brain is based on gradient descent. In this work, we look at the brain as a
statistical method for supervised learning. The main contribution is to relate
the local updating rule of the connection parameters in BNNs to a zero-order
optimization method. It is shown that the expected values of the iterates
implement a modification of gradient descent.
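A minimal numerical sketch of the zero-order idea (assuming a generic one-point random-perturbation estimator and a toy quadratic loss, not the paper's specific BNN updating rule): averaging many derivative-free updates approximately reproduces a gradient-descent step, mirroring the statement that the expected values of the iterates implement a modification of gradient descent.

```python
# Illustrative sketch only: a one-point zero-order update whose expected step
# recovers a (smoothed) gradient. The loss and hyperparameters are toy choices,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Simple quadratic surrogate for a supervised loss; its gradient is 2 * theta.
    return float(np.sum(theta ** 2))

def zero_order_step(theta, lr=0.05, sigma=1e-2):
    # Perturb the parameters with isotropic Gaussian noise and move against the
    # observed loss change; no derivatives of `loss` are used.
    xi = rng.normal(size=theta.shape)
    delta = loss(theta + sigma * xi) - loss(theta)
    return theta - lr * (delta / sigma) * xi

theta = np.array([1.0, -2.0])
# Average many independent zero-order steps taken from the same point ...
avg_step = np.mean([theta - zero_order_step(theta) for _ in range(20000)], axis=0)
# ... and compare with the exact gradient-descent step, lr * 2 * theta.
print("mean zero-order step:", avg_step)
print("gradient step       :", 0.05 * 2 * theta)
```

For small sigma the averaged zero-order step closely matches the exact gradient-descent step on this toy loss, which is the sense in which such derivative-free local updates implement gradient descent in expectation.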
Related papers
- Integrating Causality with Neurochaos Learning: Proposed Approach and Research Agenda [1.534667887016089]
We investigate how causal and neurochaos learning approaches can be integrated to produce better results.
We propose an approach for this integration to enhance classification, prediction and reinforcement learning.
arXiv Detail & Related papers (2025-01-23T15:45:29Z) - Gated Parametric Neuron for Spike-based Audio Recognition [26.124844943674407]
Spiking neural networks (SNNs) aim to simulate real neural networks in the human brain with biologically plausible neurons.
This paper proposes a gated parametric neuron (GPN) to process spatio-temporal information effectively with a gating mechanism.
arXiv Detail & Related papers (2024-12-02T03:46:26Z) - Fractional-order spike-timing-dependent gradient descent for multi-layer spiking neural networks [18.142378139047977]
This paper proposes a fractional-order spike-timing-dependent gradient descent (FOSTDGD) learning model.
It is tested on the MNIST and DVS128 Gesture datasets, and its accuracy under different network structures and fractional orders is analyzed.
arXiv Detail & Related papers (2024-10-20T05:31:34Z) - Growing Deep Neural Network Considering with Similarity between Neurons [4.32776344138537]
We explore a novel approach of progressively increasing neuron numbers in compact models during training phases.
We propose a method that reduces feature extraction biases and neuronal redundancy by introducing constraints based on neuron similarity distributions.
Results on the CIFAR-10 and CIFAR-100 datasets demonstrated accuracy improvements.
arXiv Detail & Related papers (2024-08-23T11:16:37Z) - Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
arXiv Detail & Related papers (2024-05-27T05:53:30Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? An analysis using stochastic processes [0.0]
We study a model for supervised learning in biological neural networks (BNNs).
We show that a gradient step occurs approximately when each learning opportunity is processed by many local updates.
This result suggests that gradient descent may indeed play a role in optimizing BNNs.
arXiv Detail & Related papers (2023-09-10T18:12:52Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions.
Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - aSTDP: A More Biologically Plausible Learning [0.0]
We introduce approximate STDP, a new neural networks learning framework.
It uses only STDP rules for supervised and unsupervised learning.
It can make predictions or generate patterns in one model without additional configuration.
arXiv Detail & Related papers (2022-05-22T08:12:50Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Dynamic Neural Diversification: Path to Computationally Sustainable
Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)