Formalising the Use of the Activation Function in Neural Inference
- URL: http://arxiv.org/abs/2102.04896v1
- Date: Tue, 2 Feb 2021 19:42:21 GMT
- Title: Formalising the Use of the Activation Function in Neural Inference
- Authors: Dalton A R Sakthivadivel
- Abstract summary: We discuss how a spike in a biological neurone belongs to a particular class of phase transitions in statistical physics.
We show that the artificial neurone is, mathematically, a mean field model of biological neural membrane dynamics.
This allows us to treat selective neural firing in an abstract way, and formalise the role of the activation function in perceptron learning.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We investigate how activation functions can be used to describe neural firing
in an abstract way, and in turn, why they work well in artificial neural
networks. We discuss how a spike in a biological neurone belongs to a
particular universality class of phase transitions in statistical physics. We
then show that the artificial neurone is, mathematically, a mean field model of
biological neural membrane dynamics, which arises from modelling spiking as a
phase transition. This allows us to treat selective neural firing in an
abstract way, and formalise the role of the activation function in perceptron
learning. Along with deriving this model and specifying the analogous neural
case, we analyse the phase transition to understand the physics of neural
network learning. Together, it is shown that there is not only a biological
meaning, but a physical justification, for the emergence and performance of
canonical activation functions; implications for neural learning and inference
are also discussed.
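As a rough illustration of the mean-field picture described above (a sketch under standard statistical-physics assumptions, not the paper's own derivation or code): the average activity of a mean-field Ising-type unit satisfies the self-consistency relation m = tanh(beta * (J * m + h)), and its response m(h) to an external drive h has the sigmoidal shape of a canonical activation function. The parameter names beta, J and h below are purely illustrative.

    # Minimal sketch (illustrative assumptions, not the paper's code):
    # solve the mean-field self-consistency m = tanh(beta * (J * m + h))
    # by fixed-point iteration; the resulting response m(h) has the
    # sigmoidal shape of a canonical activation function.
    import numpy as np

    def mean_field_activity(h, beta=1.0, J=0.5, iters=200):
        """Average activity m for an external drive h (illustrative parameters)."""
        m = 0.0
        for _ in range(iters):
            m = np.tanh(beta * (J * m + h))
        return m

    for h in np.linspace(-3.0, 3.0, 7):
        print(f"h = {h:+.1f} -> m = {mean_field_activity(h):+.3f}")

With the weak coupling chosen here (beta * J < 1) the fixed point is unique, so the iteration converges and the curve m(h) is smooth and sigmoid-like.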
Related papers
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Astrocytes as a mechanism for meta-plasticity and contextually-guided network function [2.66269503676104]
Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell.
Astrocytes may play a more direct and active role in brain function and neural computation.
arXiv Detail & Related papers (2023-11-06T20:31:01Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset of spontaneous behaviors produced by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Condition Integration Memory Network: An Interpretation of the Meaning of the Neuronal Design [10.421465303670638]
This document introduces a hypothetical framework for the functional nature of primitive neural networks.
It analyzes the idea that the activity of neurons and synapses can symbolically reenact the dynamic changes in the world, doing so without participating in an algorithmic structure.
arXiv Detail & Related papers (2021-05-21T05:59:27Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
- Advantages of biologically-inspired adaptive neural activation in RNNs during learning [10.357949759642816]
We introduce a novel parametric family of nonlinear activation functions inspired by input-frequency response curves of biological neurons.
We find that activation adaptation provides distinct task-specific solutions and in some cases, improves both learning speed and performance.
arXiv Detail & Related papers (2020-06-22T13:49:52Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)