The principles of adaptation in organisms and machines II:
Thermodynamics of the Bayesian brain
- URL: http://arxiv.org/abs/2006.13158v1
- Date: Tue, 23 Jun 2020 16:57:46 GMT
- Title: The principles of adaptation in organisms and machines II:
Thermodynamics of the Bayesian brain
- Authors: Hideaki Shimazaki
- Abstract summary: The article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference.
We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article reviews how organisms learn and recognize the world through the
dynamics of neural networks from the perspective of Bayesian inference, and
introduces a view in which such dynamics are described by laws for the entropy
of neural activity, a paradigm that we call thermodynamics of the Bayesian
brain. The Bayesian brain hypothesis regards the stimulus-evoked activity of
neurons as the act of constructing the Bayesian posterior distribution based on
the generative model of the external world that an organism possesses. A closer
look at the stimulus-evoked activity in early sensory cortices reveals that
feedforward connections initially mediate the stimulus response, which is later
modulated by input from recurrent connections. Importantly, it is not the initial
response but the delayed modulation that expresses animals' cognitive states, such
as awareness of and attention to the stimulus. Using a simple generative
model made of a spiking neural population, we reproduce the stimulus-evoked
dynamics with delayed feedback modulation as a process of Bayesian
inference that integrates the stimulus evidence with prior knowledge after a
time delay. We then introduce a thermodynamic view of this process based on the
laws for the entropy of neural activity. This view elucidates that the process
of Bayesian inference works as the recently proposed information-theoretic
engine (the neural engine, an analogue of a heat engine in thermodynamics), which
allows us to quantify the perceptual capacity expressed in the delayed
modulation in terms of entropy.
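The delayed integration of stimulus evidence and prior knowledge described in the abstract can be sketched with a minimal Gaussian example. This is an illustrative toy, not the paper's spiking-population model: the variable names and noise values are assumptions, and the "delay" is represented simply as two read-out phases.

```python
# Illustrative sketch (not the paper's model): a Gaussian Bayesian update
# in which the likelihood (feedforward stimulus evidence) is available
# first, and the prior (recurrent feedback) is folded in only later,
# mimicking the delayed modulation of the stimulus-evoked response.

def posterior(mu_prior, var_prior, x_obs, var_obs):
    """Precision-weighted fusion of prior and stimulus evidence."""
    precision = 1.0 / var_prior + 1.0 / var_obs
    mean = (mu_prior / var_prior + x_obs / var_obs) / precision
    return mean, 1.0 / precision

x_obs = 1.0                       # observed stimulus (assumed value)
mu_prior, var_prior = 0.0, 0.5    # prior from the generative model
var_obs = 0.25                    # sensory noise (assumed value)

# Early phase: feedforward response reflects the likelihood alone.
early_estimate = x_obs
# Late phase: recurrent feedback integrates the prior.
late_mean, late_var = posterior(mu_prior, var_prior, x_obs, var_obs)

print(early_estimate)        # 1.0
print(round(late_mean, 3))   # 0.667 -- the prior pulls the estimate toward 0
```

The precision-weighted average is the standard conjugate-Gaussian posterior; the point of the sketch is only that the late estimate, unlike the early one, depends on the prior.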
Related papers
- Confidence Regulation Neurons in Language Models [91.90337752432075]
This study investigates the mechanisms by which large language models represent and regulate uncertainty in next-token predictions.
Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits.
Token frequency neurons, which we describe here for the first time, boost or suppress each token's logit in proportion to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution.
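The scaling effect attributed to entropy neurons can be checked in isolation: shrinking the logits (without reordering them) flattens the softmax and raises its entropy. A minimal sketch, independent of any model internals; the logit values and the 0.5 scale factor are arbitrary assumptions.

```python
import math

def softmax_entropy(logits):
    """Shannon entropy (nats) of the softmax of a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs)

logits = [4.0, 1.0, 0.0]           # assumed example logits
scaled = [0.5 * z for z in logits]  # entropy-neuron-style down-scaling

h_orig = softmax_entropy(logits)
h_scaled = softmax_entropy(scaled)
assert h_scaled > h_orig  # smaller logits -> flatter, higher-entropy output
```

Down-scaling logits is equivalent to raising the softmax temperature, which strictly increases entropy for any non-uniform distribution.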
arXiv Detail & Related papers (2024-06-24T01:31:03Z) - Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with and scalable in deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z) - The Neuron as a Direct Data-Driven Controller [43.8450722109081]
This study extends the current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers.
We model neurons as biologically feasible controllers which implicitly identify loop dynamics, infer latent states and optimize control.
Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a novel and biologically-informed fundamental unit for constructing neural networks.
arXiv Detail & Related papers (2024-01-03T01:24:10Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat this dense sensory information to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture that instantiates the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - Formalising the Use of the Activation Function in Neural Inference [0.0]
We discuss how a spike in a biological neurone belongs to a particular class of phase transitions in statistical physics.
We show that the artificial neurone is, mathematically, a mean field model of biological neural membrane dynamics.
This allows us to treat selective neural firing in an abstract way, and formalise the role of the activation function in perceptron learning.
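The mean-field correspondence can be illustrated with a standard statistical-physics fact: the expected value of a ±1 binary spin under a Boltzmann firing rule is tanh(βh), the classic neural activation function. A minimal sketch under that textbook assumption; the sampling parameters are illustrative, not from the paper.

```python
import math
import random

def spin_up_prob(h, beta=1.0):
    # Boltzmann probability that a binary spin in field h takes value +1.
    return 1.0 / (1.0 + math.exp(-2.0 * beta * h))

def empirical_mean_spin(h, beta=1.0, n=200_000, seed=1):
    # Monte Carlo estimate of the mean spin <s> at field h.
    rng = random.Random(seed)
    p = spin_up_prob(h, beta)
    return sum(1 if rng.random() < p else -1 for _ in range(n)) / n

h, beta = 0.7, 1.0
# The sampled mean spin matches the tanh activation function.
assert abs(empirical_mean_spin(h, beta) - math.tanh(beta * h)) < 0.01
```

The identity follows from 2/(1 + e^(-2βh)) - 1 = tanh(βh): averaging over the microscopic spike/no-spike states yields the smooth activation, which is the sense in which the artificial neurone is a mean field model.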
arXiv Detail & Related papers (2021-02-02T19:42:21Z) - Towards a Neural Model for Serial Order in Frontal Cortex: a Brain
Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure of temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it provides the language-ready brain with the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z) - Neuronal Sequence Models for Bayesian Online Inference [0.0]
Sequential neuronal activity underlies a wide range of processes in the brain.
Neuroscientific evidence for neuronal sequences has been reported in domains as diverse as perception, motor control, speech, spatial navigation and memory.
We review key findings about neuronal sequences and relate these to the concept of online inference on sequences as a model of sensory-motor processing and recognition.
arXiv Detail & Related papers (2020-04-02T10:52:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.