The principles of adaptation in organisms and machines II:
Thermodynamics of the Bayesian brain
- URL: http://arxiv.org/abs/2006.13158v1
- Date: Tue, 23 Jun 2020 16:57:46 GMT
- Title: The principles of adaptation in organisms and machines II:
Thermodynamics of the Bayesian brain
- Authors: Hideaki Shimazaki
- Abstract summary: The article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference.
We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article reviews how organisms learn and recognize the world through the
dynamics of neural networks from the perspective of Bayesian inference, and
introduces a view on how such dynamics is described by the laws for the entropy
of neural activity, a paradigm that we call thermodynamics of the Bayesian
brain. The Bayesian brain hypothesis sees the stimulus-evoked activity of
neurons as an act of constructing the Bayesian posterior distribution based on
the generative model of the external world that an organism possesses. A closer
look at the stimulus-evoked activity at early sensory cortices reveals that
feedforward connections initially mediate the stimulus response, which is later modulated by input from recurrent connections. Importantly, it is not the initial response but the delayed modulation that expresses an animal's cognitive states, such as awareness of and attention to the stimulus. Using a simple generative
model made of a spiking neural population, we reproduce the stimulus-evoked
dynamics with the delayed feedback modulation as a process of Bayesian inference that integrates the stimulus evidence and prior knowledge with a time delay. We then introduce a thermodynamic view on this process based on the
laws for the entropy of neural activity. This view elucidates that the process of Bayesian inference works as the recently proposed information-theoretic
engine (neural engine, an analogue of a heat engine in thermodynamics), which
allows us to quantify the perceptual capacity expressed in the delayed
modulation in terms of entropy.
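As a concrete, if highly simplified, illustration of the inference scheme summarized above, the sketch below assumes a binary stimulus variable and a discrete-time update in place of the paper's spiking neural population: the feedforward sweep supplies the stimulus likelihood at every step, while the prior arrives only after a fixed delay, mimicking the delayed feedback modulation. The entropy of the posterior is tracked so that the information gained from the delayed modulation can be read off as an entropy change, in the spirit of the neural-engine view. All names and parameter values here are illustrative assumptions, not quantities from the paper.

    import numpy as np

    # Toy stand-in for the paper's spiking-population model (assumed setup):
    # a binary stimulus s in {0, 1}, noisy observations x_t, and a prior that
    # reaches the inference only after a feedback delay.
    rng = np.random.default_rng(0)

    prior = np.array([0.2, 0.8])           # assumed prior knowledge p(s)
    likelihoods = np.array([[0.6, 0.4],    # p(x | s = 0)
                            [0.4, 0.6]])   # p(x | s = 1)
    true_s = 1                             # stimulus actually presented
    delay = 3                              # feedback arrives 3 steps late
    T = 10                                 # number of observation steps

    def entropy_bits(p):
        """Shannon entropy (bits) of a discrete distribution."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    belief = np.array([0.5, 0.5])          # flat belief before any evidence
    for t in range(T):
        x = rng.choice(2, p=likelihoods[true_s])   # feedforward evidence x_t
        belief = belief * likelihoods[:, x]        # integrate stimulus evidence
        if t == delay:
            belief = belief * prior                # delayed top-down prior
        belief = belief / belief.sum()             # normalize -> posterior p(s | x_1..t)
        print(f"t={t:2d}  p(s=1)={belief[1]:.3f}  H={entropy_bits(belief):.3f} bits")

The drop in posterior entropy around t = delay is the kind of quantity that, in the paper's framework, would be attributed to the delayed modulation and interpreted thermodynamically.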
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of rethinking our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Stimulus-to-Stimulus Learning in RNNs with Cortical Inductive Biases [0.0]
We propose a recurrent neural network model of stimulus substitution using two forms of inductive bias pervasive in the cortex.
We show that the model generates a wide array of conditioning phenomena and can learn large numbers of associations.
Our framework highlights the importance of multi-compartment neuronal processing in the cortex, and showcases how it might confer an evolutionary edge on cortical animals.
arXiv Detail & Related papers (2024-09-20T13:01:29Z)
- Confidence Regulation Neurons in Language Models [91.90337752432075]
This study investigates the mechanisms by which large language models represent and regulate uncertainty in next-token predictions.
Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits.
Token frequency neurons, which we describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution.
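As a rough illustration of the two effects described above (a toy sketch with made-up numbers, not the paper's analysis of real model weights): shrinking the logits, as a reduced final-LayerNorm scale would, flattens the softmax and raises its entropy, while mixing log token frequencies into the logits pulls the output distribution toward the unigram distribution.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def entropy_bits(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(1)
    logits = 3.0 * rng.normal(size=8)        # toy next-token logits (assumed)
    unigram = softmax(rng.normal(size=8))    # assumed unigram token frequencies

    # Entropy-neuron-like effect: a smaller LayerNorm scale shrinks every logit,
    # which flattens the softmax and raises its entropy (lower confidence).
    for scale in (1.0, 0.5, 0.25):
        print(f"scale={scale:4.2f}  H={entropy_bits(softmax(scale * logits)):.3f} bits")

    # Token-frequency-neuron-like effect: mixing log-frequencies into the logits
    # moves the output toward the unigram distribution (at beta = 1 the output
    # coincides with the unigram exactly, so the KL divergence reaches zero).
    for beta in (0.0, 0.5, 1.0):
        out = softmax((1 - beta) * logits + beta * np.log(unigram))
        kl = float(np.sum(out * np.log2(out / unigram)))
        print(f"beta={beta:3.1f}  KL(output || unigram)={kl:.3f} bits")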
arXiv Detail & Related papers (2024-06-24T01:31:03Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture compatible and scalable with deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat the dense sensory information they receive to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that realizes the Common Model of Cognition via Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Formalising the Use of the Activation Function in Neural Inference [0.0]
We discuss how a spike in a biological neurone belongs to a particular class of phase transitions in statistical physics.
We show that the artificial neurone is, mathematically, a mean field model of biological neural membrane dynamics.
This allows us to treat selective neural firing in an abstract way, and formalise the role of the activation function in perceptron learning.
arXiv Detail & Related papers (2021-02-02T19:42:21Z)
- Neuronal Sequence Models for Bayesian Online Inference [0.0]
Sequential neuronal activity underlies a wide range of processes in the brain.
Neuroscientific evidence for neuronal sequences has been reported in domains as diverse as perception, motor control, speech, spatial navigation and memory.
We review key findings about neuronal sequences and relate these to the concept of online inference on sequences as a model of sensory-motor processing and recognition.
arXiv Detail & Related papers (2020-04-02T10:52:54Z)