Applications of the Free Energy Principle to Machine Learning and
Neuroscience
- URL: http://arxiv.org/abs/2107.00140v1
- Date: Wed, 30 Jun 2021 22:53:03 GMT
- Title: Applications of the Free Energy Principle to Machine Learning and
Neuroscience
- Authors: Beren Millidge
- Abstract summary: We explore and apply methods inspired by the free energy principle to two important areas in machine learning and neuroscience.
Firstly, we focus on predictive coding, a neurobiologically plausible process theory derived from the free energy principle.
Secondly, we study active inference, a neurobiologically grounded account of action through variational message passing.
Finally, we investigate biologically plausible methods of credit assignment in the brain.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this PhD thesis, we explore and apply methods inspired by the free energy
principle to two important areas in machine learning and neuroscience. The free
energy principle is a general mathematical theory of the necessary
information-theoretic behaviours of systems that maintain a separation from
their environment. A core postulate of the theory is that complex systems can
be seen as performing variational Bayesian inference and minimizing an
information-theoretic quantity called the variational free energy. The thesis
is structured into three independent sections. Firstly, we focus on predictive
coding, a neurobiologically plausible process theory derived from the free
energy principle which argues that the primary function of the brain is to
minimize prediction errors, showing how predictive coding can be scaled up and
extended to be more biologically plausible, and elucidating its close links
with other methods such as Kalman Filtering. Secondly, we study active
inference, a neurobiologically grounded account of action through variational
message passing, and investigate how these methods can be scaled up to match
the performance of deep reinforcement learning methods. We additionally provide
a detailed mathematical understanding of the nature and origin of the
information-theoretic objectives that underlie exploratory behaviour. Finally,
we investigate biologically plausible methods of credit assignment in the
brain. We first demonstrate a close link between predictive coding and the
backpropagation of error algorithm. We go on to propose novel and simpler
algorithms which allow for backprop to be implemented in purely local,
biologically plausible computations.
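For concreteness, the variational free energy mentioned above is the standard quantity F = E_q(z)[ln q(z) - ln p(x, z)] = KL(q(z) || p(z|x)) - ln p(x), so minimizing F tightens a bound on the model evidence while pulling the approximate posterior q towards the true posterior. Under Gaussian assumptions F reduces to a sum of (precision-weighted) squared prediction errors, which is the predictive coding view. The snippet below is a minimal, hedged sketch of that idea, not the thesis's own implementation; the single-layer setup, layer sizes, and learning rates are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a single predictive coding layer (illustrative only,
# not the thesis's exact formulation). A latent estimate mu generates a
# prediction of the data x through weights W; inference descends the
# squared prediction error (the Gaussian free-energy term), and learning
# uses a purely local, Hebbian-style update (error times activity).

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 1))         # observed data (toy values)
W = 0.1 * rng.normal(size=(10, 5))   # generative weights
mu = np.zeros((5, 1))                # latent estimate

lr_mu, lr_W = 0.1, 0.01              # assumed step sizes
for _ in range(200):
    eps = x - W @ mu                 # prediction error
    mu += lr_mu * (W.T @ eps)        # inference: error-driven update of mu
    W += lr_W * (eps @ mu.T)         # learning: local Hebbian weight update

print(float(np.mean(eps ** 2)))      # squared error falls as F is minimized
```

Stacking such layers, with each layer's latent estimate predicted by the layer above, gives the hierarchical predictive coding networks discussed in the thesis; with linear dynamics and Gaussian noise the same error-minimization scheme yields Kalman-filter-like updates, which is the connection the abstract alludes to.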
Related papers
- Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z)
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Duality Principle and Biologically Plausible Learning: Connecting the Representer Theorem and Hebbian Learning [15.094554860151103]
We argue that the Representer theorem offers the perfect lens to study biologically plausible learning algorithms.
Our work sheds light on the pivotal role of the Representer theorem in advancing our comprehension of neural computation.
arXiv Detail & Related papers (2023-08-02T20:21:18Z)
- A theory of learning with constrained weight-distribution [17.492950552276067]
We develop a statistical mechanical theory of learning in neural networks that incorporates structural information as constraints.
We show that training in our algorithm can be interpreted as geodesic flows in the Wasserstein space of probability distributions.
Our theory and algorithm provide novel strategies for incorporating prior knowledge about weights into learning, and reveal a powerful connection between structure and function in neural networks.
arXiv Detail & Related papers (2022-06-14T00:43:34Z)
- Predictive Coding and Stochastic Resonance: Towards a Unified Theory of Auditory (Phantom) Perception [6.416574036611064]
To gain a mechanistic understanding of brain function, hypothesis-driven experiments should be accompanied by biologically plausible computational models.
With a special focus on tinnitus, we review recent work at the intersection of artificial intelligence, psychology, and neuroscience.
We conclude that two fundamental processing principles, both ubiquitous in the brain, best fit a vast number of experimental results.
arXiv Detail & Related papers (2022-04-07T10:47:58Z)
- Brain Principles Programming [0.3867363075280543]
Brain Principles Programming (BPP) is a formalization of the universal mechanisms (principles) by which the brain works with information.
The paper uses mathematical models and algorithms drawn from several underlying theories.
arXiv Detail & Related papers (2022-02-13T13:41:44Z)
- A Mathematical Walkthrough and Discussion of the Free Energy Principle [0.0]
The Free Energy Principle (FEP) is an influential and controversial theory which postulates a connection between the thermodynamics of self-organization and learning through variational inference.
The FEP has been applied extensively in neuroscience, and is beginning to make inroads in machine learning by spurring the construction of novel and powerful algorithms by which action, perception, and learning can all be unified under a single objective.
Here, we aim to provide a mathematically detailed, yet intuitive walk-through of the formulation and central claims of the FEP while also providing a discussion of the assumptions necessary and potential limitations of the theory.
arXiv Detail & Related papers (2021-08-30T16:11:49Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a learning method that uses a feedback controller to drive a deep neural network to match a desired output target; the control signal can then be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of principles, focusing on those that mostly concern higher-level and sequential conscious processing.
The aim of clarifying these particular principles is that they could help us build AI systems that benefit from these human abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance (a toy sketch of one such relaxation appears after this list).
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
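The abstract above argues that backprop-like credit assignment can be approximated with purely local computations, and the last entry in the list notes that neurally implausible features such as identical forward and backward weights can be removed by learning extra parameters with Hebbian rules. The snippet below is a hedged toy sketch of one such relaxation, a Kolen-Pollack-style mirrored update in which errors travel through a separately learned backward matrix; it is not the specific algorithm of the thesis or of the cited paper, and the network sizes, toy task, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Hedged toy sketch of relaxing the weight-symmetry constraint in
# error-driven learning: errors are sent backwards through a separate
# matrix B2 rather than W2.T, and B2 is trained with a local,
# Hebbian-style update plus weight decay (a Kolen-Pollack-style scheme)
# so that it gradually aligns with W2.T. All sizes, rates, and the toy
# task are illustrative assumptions, not taken from any of the papers.

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 8, 16, 4
W1 = 0.1 * rng.normal(size=(n_hid, n_in))
W2 = 0.1 * rng.normal(size=(n_out, n_hid))
B2 = 0.1 * rng.normal(size=(n_hid, n_out))   # learned backward weights

lr, wd = 0.05, 0.01
for _ in range(1000):
    x = rng.normal(size=(n_in, 1))
    y = np.sin(x[:n_out])                    # toy regression target
    h = np.tanh(W1 @ x)                      # forward pass
    e_out = y - W2 @ h                       # output error
    e_hid = (B2 @ e_out) * (1 - h ** 2)      # error routed through B2, not W2.T
    W2 = (1 - wd) * W2 + lr * (e_out @ h.T)  # local update: error x activity
    B2 = (1 - wd) * B2 + lr * (h @ e_out.T)  # mirrored Hebbian update for B2
    W1 = (1 - wd) * W1 + lr * (e_hid @ x.T)

print(float(np.mean(e_out ** 2)))            # error on the final sample
```

Because the backward matrix receives the transpose of the forward update plus weight decay, it converges towards the transposed forward weights over training, so the error signal it carries approaches the true backpropagated gradient while every update remains local to its synapse.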
This list is automatically generated from the titles and abstracts of the papers on this site.