A Neural Network Implementation for Free Energy Principle
- URL: http://arxiv.org/abs/2306.06792v1
- Date: Sun, 11 Jun 2023 22:14:21 GMT
- Title: A Neural Network Implementation for Free Energy Principle
- Authors: Jingwei Liu
- Abstract summary: The free energy principle (FEP) has been widely applied to account for various problems in fields such as cognitive science, neuroscience, social interaction, and hermeneutics.
This paper gives a preliminary attempt at bridging FEP and machine learning, via a classical neural network model, the Helmholtz machine.
Although the Helmholtz machine is not temporal, it gives an ideal parallel to the vanilla FEP and the hierarchical model of the brain, under which active inference and predictive coding can be formulated coherently.
- Score: 3.7937771805690392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The free energy principle (FEP), as an encompassing framework and a unified
brain theory, has been widely applied to account for various problems in fields
such as cognitive science, neuroscience, social interaction, and hermeneutics.
As a computational model deeply rooted in math and statistics, FEP posits an
optimization problem based on variational Bayes, which is solved either by
dynamic programming or expectation maximization in practice. However, there
seems to be a bottleneck in extending the FEP to machine learning and
implementing such models with neural networks. This paper gives a preliminary
attempt at bridging FEP and machine learning, via a classical neural network
model, the Helmholtz machine. As a variational machine learning model, the
Helmholtz machine is optimized by minimizing its free energy, the same
objective as FEP. Although the Helmholtz machine is not temporal, it gives an
ideal parallel to the vanilla FEP and the hierarchical model of the brain,
under which active inference and predictive coding can be formulated
coherently. Besides a detailed theoretical discussion, the paper also presents
a preliminary experiment to validate the hypothesis. By fine-tuning the trained
neural network through active inference, the model's accuracy is raised above
99%. At the same time, as a result of active sampling, the data distribution is
continuously deformed into a salience that conforms to the model's
representation.
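For orientation, the variational free energy that both FEP and the Helmholtz machine minimize can be written as F(x) = E_{q(h|x)}[log q(h|x) - log p(x, h)] = -log p(x) + KL(q(h|x) || p(h|x)), so lowering F tightens a bound on the model evidence. The sketch below (not the paper's code) shows a minimal binary Helmholtz machine trained with the classic wake-sleep algorithm, which reduces this free energy through local delta-rule updates; the toy data, layer sizes, and learning rate are illustrative assumptions, and the paper's active-inference fine-tuning experiment is not reproduced here.

    import numpy as np

    # Minimal binary Helmholtz machine trained with wake-sleep (illustrative
    # sketch; sizes, data, and learning rate are assumptions, not the paper's setup).
    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    d, k, lr = 8, 4, 0.05                     # visible units, hidden units, learning rate
    R, r_b = np.zeros((k, d)), np.zeros(k)    # recognition model q(h | x)
    G, x_b = np.zeros((d, k)), np.zeros(d)    # generative model p(x | h)
    g_b = np.zeros(k)                         # generative prior p(h)

    # Toy data: two noisy binary prototypes (stand-in for a real dataset)
    protos = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                       [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)

    for step in range(5000):
        x = protos[rng.integers(2)].copy()
        flip = rng.random(d) < 0.05
        x[flip] = 1.0 - x[flip]

        # Wake phase: infer h from data with the recognition net, then move the
        # generative net toward reproducing (h, x); this lowers the free energy.
        h = sample(sigmoid(R @ x + r_b))
        g_b += lr * (h - sigmoid(g_b))
        px = sigmoid(G @ h + x_b)
        G += lr * np.outer(x - px, h)
        x_b += lr * (x - px)

        # Sleep phase: dream a fantasy (h_f, x_f) top-down, then move the
        # recognition net toward recovering h_f from x_f.
        h_f = sample(sigmoid(g_b))
        x_f = sample(sigmoid(G @ h_f + x_b))
        qh = sigmoid(R @ x_f + r_b)
        R += lr * np.outer(h_f - qh, x_f)
        r_b += lr * (h_f - qh)

    # After training, top-down reconstructions of a prototype should resemble it.
    h = sample(sigmoid(R @ protos[0] + r_b))
    print(sigmoid(G @ h + x_b).round(2))

In the paper's framing, active inference goes one step further than the sketch above: besides updating its beliefs, the model also acts on the sampling process, which is why the fine-tuning experiment both raises accuracy and deforms the data distribution.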
Related papers
- A prescriptive theory for brain-like inference [0.0]
We show that maximizing the Evidence Lower Bound (ELBO) leads to a spiking neural network that performs Bayesian posterior inference.
The resulting model, the iterative Poisson VAE, has a closer connection to biological neurons than previous brain-inspired predictive models.
These findings suggest that optimizing ELBO, combined with Poisson assumptions, provides a solid foundation for developing prescriptive theories in NeuroAI.
arXiv Detail & Related papers (2024-10-25T06:00:18Z) - Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
arXiv Detail & Related papers (2024-02-11T17:40:26Z) - Neural net modeling of equilibria in NSTX-U [0.0]
We develop two neural networks relevant to equilibrium and shape control modeling.
Networks include Eqnet, a free-boundary equilibrium solver trained on the EFIT01 reconstruction algorithm, and Pertnet, which is trained on the Gspert code.
We report strong performance for both networks, indicating that these models could reliably be used within closed-loop simulations.
arXiv Detail & Related papers (2022-02-28T16:09:58Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - On Energy-Based Models with Overparametrized Shallow Neural Networks [44.74000986284978]
Energy-based models (EBMs) are a powerful framework for generative modeling.
In this work we focus on shallow neural networks.
We show that models trained in the so-called "active" regime provide a statistical advantage over those trained in the associated "lazy" or kernel regime.
arXiv Detail & Related papers (2021-04-15T15:34:58Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Sobolev training of thermodynamic-informed neural networks for smoothed elasto-plasticity models with level set hardening [0.0]
We introduce a deep learning framework designed to train smoothed elastoplasticity models with interpretable components.
By recasting the yield function as an evolving level set, we introduce a machine learning approach to predict the solutions of the Hamilton-Jacobi equation.
arXiv Detail & Related papers (2020-10-15T22:43:32Z) - Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the McCulloch-Pitts (MP) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains from stochastic regularization, making the difference in performance between neural ODE and neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)