The Free Energy Principle for Perception and Action: A Deep Learning
Perspective
- URL: http://arxiv.org/abs/2207.06415v1
- Date: Wed, 13 Jul 2022 11:07:03 GMT
- Title: The Free Energy Principle for Perception and Action: A Deep Learning
Perspective
- Authors: Pietro Mazzaglia, Tim Verbelen, Ozan Çatal, Bart Dhoedt
- Abstract summary: The free energy principle, and its corollary active inference, constitute a bio-inspired theory that assumes biological agents act to remain in a restricted set of preferred states of the world.
Under this principle, biological agents learn a generative model of the world and plan actions in the future that will maintain the agent in a homeostatic state that satisfies its preferences.
This manuscript probes newer perspectives for the active inference framework, grounding its theoretical aspects in more pragmatic affairs.
- Score: 4.6956495676681484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The free energy principle, and its corollary active inference, constitute a
bio-inspired theory that assumes biological agents act to remain in a
restricted set of preferred states of the world, i.e., they minimize their free
energy. Under this principle, biological agents learn a generative model of the
world and plan actions in the future that will maintain the agent in a
homeostatic state that satisfies its preferences. This framework lends itself
to being realized in silico, as it comprises important aspects that make it
computationally affordable, such as variational inference and amortized
planning. In this work, we investigate how deep learning tools can be used to
design and realize artificial agents based on active inference, providing a
deep-learning oriented presentation of the free energy principle, surveying
works relevant to both machine learning and active inference, and discussing
the design choices involved in the implementation process. This manuscript
probes newer perspectives for the active inference framework, grounding its
theoretical aspects in more pragmatic affairs and offering a practical guide
for active inference newcomers and a starting point for deep learning
practitioners who would like to investigate implementations of the free
energy principle.
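The abstract frames perception as variational inference: an amortized posterior q(s|o) over hidden states is fit to the agent's generative model p(o|s)p(s) by minimizing variational free energy (the negative evidence lower bound). Below is a minimal PyTorch sketch of that quantity, not the authors' implementation; the Gaussian forms, layer sizes, and names (Encoder, Decoder, free_energy) are illustrative assumptions.

# Minimal sketch of variational free energy (negative ELBO) for perception.
# Assumptions: Gaussian posterior and prior over hidden states, Gaussian
# likelihood over observations; sizes and module names are illustrative.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

OBS_DIM, STATE_DIM = 16, 4

class Encoder(nn.Module):            # amortized posterior q(s | o)
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(OBS_DIM, 2 * STATE_DIM)
    def forward(self, o):
        mu, log_std = self.net(o).chunk(2, dim=-1)
        return Normal(mu, log_std.exp())

class Decoder(nn.Module):            # likelihood p(o | s)
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(STATE_DIM, OBS_DIM)
    def forward(self, s):
        return Normal(self.net(s), 1.0)

def free_energy(o, encoder, decoder):
    """Variational free energy F = E_q[-log p(o|s)] + KL[q(s|o) || p(s)]."""
    q_s = encoder(o)
    s = q_s.rsample()                         # reparameterized sample from q(s|o)
    nll = -decoder(s).log_prob(o).sum(-1)     # expected negative log-likelihood
    prior = Normal(torch.zeros(STATE_DIM), torch.ones(STATE_DIM))
    kl = kl_divergence(q_s, prior).sum(-1)    # complexity term
    return (nll + kl).mean()                  # minimize w.r.t. encoder and decoder

# Usage: one gradient step of perception-as-inference on a batch of observations.
enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
obs = torch.randn(32, OBS_DIM)
loss = free_energy(obs, enc, dec)
opt.zero_grad(); loss.backward(); opt.step()

Minimizing this quantity jointly trains the amortized recognition model (perception) and the generative model, which is what makes the scheme computationally affordable to realize in silico, as the abstract notes.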
Related papers
- Demonstrating the Continual Learning Capabilities and Practical Application of Discrete-Time Active Inference [0.0]
Active inference is a mathematical framework for understanding how agents interact with their environments.
In this paper, we present a continual learning framework for agents operating in discrete time environments.
We demonstrate the agent's ability to relearn and refine its models efficiently, making it suitable for complex domains like finance and healthcare.
arXiv Detail & Related papers (2024-09-30T21:18:46Z)
- Dynamic planning in hierarchical active inference [0.0]
This study focuses on dynamic planning in active inference, by which we refer to the ability of the human brain to infer and impose motor trajectories related to cognitive decisions.
arXiv Detail & Related papers (2024-02-18T17:32:53Z)
- ConcEPT: Concept-Enhanced Pre-Training for Language Models [57.778895980999124]
ConcEPT aims to infuse conceptual knowledge into pre-trained language models.
It exploits external entity concept prediction to predict the concepts of entities mentioned in the pre-training contexts.
Experimental results show that ConcEPT gains improved conceptual knowledge with concept-enhanced pre-training.
arXiv Detail & Related papers (2024-01-11T05:05:01Z)
- Towards a General Framework for Continual Learning with Pre-training [55.88910947643436]
We present a general framework for continual learning of sequentially arrived tasks with the use of pre-training.
We decompose its objective into three hierarchical components, including within-task prediction, task-identity inference, and task-adaptive prediction.
We propose an innovative approach to explicitly optimize these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics.
arXiv Detail & Related papers (2023-10-21T02:03:38Z)
- Bayesian Reinforcement Learning with Limited Cognitive Load [43.19983737333797]
A theory of adaptive behavior should account for complex interactions between an agent's learning history, decisions, and capacity constraints.
Recent work in computer science has begun to clarify the principles that shape these dynamics by bridging ideas from reinforcement learning, Bayesian decision-making, and rate-distortion theory.
arXiv Detail & Related papers (2023-05-05T03:29:34Z)
- Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z)
- A Neural Active Inference Model of Perceptual-Motor Learning [62.39667564455059]
The active inference framework (AIF) is a promising new computational framework grounded in contemporary neuroscience.
In this study, we test the ability of the AIF to capture the role of anticipation in the visual guidance of action in humans.
We present a novel formulation of the prior function that maps a multi-dimensional world-state to a uni-dimensional distribution of free-energy.
arXiv Detail & Related papers (2022-11-16T20:00:38Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Prior Preference Learning from Experts: Designing a Reward with Active Inference [1.1602089225841632]
We claim that active inference can be interpreted using reinforcement learning (RL) algorithms.
Motivated by the concept of prior preference and a theoretical connection, we propose a simple but novel method for learning a prior preference from experts.
arXiv Detail & Related papers (2021-01-22T04:03:45Z)
- Reinforcement Learning through Active Inference [62.997667081978825]
We show how ideas from active inference can augment traditional reinforcement learning approaches.
We develop and implement a novel objective for decision making, which we term the free energy of the expected future; a minimal sketch of this kind of objective follows this list.
We demonstrate that the resulting algorithm successfully balances exploration and exploitation, simultaneously achieving robust performance on several challenging RL benchmarks with sparse, well-shaped, and no rewards.
arXiv Detail & Related papers (2020-02-28T10:28:21Z)
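For planning, the survey and the last entry above score candidate actions with an expected free energy (the "free energy of the expected future") that trades off preference satisfaction against expected ambiguity. Below is a minimal discrete-state sketch of one common risk-plus-ambiguity form of that score; the toy matrices A and B, the preference vector C, and the function expected_free_energy are illustrative assumptions, not any paper's exact formulation.

# Minimal discrete sketch of expected free energy (EFE) for action selection.
# Assumptions: a toy 2-state / 2-observation model; A, B, C below are made up
# for illustration and are not taken from any of the surveyed papers.
import numpy as np

A = np.array([[0.9, 0.2],           # likelihood A[o, s] = p(o | s); columns sum to 1
              [0.1, 0.8]])
B = {                               # transitions B[a][s', s] = p(s' | s, a)
    "stay": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "move": np.array([[0.1, 0.9], [0.9, 0.1]]),
}
C = np.array([0.9, 0.1])            # preferred outcome distribution p_pref(o)
q_s = np.array([0.5, 0.5])          # current belief over hidden states

def expected_free_energy(action):
    """EFE = risk (KL from preferred outcomes) + ambiguity (expected obs entropy)."""
    q_s_next = B[action] @ q_s                       # predicted state belief
    q_o = A @ q_s_next                               # predicted outcome distribution
    risk = np.sum(q_o * (np.log(q_o) - np.log(C)))   # KL[q(o|a) || p_pref(o)]
    H_A = -np.sum(A * np.log(A), axis=0)             # entropy of p(o|s) per state
    ambiguity = H_A @ q_s_next                       # expected observation entropy
    return risk + ambiguity

# Usage: softmax over negative EFE yields a distribution over actions that
# favours the action expected to keep the agent in its preferred outcomes.
G = np.array([expected_free_energy(a) for a in B])
policy = np.exp(-G) / np.exp(-G).sum()
print(dict(zip(B, policy)))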