Active Inference as a Model of Agency
- URL: http://arxiv.org/abs/2401.12917v1
- Date: Tue, 23 Jan 2024 17:09:25 GMT
- Title: Active Inference as a Model of Agency
- Authors: Lancelot Da Costa, Samuel Tenka, Dominic Zhao, Noor Sajid
- Abstract summary: We show that any behaviour complying with physically sound assumptions about how biological agents interact with the world integrates exploration and exploitation.
This description, known as active inference, refines the free energy principle, a popular descriptive framework for action and perception originating in neuroscience.
- Score: 1.9019250262578857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Is there a canonical way to think of agency beyond reward maximisation? In
this paper, we show that any type of behaviour complying with physically sound
assumptions about how macroscopic biological agents interact with the world
canonically integrates exploration and exploitation in the sense of minimising
risk and ambiguity about states of the world. This description, known as active
inference, refines the free energy principle, a popular descriptive framework
for action and perception originating in neuroscience. Active inference
provides a normative Bayesian framework to simulate and model agency that is
widely used in behavioural neuroscience, reinforcement learning (RL) and
robotics. The usefulness of active inference for RL is three-fold. \emph{a})
Active inference provides a principled solution to the exploration-exploitation
dilemma that usefully simulates biological agency. \emph{b}) It provides an
explainable recipe to simulate behaviour, whence behaviour follows as an
explainable mixture of exploration and exploitation under a generative world
model, and all differences in behaviour are explicit in differences in world
model. \emph{c}) This framework is universal in the sense that it is
theoretically possible to rewrite any RL algorithm conforming to the
descriptive assumptions of active inference as an active inference algorithm.
Thus, active inference can be used as a tool to uncover and compare the
commitments and assumptions of more specific models of agency.
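The exploration-exploitation integration described in the abstract rests on the standard decomposition of expected free energy into risk (divergence from preferred outcomes) and ambiguity (expected observation uncertainty). The following is a minimal one-step numerical sketch of that decomposition for a discrete generative model; the function name and the simple discrete setting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def expected_free_energy(q_states, likelihood, log_prior_obs):
    """One-step expected free energy of a policy, decomposed into
    risk (exploitation) plus ambiguity (exploration).

    q_states: predicted state distribution under the policy, shape (S,)
    likelihood: observation model P(o|s), columns sum to 1, shape (O, S)
    log_prior_obs: log preferences over observations, shape (O,)
    """
    # Predicted observation distribution Q(o | policy).
    q_obs = likelihood @ q_states
    # Risk: KL divergence from predicted to preferred observations.
    risk = np.sum(q_obs * (np.log(q_obs + 1e-16) - log_prior_obs))
    # Ambiguity: expected entropy of the likelihood mapping, E_Q[H[P(o|s)]].
    entropy_per_state = -np.sum(likelihood * np.log(likelihood + 1e-16), axis=0)
    ambiguity = q_states @ entropy_per_state
    return risk + ambiguity
```

A policy whose predicted observations match the agent's preferences and whose likelihood mapping is deterministic scores zero; noisier likelihoods raise the ambiguity term, which is what drives information-seeking behaviour.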
Related papers
- Demonstrating the Continual Learning Capabilities and Practical Application of Discrete-Time Active Inference [0.0]
Active inference is a mathematical framework for understanding how agents interact with their environments.
In this paper, we present a continual learning framework for agents operating in discrete time environments.
We demonstrate the agent's ability to relearn and refine its models efficiently, making it suitable for complex domains like finance and healthcare.
arXiv Detail & Related papers (2024-09-30T21:18:46Z) - A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning [48.59516337905877]
Learning a good representation is a crucial challenge for Reinforcement Learning (RL) agents.
Recent work has developed theoretical insights into these algorithms.
We take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective.
arXiv Detail & Related papers (2024-06-04T07:22:12Z) - Inference of Affordances and Active Motor Control in Simulated Agents [0.5161531917413706]
We introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture.
We show that our architecture develops latent states that can be interpreted as affordance maps.
In combination with active inference, we show that flexible, goal-directed behavior can be invoked.
arXiv Detail & Related papers (2022-02-23T14:13:04Z) - Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Feature-Based Interpretable Reinforcement Learning based on State-Transition Models [3.883460584034766]
Growing concerns regarding the operational use of AI models in the real world have caused a surge of interest in explaining AI models' decisions to humans.
We propose a method for offering local explanations on risk in reinforcement learning.
arXiv Detail & Related papers (2021-05-14T23:43:11Z) - What is Going on Inside Recurrent Meta Reinforcement Learning Agents? [63.58053355357644]
Recurrent meta reinforcement learning (meta-RL) agents are agents that employ a recurrent neural network (RNN) to "learn a learning algorithm".
We shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem using the Partially Observable Markov Decision Process (POMDP) framework.
arXiv Detail & Related papers (2021-04-29T20:34:39Z) - Realising Active Inference in Variational Message Passing: the Outcome-blind Certainty Seeker [3.5450828190071655]
This paper provides a complete mathematical treatment of the active inference framework -- in discrete time and state spaces.
We leverage the theoretical connection between active inference and variational message passing.
We show that using a fully factorized variational distribution simplifies the expected free energy.
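For context, the simplification in question concerns the expected free energy of a policy $\pi$, which in its standard textbook form sums a risk and an ambiguity term over future time steps; a fully factorized (mean-field) variational distribution makes each term evaluable factor by factor. This is a hedged sketch of the standard decomposition, not the paper's exact derivation:

```latex
G(\pi) = \sum_{\tau}
  \underbrace{D_{\mathrm{KL}}\!\left[\,Q(o_\tau \mid \pi)\,\big\|\,P(o_\tau)\,\right]}_{\text{risk}}
  + \underbrace{\mathbb{E}_{Q(s_\tau \mid \pi)}\!\left[\,\mathrm{H}\!\left[P(o_\tau \mid s_\tau)\right]\,\right]}_{\text{ambiguity}}
```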
arXiv Detail & Related papers (2021-04-23T19:40:55Z) - Causal blankets: Theory and algorithmic framework [59.43413767524033]
We introduce a novel framework to identify perception-action loops (PALOs) directly from data based on the principles of computational mechanics.
Our approach is based on the notion of causal blanket, which captures sensory and active variables as dynamical sufficient statistics.
arXiv Detail & Related papers (2020-08-28T10:26:17Z) - On the Relationship Between Active Inference and Control as Inference [62.997667081978825]
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.
Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem.
arXiv Detail & Related papers (2020-06-23T13:03:58Z) - Reinforcement Learning through Active Inference [62.997667081978825]
We show how ideas from active inference can augment traditional reinforcement learning approaches.
We develop and implement a novel objective for decision making, which we term the free energy of the expected future.
We demonstrate that the resulting algorithm successfully balances exploration and exploitation, achieving robust performance on several challenging RL benchmarks with sparse, well-shaped, and no rewards.
arXiv Detail & Related papers (2020-02-28T10:28:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.