Realising Active Inference in Variational Message Passing: the
Outcome-blind Certainty Seeker
- URL: http://arxiv.org/abs/2104.11798v1
- Date: Fri, 23 Apr 2021 19:40:55 GMT
- Title: Realising Active Inference in Variational Message Passing: the
Outcome-blind Certainty Seeker
- Authors: Théophile Champion, Marek Grześ, Howard Bowman
- Abstract summary: This paper provides a complete mathematical treatment of the active inference framework in discrete time and state spaces.
We leverage the theoretical connection between active inference and variational message passing.
We show that using a fully factorized variational distribution simplifies the expected free energy.
- Score: 3.5450828190071655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Active inference is a state-of-the-art framework in neuroscience that offers
a unified theory of brain function. It is also proposed as a framework for
planning in AI. Unfortunately, the complex mathematics required to create new
models can impede the application of active inference in neuroscience and AI
research. This paper addresses this problem by providing a complete
mathematical treatment of the active inference framework -- in discrete time
and state spaces -- and the derivation of the update equations for any new
model. We leverage the theoretical connection between active inference and
variational message passing as described by John Winn and Christopher M. Bishop
in 2005. Since variational message passing is a well-defined methodology for
deriving Bayesian belief update equations, this paper opens the door to
advanced generative models for active inference. We show that using a fully
factorized variational distribution simplifies the expected free energy -- that
furnishes priors over policies -- so that agents seek unambiguous states.
Finally, we consider future extensions that support deep tree searches for
sequential policy optimisation -- based upon structure learning and belief
propagation.
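The risk-plus-ambiguity reading of the expected free energy can be made concrete with a small sketch. This is an illustrative decomposition for a discrete generative model, not the paper's implementation; the matrices, variable names, and numbers below are assumptions chosen for the example. It shows why dropping the risk term (as in the paper's fully factorized result) leaves an agent that ranks policies purely by the ambiguity of the states they visit.

```python
import numpy as np

# Hedged sketch: the standard risk + ambiguity decomposition of the
# expected free energy, G(pi) = risk + ambiguity, for one time step.

def efe_terms(A, q_s, log_c):
    """Return (risk, ambiguity) for predicted state beliefs q_s.

    A[o, s]  : likelihood p(o | s)
    q_s[s]   : predicted state distribution under the policy
    log_c[o] : log prior preferences over outcomes
    """
    q_o = A @ q_s                                   # predicted outcomes
    # Risk: KL divergence from preferred to predicted outcomes.
    risk = float(np.sum(q_o * (np.log(q_o + 1e-16) - log_c)))
    # Ambiguity: expected entropy of the likelihood mapping.
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)    # H[p(o|s)] per state
    ambiguity = float(H_A @ q_s)
    return risk, ambiguity

# Two states: state 0 maps almost deterministically to outcome 0,
# state 1 is maximally ambiguous about its outcome.
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])
log_c = np.log(np.array([0.5, 0.5]))                # flat preferences

_, amb0 = efe_terms(A, np.array([1.0, 0.0]), log_c)
_, amb1 = efe_terms(A, np.array([0.0, 1.0]), log_c)
# If the EFE reduces to the ambiguity term, a policy visiting state 0
# (about 0.325 nats) is preferred over state 1 (ln 2, about 0.693 nats):
# the agent seeks unambiguous states regardless of outcome preferences.
```

With flat preferences and only the ambiguity term in play, the agent is "outcome-blind" in exactly the sense of the title: it is driven to states whose likelihood mapping is sharp, not to states whose outcomes it prefers.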
Related papers
- Demonstrating the Continual Learning Capabilities and Practical Application of Discrete-Time Active Inference [0.0]
Active inference is a mathematical framework for understanding how agents interact with their environments.
In this paper, we present a continual learning framework for agents operating in discrete time environments.
We demonstrate the agent's ability to relearn and refine its models efficiently, making it suitable for complex domains like finance and healthcare.
arXiv Detail & Related papers (2024-09-30T21:18:46Z) - Advancing Interactive Explainable AI via Belief Change Theory [5.842480645870251]
We argue that this type of formalisation provides a framework and a methodology to develop interactive explanations.
We first define a novel, logic-based formalism to represent explanatory information shared between humans and machines.
We then consider real world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, where our formalism may be instantiated.
arXiv Detail & Related papers (2024-08-13T13:11:56Z) - Active Inference as a Model of Agency [1.9019250262578857]
We show that any behaviour complying with physically sound assumptions about how biological agents interact with the world integrates exploration and exploitation.
This description, known as active inference, refines the free energy principle, a popular descriptive framework for action and perception originating in neuroscience.
arXiv Detail & Related papers (2024-01-23T17:09:25Z) - Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline
Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z) - Branching Time Active Inference with Bayesian Filtering [3.5450828190071655]
Branching Time Active Inference (Champion et al., 2021b,a) is a framework that treats planning as a form of Bayesian model expansion.
In this paper, we harness the efficiency of an alternative method for inference called Bayesian Filtering.
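The generic predict/update recursion behind Bayesian filtering can be sketched in a few lines. This is the textbook discrete Bayes filter, not Champion et al.'s implementation; the transition and likelihood matrices below are illustrative assumptions. The point is that beliefs are updated sequentially, one observation at a time, rather than by re-optimising a posterior over all time steps at once.

```python
import numpy as np

# Illustrative sketch of discrete Bayesian filtering: one predict/update
# step over a hidden state, repeated for each incoming observation.

def bayes_filter_step(belief, B, A, obs):
    """One predict/update step.

    belief[s] : p(s_{t-1} | o_{1:t-1}), the current state belief
    B[s', s]  : transition p(s_t = s' | s_{t-1} = s)
    A[o, s]   : likelihood p(o_t = o | s_t = s)
    obs       : index of the outcome observed at time t
    """
    predicted = B @ belief              # predict: push belief through dynamics
    posterior = A[obs] * predicted      # update: weight by the likelihood
    return posterior / posterior.sum()  # normalise to a distribution

B = np.array([[0.8, 0.2],
              [0.2, 0.8]])              # sticky two-state dynamics
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])              # outcome 0 favours state 0, etc.

belief = np.array([0.5, 0.5])           # uniform initial belief
for o in [0, 0, 1]:                     # example observation sequence
    belief = bayes_filter_step(belief, B, A, o)
```

Each step costs a matrix-vector product, which is where the efficiency over full variational re-optimisation comes from in this kind of scheme.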
arXiv Detail & Related papers (2021-12-14T14:01:07Z) - Active Inference in Robotics and Artificial Agents: Survey and
Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z) - Deep active inference agents using Monte-Carlo methods [3.8233569758620054]
We present a neural architecture for building deep active inference agents in continuous state-spaces using Monte-Carlo sampling.
Our approach enables agents to learn environmental dynamics efficiently, while maintaining task performance.
Results show that deep active inference provides a flexible framework to develop biologically-inspired intelligent agents.
arXiv Detail & Related papers (2020-06-07T15:10:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.