Branching Time Active Inference with Bayesian Filtering
- URL: http://arxiv.org/abs/2112.07406v1
- Date: Tue, 14 Dec 2021 14:01:07 GMT
- Title: Branching Time Active Inference with Bayesian Filtering
- Authors: Théophile Champion, Marek Grześ, Howard Bowman
- Abstract summary: Branching Time Active Inference (Champion et al., 2021b,a) is a framework proposing to look at planning as a form of Bayesian model expansion.
In this paper, we harness the efficiency of an alternative method for inference called Bayesian Filtering.
- Score: 3.5450828190071655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Branching Time Active Inference (Champion et al., 2021b,a) is a framework
proposing to look at planning as a form of Bayesian model expansion. Its roots
can be found in Active Inference (Friston et al., 2016; Da Costa et al., 2020;
Champion et al., 2021c), a neuroscientific framework widely used for brain
modelling, as well as in Monte Carlo Tree Search (Browne et al., 2012), a
method broadly applied in the Reinforcement Learning literature. Until now,
inference over the latent variables has been carried out by taking advantage of the
flexibility offered by Variational Message Passing (Winn and Bishop, 2005), an
iterative process that can be understood as sending messages along the edges of
a factor graph (Forney, 2001). In this paper, we harness the efficiency of an
alternative method for inference called Bayesian Filtering (Fox et al., 2003),
which does not require the iteration of the update equations until convergence
of the Variational Free Energy. Instead, this scheme alternates between two
phases: the integration of evidence and the prediction of future states. Both
phases can be performed efficiently, yielding a seventy-fold speed-up over the
state of the art.
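To make the two-phase scheme concrete, the sketch below implements plain discrete-state Bayesian filtering in Python with NumPy. It is a minimal illustration under assumed names: the transition matrix `A`, likelihood matrix `B`, and helper functions are hypothetical and do not come from the paper's implementation. The point is that each time step performs exactly one evidence-integration step and one prediction step, with no iteration of update equations until the Variational Free Energy converges.

```python
import numpy as np

def integrate_evidence(prior, likelihood):
    """Evidence-integration phase: reweight the prior by the likelihood."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def predict(posterior, A):
    """Prediction phase: push the current belief through the transition model."""
    return A @ posterior

def filter_sequence(observations, A, B, belief):
    """Alternate the two phases once per time step (no iterative updates)."""
    beliefs = []
    for o in observations:
        belief = integrate_evidence(belief, B[o])  # B[o] is P(o | s) as a vector over s
        beliefs.append(belief)
        belief = predict(belief, A)                # belief over the next hidden state
    return beliefs

# Toy model: 2 hidden states, 2 observation symbols.
A = np.array([[0.9, 0.2],   # A[i, j] = P(s_t = i | s_{t-1} = j)
              [0.1, 0.8]])
B = np.array([[0.8, 0.3],   # B[o, j] = P(o_t = o | s_t = j)
              [0.2, 0.7]])
print(filter_sequence([0, 0, 1], A, B, np.array([0.5, 0.5])))
```

Variational Message Passing would instead re-send messages and re-apply its updates repeatedly until a convergence criterion on the free energy is met; replacing that loop with a single pass per time step is where the reported speed-up comes from.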
Related papers
- ExDBN: Exact learning of Dynamic Bayesian Networks [2.2499166814992435]
We propose a score-based learning approach for causal learning from data.
We show that the proposed approach turns out to produce excellent results when applied to small and medium-sized synthetic instances of up to 25 time-series.
Two applications in bioscience and finance, to which the method is directly applied, further underscore the opportunities in developing highly accurate, globally convergent solvers.
arXiv Detail & Related papers (2024-10-21T15:27:18Z)
- Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness [0.3749861135832073]
We use a counterfactual approach, referred to as sequential transport, to discuss individual fairness.
We demonstrate its application through numerical experiments on both synthetic and real datasets.
arXiv Detail & Related papers (2024-08-06T20:02:57Z)
- Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion [61.03681839276652]
Diffusion Forcing is a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels.
We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens (see the illustrative sketch after this list).
arXiv Detail & Related papers (2024-07-01T15:43:25Z)
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Learning a Diffusion Model Policy from Rewards via Q-Score Matching [93.0191910132874]
We present a theoretical framework linking the structure of diffusion model policies to a learned Q-function.
We propose a new policy update method from this theory, which we denote Q-score matching.
arXiv Detail & Related papers (2023-12-18T23:31:01Z)
- Quasi Black-Box Variational Inference with Natural Gradients for Bayesian Learning [84.90242084523565]
We develop an optimization algorithm suitable for Bayesian learning in complex models.
Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations.
arXiv Detail & Related papers (2022-05-23T18:54:27Z)
- A Variational Approach to Bayesian Phylogenetic Inference [7.251627034538359]
We present a variational framework for Bayesian phylogenetic analysis.
We train the variational approximation via stochastic gradient ascent and adopt estimators for continuous and discrete variational parameters.
Experiments on a benchmark of challenging real data phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods.
arXiv Detail & Related papers (2022-04-16T08:23:48Z)
- A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state of the art in simulation settings and on real data from large-scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z)
- Realising Active Inference in Variational Message Passing: the Outcome-blind Certainty Seeker [3.5450828190071655]
This paper provides a complete mathematical treatment of the active inference framework -- in discrete time and state spaces.
We leverage the theoretical connection between active inference and variational message passing.
We show that using a fully factorized variational distribution simplifies the expected free energy.
arXiv Detail & Related papers (2021-04-23T19:40:55Z)
- The FMRIB Variational Bayesian Inference Tutorial II: Stochastic Variational Bayes [1.827510863075184]
This tutorial revisits the original FMRIB Variational Bayes tutorial.
This new approach bears a lot of similarity to, and has benefited from, computational methods applied to machine learning algorithms.
arXiv Detail & Related papers (2020-07-03T11:31:52Z)
- Lipreading using Temporal Convolutional Networks [57.41253104365274]
The current state-of-the-art model for recognition of isolated words in-the-wild consists of a residual network and Bidirectional Gated Recurrent Unit layers.
We address the limitations of this model and we propose changes which further improve its performance.
Our proposed model yields absolute improvements of 1.2% and 3.2% on the LRW and LRW1000 datasets, respectively.
arXiv Detail & Related papers (2020-01-23T17:49:35Z)
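As referenced in the Diffusion Forcing entry above, here is a hedged Python sketch of the per-token noise-level idea. The `denoiser` callable, the linear noise schedule, and the noise-prediction loss are illustrative assumptions rather than the paper's exact training recipe; the only point shown is that each token in the sequence receives its own independently sampled noise level.

```python
import torch
import torch.nn.functional as F

def diffusion_forcing_loss(denoiser, x, num_levels=1000):
    """x: clean token sequence of shape (batch, seq_len, dim).

    Hypothetical sketch: corrupt each token with an independently sampled
    noise level, then train a causal denoiser to predict the added noise.
    """
    b, t, d = x.shape
    # Independent noise level per token (the key idea of the paper).
    k = torch.randint(0, num_levels, (b, t))         # per-token timestep
    alpha = 1.0 - k.float() / num_levels             # toy linear schedule
    a = alpha[..., None]                             # broadcast over features
    noise = torch.randn_like(x)
    x_noisy = a.sqrt() * x + (1.0 - a).sqrt() * noise
    pred = denoiser(x_noisy, k)                      # assumed causal model
    return F.mse_loss(pred, noise)
```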
This list is automatically generated from the titles and abstracts of the papers on this site.