Branching Time Active Inference with Bayesian Filtering
- URL: http://arxiv.org/abs/2112.07406v1
- Date: Tue, 14 Dec 2021 14:01:07 GMT
- Title: Branching Time Active Inference with Bayesian Filtering
- Authors: Th\'eophile Champion, Marek Grze\'s, Howard Bowman
- Abstract summary: Branching Time Active Inference (Champion et al., 2021b,a) is a framework proposing to look at planning as a form of Bayesian model expansion.
In this paper, we harness the efficiency of an alternative method for inference called Bayesian Filtering.
- Score: 3.5450828190071655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Branching Time Active Inference (Champion et al., 2021b,a) is a framework
proposing to look at planning as a form of Bayesian model expansion. Its root
can be found in Active Inference (Friston et al., 2016; Da Costa et al., 2020;
Champion et al., 2021c), a neuroscientific framework widely used for brain
modelling, as well as in Monte Carlo Tree Search (Browne et al., 2012), a
method broadly applied in the Reinforcement Learning literature. Up to now, the
inference of the latent variables was carried out by taking advantage of the
flexibility offered by Variational Message Passing (Winn and Bishop, 2005), an
iterative process that can be understood as sending messages along the edges of
a factor graph (Forney, 2001). In this paper, we harness the efficiency of an
alternative method for inference called Bayesian Filtering (Fox et al., 2003),
which does not require the iteration of the update equations until convergence
of the Variational Free Energy. Instead, this scheme alternates between two
phases: integration of evidence and prediction of future states. Both of those
phases can be performed efficiently, providing a seventy-fold speed-up over the
state-of-the-art.
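
As a rough illustration (not the paper's actual implementation), the sketch below shows a generic discrete-state Bayesian filter that alternates between the two phases named in the abstract: integration of evidence (update) and prediction of future states. The likelihood matrix A, transition matrix B, and toy dimensions are assumptions introduced here for illustration only.

```python
import numpy as np

def predict(belief, B):
    """Prediction phase: propagate the current belief through the transition model."""
    return B @ belief

def update(belief, A, obs_index):
    """Evidence-integration phase: weight the belief by the observation likelihood,
    then renormalize to obtain the posterior over states."""
    posterior = A[obs_index, :] * belief
    return posterior / posterior.sum()

# Toy example: 3 hidden states, 2 possible observations (illustrative values).
A = np.array([[0.9, 0.2, 0.1],
              [0.1, 0.8, 0.9]])          # A[o, s] = P(o | s)
B = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.3],
              [0.1, 0.2, 0.6]])          # B[s', s] = P(s' | s)
belief = np.ones(3) / 3                  # uniform prior over states

for obs in [0, 1, 1]:                    # a short observation sequence
    belief = update(belief, A, obs)      # integrate evidence
    belief = predict(belief, B)          # predict the next state
    print(belief)
```

Unlike Variational Message Passing, this predict/update cycle requires no iteration of update equations until convergence of the Variational Free Energy; each phase is a single pass over the belief vector.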
Related papers
- Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review [59.856222854472605]
This tutorial provides an in-depth guide on inference-time guidance and alignment methods for optimizing downstream reward functions in diffusion models.
Practical applications in fields such as biology often require sample generation that maximizes specific metrics.
We discuss (1) fine-tuning methods combined with inference-time techniques, (2) inference-time algorithms based on search algorithms such as Monte Carlo tree search, and (3) connections between inference-time algorithms in language models and diffusion models.
arXiv Detail & Related papers (2025-01-16T17:37:35Z) - ExDBN: Exact learning of Dynamic Bayesian Networks [2.2499166814992435]
We propose a score-based learning approach for causal learning from data.
We show that the proposed approach produces excellent results when applied to small and medium-sized synthetic instances of up to 25 time series.
Two interesting applications in bio-science and finance, to which the method is directly applied, further stress the opportunities in developing highly accurate, globally convergent solvers.
arXiv Detail & Related papers (2024-10-21T15:27:18Z) - Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion [61.03681839276652]
Diffusion Forcing is a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels.
We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens.
arXiv Detail & Related papers (2024-07-01T15:43:25Z) - von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Learning a Diffusion Model Policy from Rewards via Q-Score Matching [93.0191910132874]
We present a theoretical framework linking the structure of diffusion model policies to a learned Q-function.
We propose a new policy update method from this theory, which we denote Q-score matching.
arXiv Detail & Related papers (2023-12-18T23:31:01Z) - Quasi Black-Box Variational Inference with Natural Gradients for
Bayesian Learning [84.90242084523565]
We develop an optimization algorithm suitable for Bayesian learning in complex models.
Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations.
arXiv Detail & Related papers (2022-05-23T18:54:27Z) - A Variational Approach to Bayesian Phylogenetic Inference [7.251627034538359]
We present a variational framework for Bayesian phylogenetic analysis.
We train the variational approximation via stochastic gradient ascent and adopt estimators for continuous and discrete variational parameters.
Experiments on a benchmark of challenging real data phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods.
arXiv Detail & Related papers (2022-04-16T08:23:48Z) - Realising Active Inference in Variational Message Passing: the
Outcome-blind Certainty Seeker [3.5450828190071655]
This paper provides a complete mathematical treatment of the active inference framework -- in discrete time and state spaces.
We leverage the theoretical connection between active inference and variational message passing.
We show that using a fully factorized variational distribution simplifies the expected free energy.
arXiv Detail & Related papers (2021-04-23T19:40:55Z) - Variational inference with a quantum computer [0.0]
Inference is the task of drawing conclusions about unobserved variables given observations of related variables.
One alternative is variational inference, where a candidate probability distribution is optimized to approximate the posterior distribution over unobserved variables.
In this work, we propose quantum Born machines as variational distributions over discrete variables.
arXiv Detail & Related papers (2021-03-11T15:12:21Z) - The FMRIB Variational Bayesian Inference Tutorial II: Stochastic
Variational Bayes [1.827510863075184]
This tutorial revisits the original FMRIB Variational Bayes tutorial.
This new approach bears a lot of similarity to, and has benefited from, computational methods applied to machine learning algorithms.
arXiv Detail & Related papers (2020-07-03T11:31:52Z) - Lipreading using Temporal Convolutional Networks [57.41253104365274]
The current model for recognition of isolated words in-the-wild consists of a residual network and Bi-directional Gated Recurrent Unit layers.
We address the limitations of this model and we propose changes which further improve its performance.
Our proposed model results in absolute improvements of 1.2% and 3.2%, respectively, on these datasets.
arXiv Detail & Related papers (2020-01-23T17:49:35Z)