Improved Variational Bayesian Phylogenetic Inference using Mixtures
- URL: http://arxiv.org/abs/2310.00941v1
- Date: Mon, 2 Oct 2023 07:18:48 GMT
- Title: Improved Variational Bayesian Phylogenetic Inference using Mixtures
- Authors: Oskar Kviman, Ricky Molén and Jens Lagergren
- Abstract summary: VBPI-Mixtures is an algorithm designed to enhance the accuracy of phylogenetic posterior distributions.
VBPI-Mixtures is capable of capturing distributions over tree-topologies that VBPI fails to model.
- Score: 4.551386476350572
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present VBPI-Mixtures, an algorithm designed to enhance the accuracy of
phylogenetic posterior distributions, particularly for tree-topology and
branch-length approximations. Although Variational Bayesian Phylogenetic
Inference (VBPI), a leading-edge black-box variational inference (BBVI)
framework, achieves remarkable approximations of these distributions, the
multimodality of the tree-topology posterior presents a formidable challenge to
sampling-based learning techniques such as BBVI. Advanced deep learning
methodologies such as normalizing flows and graph neural networks have been
explored to refine the branch-length posterior approximation, yet efforts to
ameliorate the posterior approximation over tree topologies have been lacking.
Our novel VBPI-Mixtures algorithm bridges this gap by harnessing the latest
breakthroughs in mixture learning within the BBVI domain. As a result,
VBPI-Mixtures is capable of capturing distributions over tree-topologies that
VBPI fails to model. We deliver state-of-the-art performance on difficult
density estimation tasks across numerous real phylogenetic datasets.
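To make the idea concrete, below is a minimal NumPy sketch of the mixture-ELBO principle on a 1-D multimodal toy target. It illustrates the general technique only, not the authors' implementation (which operates over tree topologies and branch lengths); the target density and all constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(z):
    """Unnormalized log density of a bimodal toy 'posterior'
    (a stand-in for the multimodal tree-topology posterior)."""
    return np.logaddexp(-0.5 * (z + 2.0) ** 2, -0.5 * (z - 2.0) ** 2)

def log_normal(z, mu, sigma):
    return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def mixture_elbo(mus, sigmas, n_samples=1000):
    """ELBO for q(z) = (1/S) * sum_s N(z; mu_s, sigma_s^2), estimated as
    (1/S) * sum_s E_{q_s}[log p(z) - log q(z)]."""
    S = len(mus)
    total = 0.0
    for s in range(S):
        z = mus[s] + sigmas[s] * rng.standard_normal(n_samples)
        # Evaluate the *full* mixture density at samples drawn from q_s.
        log_q = np.logaddexp.reduce(
            [log_normal(z, mus[r], sigmas[r]) for r in range(S)], axis=0
        ) - np.log(S)
        total += np.mean(log_target(z) - log_q)
    return total / S

# A single Gaussian sits between the modes; a 2-component mixture covers both.
print("1 component :", mixture_elbo([0.0], [1.0]))
print("2 components:", mixture_elbo([-2.0, 2.0], [1.0, 1.0]))
```

The two-component bound comes out higher because the mixture can place mass on both modes; that unimodal failure mode is exactly what VBPI-Mixtures targets.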
Related papers
- Improving Tree Probability Estimation with Stochastic Optimization and Variance Reduction [11.417249588622926]
Subsplit Bayesian networks (SBNs) provide a powerful probabilistic graphical model for tree probability estimation.
The expectation-maximization (EM) method currently used for learning SBN parameters does not scale up to large data sets.
We introduce several computationally efficient methods for training SBNs and show that variance reduction could be the key for better performance.
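As a rough, hedged illustration of the stochastic-optimization idea, the toy below replaces full-batch EM with minibatch gradient ascent on the log-likelihood of a drastically simplified categorical model of subsplit decisions; the contexts and data are made up and are not the paper's actual SBN machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for SBN parameters: one logit vector per parent context,
# giving a categorical distribution over child subsplits.
logits = {"root": np.zeros(3), "s0": np.zeros(2), "s1": np.zeros(2)}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Each "tree" is encoded as the subsplit decision taken at each context.
data = [[("root", 0), ("s0", 1)], [("root", 0), ("s0", 0)],
        [("root", 1), ("s1", 1)]] * 50

def sgd_step(batch, lr=0.1):
    """One minibatch step of gradient ascent on sum_tree log p(tree)."""
    grads = {k: np.zeros_like(v) for k, v in logits.items()}
    for tree in batch:
        for ctx, choice in tree:
            p = softmax(logits[ctx])
            g = -p
            g[choice] += 1.0          # d log softmax(choice) / d logits
            grads[ctx] += g
    for k in logits:
        logits[k] += lr * grads[k] / len(batch)

for epoch in range(100):
    rng.shuffle(data)
    for i in range(0, len(data), 16):  # minibatches instead of full-batch EM
        sgd_step(data[i:i + 16])

print("p(subsplit | root):", softmax(logits["root"]).round(3))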
arXiv Detail & Related papers (2024-09-09T02:22:52Z)
- Variational Bayesian Phylogenetic Inference with Semi-implicit Branch Length Distributions [6.553961278427792]
We propose a more flexible family of branch length variational posteriors based on semi-implicit hierarchical distributions using graph neural networks.
We show that this construction yields straightforward permutation-equivariant distributions, and can therefore handle the non-Euclidean branch-length space across different tree topologies with ease.
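A hedged sketch of the semi-implicit construction follows (a small MLP stands in for the paper's graph neural networks): branch lengths are drawn by first sampling latent noise and then mapping it to per-edge LogNormal parameters, so the marginal over branch lengths is an implicit mixture richer than any fixed parametric family.

```python
import numpy as np

rng = np.random.default_rng(2)

n_edges, latent_dim, hidden = 5, 4, 16

# Toy MLP standing in for the paper's graph neural network.
W1 = rng.standard_normal((latent_dim, hidden)) * 0.1
W2 = rng.standard_normal((hidden, 2 * n_edges)) * 0.1

def sample_branch_lengths():
    """Semi-implicit sampling: z ~ N(0, I), then q(b | z) is LogNormal
    with parameters produced from z.  Marginalizing over z makes q(b)
    an implicit (infinite) mixture."""
    z = rng.standard_normal(latent_dim)
    h = np.tanh(z @ W1) @ W2
    mu, log_sigma = h[:n_edges], h[n_edges:]
    eps = rng.standard_normal(n_edges)
    return np.exp(mu + np.exp(log_sigma) * eps)  # positive branch lengths

print(sample_branch_lengths().round(4))
```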
arXiv Detail & Related papers (2024-08-09T13:29:08Z)
- Fast, accurate and lightweight sequential simulation-based inference using Gaussian locally linear mappings [0.820217860574125]
We propose an alternative to neural-network-based simulation-based inference (SBI) that provides approximations to both the likelihood and the posterior distribution.
Our approach produces accurate posterior inference when compared to state-of-the-art NN-based SBI methods, even for multimodal posteriors.
We illustrate our results on several benchmark models from the SBI literature and on a biological model of the translation kinetics after mRNA transfection.
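A minimal sketch of the underlying trick, assuming the Gaussian locally linear mapping view: if parameters and data are modeled jointly as a Gaussian mixture, the posterior is again a Gaussian mixture, obtained by conditioning each component in closed form. The single-component toy below uses made-up numbers.

```python
import numpy as np

# Joint Gaussian over (theta, x) for a single mixture component.
mu = np.array([0.0, 1.0])              # [mu_theta, mu_x]
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])         # joint covariance

def condition_on_x(x_obs):
    """Closed-form p(theta | x) for one Gaussian component.  In a
    mixture model this is applied per component, weighted by the
    component responsibilities, giving a Gaussian-mixture posterior."""
    mu_t, mu_x = mu
    s_tt, s_tx, s_xx = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
    post_mean = mu_t + s_tx / s_xx * (x_obs - mu_x)
    post_var = s_tt - s_tx ** 2 / s_xx
    return post_mean, post_var

print(condition_on_x(2.0))   # (0.4, 0.68)
```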
arXiv Detail & Related papers (2024-03-12T09:48:17Z)
- PhyloGFN: Phylogenetic inference with generative flow networks [57.104166650526416]
We introduce the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference.
Because GFlowNets are well-suited for sampling complex structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies.
We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets.
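To make the GFlowNet connection concrete, here is a hedged sketch of the trajectory-balance objective commonly used to train such samplers; PhyloGFN's actual states are sequences of tree-building actions, and the tiny trajectory below is purely illustrative.

```python
import numpy as np

def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
    """Trajectory balance: Z * prod_t P_F(s_t -> s_{t+1}) should equal
    R(x) * prod_t P_B(s_t <- s_{t+1}) for every complete trajectory;
    training drives the squared log-ratio to zero."""
    lhs = log_Z + np.sum(log_pf_steps)
    rhs = log_reward + np.sum(log_pb_steps)
    return (lhs - rhs) ** 2

# One 3-step tree-building trajectory (toy numbers):
log_pf = np.log([0.5, 0.4, 0.9])   # forward policy probabilities
log_pb = np.log([1.0, 0.5, 0.5])   # backward policy probabilities
loss = trajectory_balance_loss(log_Z=2.0, log_pf_steps=log_pf,
                               log_pb_steps=log_pb, log_reward=1.3)
print(round(float(loss), 4))
```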
arXiv Detail & Related papers (2023-10-12T23:46:08Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on back-propagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
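A hedged NumPy sketch of the block-local training idea (the published CaFo uses convolutional blocks; fixed random feature blocks and linear softmax heads are simplifications here): each cascaded block trains its own label predictor with its own loss, so no gradient ever crosses block boundaries.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, n_classes, n_blocks = 200, 20, 3, 2

X = rng.standard_normal((n, d))
y = rng.integers(0, n_classes, n)
Y = np.eye(n_classes)[y]                      # one-hot labels

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

feats, heads = X, []
for _ in range(n_blocks):
    # Fixed random feature block; no backprop across blocks.
    B = rng.standard_normal((feats.shape[1], d)) * 0.3
    feats = np.maximum(feats @ B, 0.0)        # ReLU block output
    W = np.zeros((d, n_classes))              # this block's own predictor
    for _ in range(200):                      # local cross-entropy training
        P = softmax(feats @ W)
        W -= 0.1 * feats.T @ (P - Y) / n
    heads.append((feats, W))

# Predict by averaging the per-block label distributions.
avg = np.mean([softmax(f @ W) for f, W in heads], axis=0)
print("train accuracy:", (avg.argmax(1) == y).mean())
```

Because each block's head depends only on that block's output, the blocks can be trained on separate devices, which is the parallel-deployment property noted above.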
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- Quasi Black-Box Variational Inference with Natural Gradients for Bayesian Learning [84.90242084523565]
We develop an optimization algorithm suitable for Bayesian learning in complex models.
Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations.
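A hedged sketch of one classic form of natural-gradient variational updates for a 1-D Gaussian posterior (illustrative of the approach, not necessarily the paper's exact algorithm): the only model-specific input is the gradient of the log-joint, and the Hessian term is estimated via Stein's identity, so almost no model-specific derivation is needed.

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_log_joint(z):
    """Only model-specific quantity needed: d/dz log p(z, data).
    Toy model whose exact posterior is N(3, 1)."""
    return -(z - 3.0)

m, prec = 0.0, 0.1      # variational q = N(m, 1/prec)
rho, n_mc = 0.1, 256

for _ in range(200):
    v = 1.0 / prec
    z = m + np.sqrt(v) * rng.standard_normal(n_mc)
    g = grad_log_joint(z)
    grad_m = g.mean()                      # E[d log p / dz]  (Bonnet)
    exp_hess = (g * (z - m)).mean() / v    # E[d^2 log p / dz^2] (Stein)
    # Natural-gradient updates in (mean, precision) form:
    prec = (1 - rho) * prec - rho * exp_hess
    m = m + rho * (1.0 / prec) * grad_m

print("posterior approx: mean=%.3f  var=%.3f" % (m, 1.0 / prec))
```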
arXiv Detail & Related papers (2022-05-23T18:54:27Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Improved Variational Bayesian Phylogenetic Inference with Normalizing Flows [7.119831726757417]
We propose a new type of VBPI, VBPI-NF, as a first step to empower phylogenetic posterior estimation with deep learning techniques.
VBPI-NF uses normalizing flows to provide a rich family of flexible branch length distributions that generalize across different tree topologies.
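A hedged sketch of the mechanism with a single affine coupling layer in NumPy (VBPI-NF's real flows are conditioned on tree topology, and the "networks" here are toy random matrices): a base Gaussian is pushed through an invertible map, branch lengths are made positive with an exponential, and the density follows from the change-of-variables formula.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, half = 6, 3

# Parameters of one affine coupling layer (toy random values).
Ws = rng.standard_normal((half, half)) * 0.1   # scale "network"
Wt = rng.standard_normal((half, half)) * 0.1   # shift "network"

def coupling_forward(z):
    """y1 = z1; y2 = z2 * exp(s(z1)) + t(z1).  Returns y and log|det J|."""
    z1, z2 = z[:half], z[half:]
    s, t = np.tanh(z1 @ Ws), z1 @ Wt
    y = np.concatenate([z1, z2 * np.exp(s) + t])
    return y, s.sum()

def log_standard_normal(z):
    return -0.5 * (z ** 2).sum() - 0.5 * len(z) * np.log(2 * np.pi)

# Sample branch lengths: base Gaussian -> coupling -> exp for positivity.
z = rng.standard_normal(dim)
y, logdet = coupling_forward(z)
branch_lengths = np.exp(y)
# log q(b) by change of variables (the exp adds sum(y) to the log-det).
log_q = log_standard_normal(z) - logdet - y.sum()
print(branch_lengths.round(3), round(float(log_q), 3))
```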
arXiv Detail & Related papers (2020-12-01T13:10:00Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
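A hedged sketch of the "a bit Bayesian" recipe for a binary last layer: fit MAP weights, build a Laplace (Gaussian) approximation around them, and average the predictive over weight samples; all sizes and hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 200, 5

X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
prior_prec = 1.0

# MAP fit of a logistic-regression "last layer" by gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y) + prior_prec * w) / n

# Laplace approximation: covariance = inverse Hessian of the loss at the MAP.
p = sigmoid(X @ w)
H = X.T @ (X * (p * (1 - p))[:, None]) + prior_prec * np.eye(d)
cov = np.linalg.inv(H)

# Bayesian predictive: average the sigmoid over weight samples.
x_far = 10.0 * rng.standard_normal(d)          # far-away test point
ws = rng.multivariate_normal(w, cov, size=500)
print("MAP confidence     :", sigmoid(x_far @ w).round(3))
print("Laplace confidence :", sigmoid(ws @ x_far).mean().round(3))
```

Far from the data, the MAP prediction saturates near 0 or 1, while the Laplace predictive pulls back toward 0.5; that is the overconfidence fix the paper formalizes.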
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.