Linear Noise Approximation Assisted Bayesian Inference on Mechanistic Model of Partially Observed Stochastic Reaction Network
- URL: http://arxiv.org/abs/2405.02783v2
- Date: Fri, 28 Jun 2024 23:30:36 GMT
- Title: Linear Noise Approximation Assisted Bayesian Inference on Mechanistic Model of Partially Observed Stochastic Reaction Network
- Authors: Wandi Xu, Wei Xie
- Abstract summary: This paper develops an efficient Bayesian inference approach for a partially observed enzymatic stochastic reaction network (SRN).
An interpretable linear noise approximation (LNA) metamodel is proposed to approximate the likelihood of observations.
An efficient posterior sampling approach is developed by utilizing the gradients of the derived likelihood to speed up the convergence of Markov Chain Monte Carlo.
- Score: 2.325005809983534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To support online mechanism learning and facilitate digital twin development for biomanufacturing processes, this paper develops an efficient Bayesian inference approach for a partially observed enzymatic stochastic reaction network (SRN), a fundamental building block of multi-scale bioprocess mechanistic models. To tackle the critical challenges posed by the nonlinear stochastic differential equation (SDE)-based mechanistic model with partially observed states and measurement errors, an interpretable Bayesian updating linear noise approximation (LNA) metamodel, incorporating the structural information of the mechanistic model, is proposed to approximate the likelihood of observations. Then, an efficient posterior sampling approach is developed by utilizing the gradients of the derived likelihood to speed up the convergence of Markov chain Monte Carlo (MCMC). The empirical study demonstrates that the proposed approach has promising performance.
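The paper's pairing of an LNA-based Gaussian likelihood with gradient-informed MCMC can be sketched on a hypothetical birth-death reaction network (rates k1, k2) rather than the paper's enzymatic SRN; the network, parameter names, and the use of finite-difference gradients with a MALA kernel are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def lna_moments(theta, x0, t_grid):
    """Euler-integrate the LNA mean/variance ODEs for a birth-death
    network  0 -> X (rate k1),  X -> 0 (rate k2*x):
        dphi/dt = k1 - k2*phi
        dV/dt   = -2*k2*V + k1 + k2*phi
    """
    k1, k2 = theta
    phi, V = float(x0), 0.0
    phis, Vs = [phi], [V]
    for dt in np.diff(t_grid):
        phi, V = (phi + dt * (k1 - k2 * phi),
                  V + dt * (-2.0 * k2 * V + k1 + k2 * phi))
        phis.append(phi)
        Vs.append(V)
    return np.array(phis), np.array(Vs)

def log_lik(theta, y, x0, t_grid, obs_idx, sigma2):
    """Gaussian likelihood of noisy, partially observed states under the LNA."""
    phi, V = lna_moments(theta, x0, t_grid)
    m, v = phi[obs_idx], V[obs_idx] + sigma2
    return -0.5 * np.sum(np.log(2.0 * np.pi * v) + (y - m) ** 2 / v)

def mala(logpost, theta0, n_iter=1000, step=1e-3, seed=0):
    """Metropolis-adjusted Langevin: likelihood gradients (finite
    differences here, for illustration) steer the proposals and
    speed up MCMC convergence."""
    rng = np.random.default_rng(seed)

    def grad(th, eps=1e-5):
        g = np.zeros_like(th)
        for i in range(th.size):
            e = np.zeros_like(th)
            e[i] = eps
            g[i] = (logpost(th + e) - logpost(th - e)) / (2.0 * eps)
        return g

    th, lp = np.array(theta0, dtype=float), logpost(theta0)
    chain = []
    for _ in range(n_iter):
        mu = th + 0.5 * step * grad(th)
        prop = mu + np.sqrt(step) * rng.standard_normal(th.shape)
        lp_prop = logpost(prop)
        if np.isfinite(lp_prop):
            mu_back = prop + 0.5 * step * grad(prop)
            log_q_fwd = -np.sum((prop - mu) ** 2) / (2.0 * step)
            log_q_bwd = -np.sum((th - mu_back) ** 2) / (2.0 * step)
            if np.log(rng.uniform()) < lp_prop - lp + log_q_bwd - log_q_fwd:
                th, lp = prop, lp_prop
        chain.append(th.copy())
    return np.array(chain)
```

In the paper's setting the gradients are available analytically from the LNA structure; the finite-difference version above only illustrates the mechanics.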
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
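VSMC builds on standard sequential Monte Carlo; a minimal bootstrap particle filter on a hypothetical linear-Gaussian state-space model (all parameters illustrative) shows the log-marginal-likelihood estimate that SMC-based variational methods optimize.

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, a=0.9, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for  x_t = a*x_{t-1} + N(0, q),
    y_t = x_t + N(0, r).  Returns filtered means and the
    log-marginal-likelihood estimate."""
    rng = np.random.default_rng(seed)
    # draw from the stationary distribution of the AR(1) latent state
    x = rng.standard_normal(n_particles) * np.sqrt(q / (1.0 - a ** 2))
    log_Z, means = 0.0, []
    for yt in y:
        x = a * x + rng.standard_normal(n_particles) * np.sqrt(q)    # propagate
        logw = -0.5 * (np.log(2.0 * np.pi * r) + (yt - x) ** 2 / r)  # weight
        m = logw.max()
        log_Z += m + np.log(np.mean(np.exp(logw - m)))  # likelihood increment
        w = np.exp(logw - m)
        w /= w.sum()
        means.append(np.sum(w * x))
        x = rng.choice(x, size=n_particles, p=w)        # multinomial resampling
    return np.array(means), log_Z
```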
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Free-Form Variational Inference for Gaussian Process State-Space Models [21.644570034208506]
We propose a new method for inference in Bayesian GPSSMs.
Our method is based on freeform variational inference via inducing Hamiltonian Monte Carlo.
We show that our approach can learn transition dynamics and latent states more accurately than competing methods.
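The "inducing Hamiltonian Monte Carlo" in this method builds on the plain HMC kernel; as a rough, self-contained sketch (not the paper's algorithm), here is textbook HMC with a leapfrog integrator on a generic differentiable target.

```python
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples=500, step=0.1, n_leap=20, seed=0):
    """Plain HMC: leapfrog-integrate Hamiltonian dynamics, then
    accept/reject with a Metropolis correction."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)         # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * step * grad_logp(x_new)   # opening half step
        for _ in range(n_leap - 1):
            x_new += step * p_new
            p_new += step * grad_logp(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_logp(x_new)   # closing half step
        # Metropolis correction on the Hamiltonian (potential + kinetic)
        log_acc = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_acc:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)
```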
arXiv Detail & Related papers (2023-02-20T11:34:16Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm is a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
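The centralized building block behind such distributed filtering is the classical forward recursion for finite-state HMMs; a minimal sketch with an illustrative two-state model follows.

```python
import numpy as np

def hmm_filter(y_seq, A, B, pi0):
    """Forward filtering for a finite-state HMM.
    A[i, j] = P(next state j | state i), B[i, k] = P(obs k | state i).
    Returns the filtered posteriors P(x_t | y_{1:t}), one row per step."""
    alpha = pi0 * B[:, y_seq[0]]
    alpha = alpha / alpha.sum()
    out = [alpha]
    for yk in y_seq[1:]:
        alpha = (alpha @ A) * B[:, yk]  # one-step predict, then Bayes update
        alpha = alpha / alpha.sum()     # normalize back to a distribution
        out.append(alpha)
    return np.array(out)
```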
arXiv Detail & Related papers (2022-12-05T19:40:17Z) - Multielement polynomial chaos Kriging-based metamodelling for Bayesian inference of non-smooth systems [0.0]
This paper presents a surrogate modelling technique based on domain partitioning for Bayesian parameter inference of highly nonlinear engineering models.
The developed surrogate model combines, in a piecewise function, an array of local Polynomial Chaos based Kriging metamodels constructed on a finite set of non-overlapping subdomains of the input space.
The efficiency and accuracy of the proposed approach are validated through two case studies, including an analytical benchmark and a numerical case study.
arXiv Detail & Related papers (2022-12-05T13:22:39Z) - Counting Phases and Faces Using Bayesian Thermodynamic Integration [77.34726150561087]
We introduce a new approach to reconstruction of the thermodynamic functions and phase boundaries in two-parametric statistical mechanics systems.
We use the proposed approach to accurately reconstruct the partition functions and phase diagrams of the Ising model and the exactly solvable non-equilibrium TASEP.
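Thermodynamic integration rests on the identity d log Z(beta)/d beta = <-E>_beta; the toy sketch below applies it to a small 1D periodic Ising chain, computing the Boltzmann averages by exact enumeration (the paper uses Bayesian estimates from sampled data instead).

```python
import numpy as np
from itertools import product

def coupling(s):
    """Sum of nearest-neighbour products on a 1D periodic Ising chain
    (the energy is E = -coupling, so <-E> = <coupling>)."""
    s = np.asarray(s)
    return np.sum(s * np.roll(s, 1))

def exact_log_Z(beta, n=8):
    """Ground truth by brute-force enumeration of all 2^n spin states."""
    return np.log(sum(np.exp(beta * coupling(s))
                      for s in product([-1, 1], repeat=n)))

def ti_log_Z(beta_max, n=8, grid=51):
    """Thermodynamic integration:
    log Z(b) = n*log 2 + integral_0^b <coupling>_t dt,
    with log Z(0) = n*log 2 since all 2^n states are equally likely."""
    betas = np.linspace(0.0, beta_max, grid)
    C = np.array([coupling(s) for s in product([-1, 1], repeat=n)])
    mean_C = []
    for b in betas:
        w = np.exp(b * C)
        mean_C.append(np.sum(w * C) / w.sum())
    mean_C = np.array(mean_C)
    db = betas[1] - betas[0]
    integral = db * (mean_C.sum() - 0.5 * (mean_C[0] + mean_C[-1]))  # trapezoid
    return n * np.log(2.0) + integral
```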
arXiv Detail & Related papers (2022-05-18T17:11:23Z) - Dynamic Bayesian Network Auxiliary ABC-SMC for Hybrid Model Bayesian Inference to Accelerate Biomanufacturing Process Mechanism Learning and Robust Control [2.727760379582405]
We present a knowledge graph hybrid model characterizing complex causal interdependencies of underlying bioprocessing mechanisms.
It can faithfully capture the important properties, including nonlinear reactions, partially observed state, and nonstationary dynamics.
We derive a posterior distribution quantifying model uncertainty, which can facilitate mechanism learning and support robust process control.
arXiv Detail & Related papers (2022-05-05T02:54:21Z) - A Variational Approach to Bayesian Phylogenetic Inference [7.251627034538359]
We present a variational framework for Bayesian phylogenetic analysis.
We train the variational approximation via stochastic gradient ascent and adopt estimators for continuous and discrete variational parameters.
Experiments on a benchmark of challenging real data phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods.
arXiv Detail & Related papers (2022-04-16T08:23:48Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Modeling Stochastic Microscopic Traffic Behaviors: a Physics Regularized Gaussian Process Approach [1.6242924916178285]
This study presents a microscopic traffic model that can capture randomness and measure errors in the real world.
Since one unique feature of the proposed framework is the capability of capturing both car-following and lane-changing behaviors with one single model, numerical tests are carried out with two separated datasets.
arXiv Detail & Related papers (2020-07-17T06:03:32Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the role of the noise in its success is still unclear.
We show that heavy-tailed fluctuations commonly arise in the parameters due to multiplicative noise.
A detailed analysis describes key factors, including step size and data; state-of-the-art neural network models all exhibit similar behavior.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.