MM Algorithms to Estimate Parameters in Continuous-time Markov Chains
- URL: http://arxiv.org/abs/2302.08588v1
- Date: Thu, 16 Feb 2023 21:25:27 GMT
- Title: MM Algorithms to Estimate Parameters in Continuous-time Markov Chains
- Authors: Giovanni Bacci, Anna Ingólfsdóttir, Kim G. Larsen, Raphaël Reynouard
- Abstract summary: We introduce the class of parametric CTMCs, where transition rates are functions over a set of parameters.
We present iterative likelihood estimation algorithms for parametric CTMCs covering two learning scenarios.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continuous-time Markov chains (CTMCs) are a popular modeling formalism that
constitutes the underlying semantics for real-time probabilistic systems such
as queuing networks, stochastic process algebras, and calculi for systems
biology. Prism and Storm are popular model checking tools that provide a number
of powerful analysis techniques for CTMCs. These tools accept models expressed
as the parallel composition of a number of modules interacting with each other.
The outcome of the analysis is strongly dependent on the parameter values used
in the model which govern the timing and probability of events of the resulting
CTMC. However, for some applications, parameter values have to be empirically
estimated from partially-observable executions. In this work, we address the
problem of estimating parameter values of CTMCs expressed as Prism models from
a number of partially-observable executions. We introduce the class of
parametric CTMCs -- CTMCs whose transition rates are polynomial functions over
a set of parameters -- as an abstraction of CTMCs covering a large class of Prism
models. Then, building on the family of algorithms known by the initials MM, for
minorization-maximization, we present iterative maximum likelihood estimation
algorithms for parametric CTMCs covering two learning scenarios: when both
state-labels and dwell times are observable, or only state-labels are. We
conclude by illustrating the use of our technique in a simple but non-trivial
case study: the analysis of the spread of COVID-19 in the presence of lockdown
countermeasures.
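In the fully observable scenario (both state-labels and dwell times are seen), the complete-data likelihood of a plain, non-parametric CTMC has a closed-form maximizer: the rate from state i to state j is the number of observed i-to-j jumps divided by the total time spent in i. The sketch below illustrates only this base case; the `estimate_rates` helper and the toy trajectories are illustrative assumptions, not code from the paper, whose MM algorithms generalize this estimate to rates that are polynomial functions of shared parameters.

```python
from collections import defaultdict

def estimate_rates(trajectories):
    """MLE of CTMC transition rates from fully observed trajectories.

    Each trajectory is a list of (state, dwell_time) pairs. The MLE is
    q[i -> j] = N_ij / T_i, where N_ij counts observed i -> j jumps and
    T_i is the total time spent in state i across all trajectories.
    """
    jumps = defaultdict(int)    # (i, j) -> number of observed i -> j transitions
    dwell = defaultdict(float)  # i -> total time spent in state i
    for traj in trajectories:
        for (i, t), (j, _) in zip(traj, traj[1:]):
            dwell[i] += t
            jumps[(i, j)] += 1
        # The last visit contributes dwell time but no outgoing jump.
        last_state, last_t = traj[-1]
        dwell[last_state] += last_t
    return {(i, j): n / dwell[i] for (i, j), n in jumps.items()}

# Toy data: two partially overlapping runs over states {"a", "b"}.
trajs = [
    [("a", 2.0), ("b", 1.0), ("a", 2.0)],
    [("a", 1.0), ("b", 3.0)],
]
rates = estimate_rates(trajs)
# q(a -> b) = 2 jumps / 5.0 time units in "a" = 0.4
# q(b -> a) = 1 jump  / 4.0 time units in "b" = 0.25
```

When rates are constrained to be polynomials in a shared parameter vector, this closed form no longer applies directly; the MM approach replaces the likelihood with a surrogate that is easier to maximize at each iteration.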
Related papers
- Recursive Learning of Asymptotic Variational Objectives [49.69399307452126]
General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.
Online sequential IWAE (OSIWAE) allows for online learning of both model parameters and a Markovian recognition model for inferring latent states.
This approach is more theoretically well-founded than recently proposed online variational SMC methods.
arXiv Detail & Related papers (2024-11-04T16:12:37Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Bayesian tomography using polynomial chaos expansion and deep generative networks [0.0]
We present a strategy combining the excellent reconstruction performances of a variational autoencoder (VAE) with the accuracy of PCA-PCE surrogate modeling.
Within the MCMC process, the parametrization of the VAE is leveraged for prior exploration and sample proposals.
arXiv Detail & Related papers (2023-07-09T16:44:37Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm is a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Cyclical Variational Bayes Monte Carlo for Efficient Multi-Modal Posterior Distributions Evaluation [0.0]
Variational inference is an alternative approach to sampling methods to estimate posterior approximations.
The Variational Bayesian Monte Carlo (VBMC) method is investigated with the purpose of dealing with statistical model updating problems.
arXiv Detail & Related papers (2022-02-23T17:31:42Z)
- Contrastive predictive coding for Anomaly Detection in Multi-variate Time Series Data [6.463941665276371]
We propose a Time-series Representational Learning through Contrastive Predictive Coding (TRL-CPC) towards anomaly detection in MVTS data.
First, we jointly optimize an encoder, an auto-regressor and a non-linear transformation function to effectively learn the representations of the MVTS data sets.
arXiv Detail & Related papers (2022-02-08T04:25:29Z)
- Efficient Learning and Decoding of the Continuous-Time Hidden Markov Model for Disease Progression Modeling [119.50438407358862]
We present the first complete characterization of efficient EM-based learning methods for CT-HMM models.
We show that EM-based learning consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics.
We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer's disease dataset.
arXiv Detail & Related papers (2021-10-26T20:06:05Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Markov-Chain Monte Carlo Approximation of the Ideal Observer using Generative Adversarial Networks [14.792685152780795]
The Ideal Observer (IO) performance has been advocated when optimizing medical imaging systems for signal detection tasks.
To approximate the IO test statistic, sampling-based methods that employ Markov-Chain Monte Carlo (MCMC) techniques have been developed.
Deep learning methods that employ generative adversarial networks (GANs) hold great promise to learn object models from image data.
arXiv Detail & Related papers (2020-01-26T21:51:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.