Filtering for Aggregate Hidden Markov Models with Continuous
Observations
- URL: http://arxiv.org/abs/2011.02521v2
- Date: Fri, 6 Nov 2020 04:35:35 GMT
- Title: Filtering for Aggregate Hidden Markov Models with Continuous
Observations
- Authors: Qinsheng Zhang, Rahul Singh, Yongxin Chen
- Abstract summary: We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM).
We propose an aggregate inference algorithm called continuous observation collective forward-backward algorithm.
- Score: 13.467017642143581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a class of filtering problems for large populations where each
individual is modeled by the same hidden Markov model (HMM). In this paper, we
focus on aggregate inference problems in HMMs with discrete state space and
continuous observation space. The continuous observations are aggregated in a
way such that the individuals are indistinguishable from measurements. We
propose an aggregate inference algorithm called continuous observation
collective forward-backward algorithm. It extends the recently proposed
collective forward-backward algorithm for aggregate inference in HMMs with
discrete observations to the case of continuous observations. The efficacy of
this algorithm is illustrated through several numerical experiments.
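To make the single-individual building block concrete, the sketch below runs standard forward-backward smoothing on a discrete-state HMM with scalar Gaussian emissions, i.e. continuous observations. This is only an illustrative sketch: the paper's collective algorithm aggregates such messages over an indistinguishable population, which is not reproduced here, and all function names and parameter choices are assumptions rather than the authors' implementation.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    # Scalar Gaussian density, vectorized over the per-state means/stds.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def forward_backward_gaussian(y, pi0, A, means, stds):
    """Posterior state marginals p(x_t | y_{1:T}) for one individual.
    Hypothetical building block; the paper's collective variant works
    with aggregate observations over many such individuals."""
    T, S = len(y), len(pi0)
    # Emission likelihoods from the continuous observations, shape (T, S).
    B = np.stack([gauss_pdf(yt, means, stds) for yt in y])
    alpha = np.zeros((T, S))   # normalized forward messages
    beta = np.ones((T, S))     # normalized backward messages
    alpha[0] = pi0 * B[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta       # unnormalized posterior marginals
    return gamma / gamma.sum(axis=1, keepdims=True)
```

Per-step normalization of the messages keeps the recursion numerically stable without changing the resulting marginals.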
Related papers
- Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis [56.442307356162864]
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework.
We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points.
Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z)
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- Estimating Latent Population Flows from Aggregated Data via Inversing Multi-Marginal Optimal Transport [57.16851632525864]
We study the problem of estimating latent population flows from aggregated count data.
This problem arises when individual trajectories are not available due to privacy issues or measurement fidelity.
We propose to estimate the transition flows from aggregated data by learning the cost functions of the MOT framework.
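For intuition about the optimal-transport machinery this entry refers to, the sketch below solves a two-marginal, fixed-cost simplification with the standard Sinkhorn iteration: given aggregate histograms at two consecutive times and a cost matrix, entropic-regularized OT yields an estimated transition flow. The paper's framework is multi-marginal and learns the cost; the names and parameters here are illustrative assumptions only.

```python
import numpy as np

def sinkhorn_flow(mu, nu, C, eps=0.1, iters=200):
    """Entropic-OT transition-flow estimate between aggregate histograms
    mu (time t) and nu (time t+1). Two-marginal, known-cost sketch; the
    MOT framework in the paper learns C from data instead."""
    K = np.exp(-C / eps)                # Gibbs kernel of the cost
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)              # scale to match column marginal nu
        u = mu / (K @ v)                # scale to match row marginal mu
    return u[:, None] * K * v[None, :]  # coupling diag(u) K diag(v)
```

Smaller `eps` makes the estimated flow closer to the unregularized transport plan at the price of slower convergence.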
arXiv Detail & Related papers (2022-12-30T03:03:23Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Forward-Backward Latent State Inference for Hidden Continuous-Time semi-Markov Chains [28.275654187024376]
We show that non-sampling-based latent state inference used in HSMMs can be generalized to latent continuous-time semi-Markov chains (CTSMCs).
We formulate integro-differential forward and backward equations adjusted to the observation likelihood and introduce an exact integral equation for the Bayesian posterior marginals.
We evaluate our approaches in latent state inference scenarios in comparison to classical HSMMs.
arXiv Detail & Related papers (2022-10-17T13:01:14Z)
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown [54.40592050737724]
We consider the general problem of learning an HMM from data with unknown missing observation locations.
We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain.
We show that under proper specifications one can reconstruct the process dynamics as well as if the missing observations positions were known.
arXiv Detail & Related papers (2022-03-12T22:40:43Z)
- Inference of collective Gaussian hidden Markov models [8.348171150908724]
We consider inference problems for a class of continuous state collective hidden Markov models.
We propose an aggregate inference algorithm called collective Gaussian forward-backward algorithm.
arXiv Detail & Related papers (2021-07-24T17:49:01Z)
- Conjugate Mixture Models for Clustering Multimodal Data [24.640116037967985]
The problem of multimodal clustering arises whenever the data are gathered with several physically different sensors.
We show that multimodal clustering can be addressed within a novel framework, namely conjugate mixture models.
arXiv Detail & Related papers (2020-12-09T10:13:22Z)
- Learning Hidden Markov Models from Aggregate Observations [13.467017642143581]
We propose an algorithm for estimating the parameters of a time-homogeneous hidden Markov model from aggregate observations.
Our algorithm is built upon expectation-maximization and the recently proposed aggregate inference algorithm, the Sinkhorn belief propagation.
arXiv Detail & Related papers (2020-11-23T06:41:22Z)
- Incremental inference of collective graphical models [16.274397329511196]
In particular, we address the problem of estimating the aggregate marginals of a Markov chain from noisy aggregate observations.
We propose a sliding window Sinkhorn belief propagation (SW-SBP) algorithm that utilizes a sliding window filter of the most recent noisy aggregate observations.
arXiv Detail & Related papers (2020-06-26T15:04:31Z)
- A Robust Functional EM Algorithm for Incomplete Panel Count Data [66.07942227228014]
We propose a functional EM algorithm to estimate the counting process mean function under a missing-completely-at-random (MCAR) assumption.
The proposed algorithm wraps several popular panel count inference methods, seamlessly deals with incomplete counts and is robust to misspecification of the Poisson process assumption.
We illustrate the utility of the proposed algorithm through numerical experiments and an analysis of smoking cessation data.
arXiv Detail & Related papers (2020-03-02T20:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.