Learning Hidden Markov Models from Aggregate Observations
- URL: http://arxiv.org/abs/2011.11236v2
- Date: Sun, 14 Nov 2021 18:09:57 GMT
- Title: Learning Hidden Markov Models from Aggregate Observations
- Authors: Rahul Singh, Qinsheng Zhang, Yongxin Chen
- Abstract summary: We propose an algorithm for estimating the parameters of a time-homogeneous hidden Markov model from aggregate observations.
Our algorithm is built upon expectation-maximization and the recently proposed aggregate inference algorithm, the Sinkhorn belief propagation.
- Score: 13.467017642143581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an algorithm for estimating the parameters of a
time-homogeneous hidden Markov model from aggregate observations. This problem
arises when only the population level counts of the number of individuals at
each time step are available, from which one seeks to learn the individual
hidden Markov model. Our algorithm is built upon expectation-maximization and
the recently proposed aggregate inference algorithm, the Sinkhorn belief
propagation. Compared with existing methods such as expectation-maximization
with non-linear belief propagation, our algorithm exhibits convergence
guarantees. Moreover, our learning framework naturally reduces to the standard
Baum-Welch learning algorithm when observations corresponding to a single
individual are recorded. We further extend our learning algorithm to handle
HMMs with continuous observations. The efficacy of our algorithm is
demonstrated on a variety of datasets.
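As a concrete illustration of the single-individual special case mentioned in the abstract, the sketch below implements standard Baum-Welch (EM with scaled forward-backward recursions) for a discrete, time-homogeneous HMM in numpy. This is a minimal illustrative sketch, not the authors' implementation: the function name, random initialisation, and toy data are assumptions, and the paper's aggregate setting replaces this per-individual forward-backward E-step with the Sinkhorn belief propagation E-step described above.

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    """Standard Baum-Welch (EM) for a discrete, time-homogeneous HMM.
    Illustrative single-individual special case only."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    # Random initialisation of initial, transition and emission distributions.
    pi = rng.random(n_states); pi /= pi.sum()
    A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
    T = len(obs)

    for _ in range(n_iter):
        # E-step: scaled forward recursion.
        alpha = np.zeros((T, n_states)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # Scaled backward recursion.
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta                      # posterior state marginals
        xi = np.zeros((T - 1, n_states, n_states))  # posterior transition marginals
        for t in range(T - 1):
            xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / c[t + 1]

        # M-step: re-estimate parameters from expected counts.
        pi = gamma[0] / gamma[0].sum()
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return pi, A, B

# Toy usage with a single observation sequence (illustrative data).
obs = np.array([0, 1, 0, 0, 2, 1, 0, 2, 2, 1])
pi, A, B = baum_welch(obs, n_states=2, n_symbols=3)
```

Per the abstract, the proposed method keeps this EM outer loop but, in the aggregate setting, the expected state and transition counts in the M-step come from population-level marginals computed by Sinkhorn belief propagation rather than from per-individual posteriors.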
Related papers
- Making Old Things New: A Unified Algorithm for Differentially Private Clustering [6.320600791108065]
We show that a 20-year-old algorithm can be slightly modified to work for any of these models.
This provides a unified picture: while matching almost all previously known results, it allows us to improve some of them.
arXiv Detail & Related papers (2024-06-17T15:31:53Z) - Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs)
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU)
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z) - Learning Hidden Markov Models Using Conditional Samples [72.20944611510198]
This paper is concerned with the computational complexity of learning the Hidden Markov Model (HMM)
In this paper, we consider an interactive access model, in which the algorithm can query for samples from the conditional distributions of the HMMs.
Specifically, we obtain efficient algorithms for learning HMMs in settings where we have query access to the exact conditional probabilities.
arXiv Detail & Related papers (2023-02-28T16:53:41Z) - Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z) - Geometric Learning of Hidden Markov Models via a Method of Moments Algorithm [11.338112397748619]
We present a novel algorithm for learning the parameters of hidden Markov models (HMMs) in a geometric setting.
We demonstrate through examples that the proposed learner achieves significantly improved speed and numerical accuracy compared to existing learners.
arXiv Detail & Related papers (2022-07-02T12:24:38Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - Inference of collective Gaussian hidden Markov models [8.348171150908724]
We consider inference problems for a class of continuous state collective hidden Markov models.
We propose an aggregate inference algorithm called collective Gaussian forward-backward algorithm.
arXiv Detail & Related papers (2021-07-24T17:49:01Z) - Filtering for Aggregate Hidden Markov Models with Continuous Observations [13.467017642143581]
We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM)
We propose an aggregate inference algorithm called continuous observation collective forward-backward algorithm.
arXiv Detail & Related papers (2020-11-04T20:05:36Z) - Incremental inference of collective graphical models [16.274397329511196]
In particular, we address the problem of estimating the aggregate marginals of a Markov chain from noisy aggregate observations.
We propose a sliding window Sinkhorn belief propagation (SW-SBP) algorithm that utilizes a sliding window filter of the most recent noisy aggregate observations.
arXiv Detail & Related papers (2020-06-26T15:04:31Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP)
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z) - CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923]
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
We evaluate the proposed algorithm, with self-supervised learning of the search strategy, on multi-homography estimation and demonstrate accuracy superior to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-08T17:37:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.