Inference of collective Gaussian hidden Markov models
- URL: http://arxiv.org/abs/2107.11662v1
- Date: Sat, 24 Jul 2021 17:49:01 GMT
- Title: Inference of collective Gaussian hidden Markov models
- Authors: Rahul Singh, Yongxin Chen
- Abstract summary: We consider inference problems for a class of continuous-state collective hidden Markov models.
We propose an aggregate inference algorithm called the collective Gaussian forward-backward algorithm.
- Score: 8.348171150908724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider inference problems for a class of continuous-state collective
hidden Markov models, where the data is recorded in aggregate (collective) form
generated by a large population of individuals following the same dynamics. We
propose an aggregate inference algorithm called the collective Gaussian
forward-backward algorithm, which extends the recently proposed Sinkhorn belief
propagation algorithm to models characterized by Gaussian densities. Our
algorithm enjoys a convergence guarantee. In addition, it reduces to the standard
Kalman filter when the observations are generated by a single individual. The
efficacy of the proposed algorithm is demonstrated through multiple
experiments.
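The abstract notes that the proposed collective Gaussian forward-backward algorithm reduces to the standard Kalman filter when the observations come from a single individual. For reference, below is a minimal sketch of that single-individual special case, assuming a generic linear-Gaussian model x_{t+1} = A x_t + w_t, y_t = C x_t + v_t with w_t ~ N(0, Q) and v_t ~ N(0, R); the matrices, function name, and example values are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, m0, P0):
    """Standard Kalman filter recursion: the single-individual special case
    that the collective Gaussian forward-backward algorithm reduces to.

    y: (T, p) array of observations; returns filtered means (T, n) and
    covariances (T, n, n), starting from the prior N(m0, P0)."""
    T, n = y.shape[0], m0.shape[0]
    means, covs = np.zeros((T, n)), np.zeros((T, n, n))
    m, P = m0, P0
    for t in range(T):
        # Predict: push the Gaussian belief through the linear dynamics.
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # Update: condition the prediction on the observation y[t].
        S = C @ P_pred @ C.T + R             # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
        m = m_pred + K @ (y[t] - C @ m_pred)
        P = (np.eye(n) - K @ C) @ P_pred
        means[t], covs[t] = m, P
    return means, covs

# Example: a scalar random walk observed in unit-variance noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(scale=0.1, size=100))
    y = (x + rng.normal(size=100)).reshape(-1, 1)
    means, _ = kalman_filter(y, A=np.eye(1), C=np.eye(1), Q=0.01 * np.eye(1),
                             R=np.eye(1), m0=np.zeros(1), P0=np.eye(1))
```

The collective version additionally has to handle aggregate observations from the whole population, which is where the Sinkhorn-type machinery referenced in the abstract comes in.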
Related papers
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm addresses a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z) - Bregman Power k-Means for Clustering Exponential Family Data [11.434503492579477]
We bridge algorithmic advances to classical work on hard clustering under Bregman divergences.
The elegant properties of Bregman divergences allow us to maintain closed form updates in a simple and transparent algorithm.
We conduct thorough empirical analyses on simulated experiments and a case study on rainfall data, finding that the proposed method outperforms existing peer methods in a variety of non-Gaussian data settings.
arXiv Detail & Related papers (2022-06-22T06:09:54Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - Correlation Clustering Reconstruction in Semi-Adversarial Models [70.11015369368272]
Correlation Clustering is an important clustering problem with many applications.
We study the reconstruction version of this problem in which one is seeking to reconstruct a latent clustering corrupted by random noise and adversarial modifications.
arXiv Detail & Related papers (2021-08-10T14:46:17Z) - Direct Measure Matching for Crowd Counting [59.66286603624411]
We propose a new measure-based counting approach that directly regresses the predicted density maps onto the scattered, point-annotated ground truth.
In this paper, we derive a semi-balanced form of Sinkhorn divergence, based on which a Sinkhorn counting loss is designed for measure matching.
arXiv Detail & Related papers (2021-07-04T06:37:33Z) - Learning Hidden Markov Models from Aggregate Observations [13.467017642143581]
We propose an algorithm for estimating the parameters of a time-homogeneous hidden Markov model from aggregate observations.
Our algorithm is built upon expectation-maximization and the recently proposed aggregate inference algorithm, the Sinkhorn belief propagation.
arXiv Detail & Related papers (2020-11-23T06:41:22Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Filtering for Aggregate Hidden Markov Models with Continuous Observations [13.467017642143581]
We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM).
We propose an aggregate inference algorithm called the continuous observation collective forward-backward algorithm.
arXiv Detail & Related papers (2020-11-04T20:05:36Z) - Incremental inference of collective graphical models [16.274397329511196]
In particular, we address the problem of estimating the aggregate marginals of a Markov chain from noisy aggregate observations.
We propose a sliding window Sinkhorn belief propagation (SW-SBP) algorithm that utilizes a sliding window filter of the most recent noisy aggregate observations.
arXiv Detail & Related papers (2020-06-26T15:04:31Z) - Inference with Aggregate Data: An Optimal Transport Approach [16.274397329511196]
We consider inference (filtering) problems over probabilistic graphical models with aggregate data generated by a large population of individuals.
We propose a new efficient belief propagation algorithm over tree-structured graphs with a global convergence guarantee (a sketch of the Sinkhorn scaling step underlying this line of work appears after this list).
arXiv Detail & Related papers (2020-03-31T03:12:20Z) - Generative Modeling with Denoising Auto-Encoders and Langevin Sampling [88.83704353627554]
We show that both denoising auto-encoders (DAE) and denoising score matching (DSM) provide estimates of the score of the smoothed population density.
We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
arXiv Detail & Related papers (2020-01-31T23:50:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
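Several of the papers above, as well as the collective Gaussian forward-backward algorithm itself, build on Sinkhorn belief propagation, whose core computational primitive is Sinkhorn's iterative scaling for entropy-regularized optimal transport. The sketch below illustrates only that generic scaling step (matching two fixed marginals under a placeholder cost matrix), not the aggregate inference algorithms themselves; all names, parameters, and values are illustrative assumptions.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iters=500):
    """Entropy-regularized optimal transport via Sinkhorn scaling.

    Finds a nonnegative coupling P with row marginal mu and column marginal
    nu that minimizes <C, P> - eps * H(P), by alternately rescaling the rows
    and columns of the Gibbs kernel exp(-C / eps)."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u, v = np.ones_like(mu), np.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (K @ v)                  # scale rows to match marginal mu
        v = nu / (K.T @ u)                # scale columns to match marginal nu
    return u[:, None] * K * v[None, :]    # the transport plan / coupling

# Example: couple two uniform marginals under a random cost matrix;
# the rows and columns of P then sum (approximately) to those marginals.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = sinkhorn(np.full(5, 0.2), np.full(5, 0.2), rng.random((5, 5)))
```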