Incremental inference of collective graphical models
- URL: http://arxiv.org/abs/2006.15035v1
- Date: Fri, 26 Jun 2020 15:04:31 GMT
- Title: Incremental inference of collective graphical models
- Authors: Rahul Singh, Isabel Haasler, Qinsheng Zhang, Johan Karlsson, Yongxin
Chen
- Abstract summary: We address the problem of estimating the aggregate marginals of a Markov chain from noisy aggregate observations.
We propose a sliding window Sinkhorn belief propagation (SW-SBP) algorithm that utilizes a sliding window filter of the most recent noisy aggregate observations.
- Score: 16.274397329511196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider incremental inference problems from aggregate data for collective
dynamics. In particular, we address the problem of estimating the aggregate
marginals of a Markov chain from noisy aggregate observations in an incremental
(online) fashion. We propose a sliding window Sinkhorn belief propagation
(SW-SBP) algorithm that utilizes a sliding window filter of the most recent
noisy aggregate observations along with encoded information from discarded
observations. Our algorithm is built upon the recently proposed multi-marginal
optimal transport based SBP algorithm that leverages standard belief
propagation and Sinkhorn algorithm to solve inference problems from aggregate
data. We demonstrate the performance of our algorithm on applications such as
inferring population flow from aggregate observations.
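At the core of SBP-style methods is Sinkhorn scaling: alternately rescaling the rows and columns of a positive kernel until the resulting coupling matches the observed aggregate marginals. A minimal sketch under illustrative assumptions (the function name, toy kernel, and marginals below are made up for exposition, not the paper's implementation):

```python
import numpy as np

def sinkhorn_projection(K, mu, nu, n_iter=200):
    """Alternately rescale a positive kernel K so the resulting coupling
    has row sums mu and column sums nu (the aggregate marginals)."""
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)       # fix row sums
        v = nu / (K.T @ u)     # fix column sums
    return u[:, None] * K * v[None, :]

# Toy example: project a 3-state transition kernel onto observed marginals.
K = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
mu = np.array([0.5, 0.3, 0.2])   # aggregate marginal at time t
nu = np.array([0.3, 0.4, 0.3])   # aggregate marginal at time t+1
P = sinkhorn_projection(K, mu, nu)
```

The returned coupling `P` estimates the joint aggregate distribution over consecutive time steps consistent with both observed marginals.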
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Optimal Clustering with Bandit Feedback [57.672609011609886]
This paper considers the problem of online clustering with bandit feedback.
It includes a novel stopping rule for sequential testing that circumvents the need to solve any NP-hard weighted clustering problem as a subroutine.
We show through extensive simulations on synthetic and real-world datasets that BOC's performance matches the lower bound asymptotically and significantly outperforms a non-adaptive baseline algorithm.
arXiv Detail & Related papers (2022-02-09T06:05:05Z)
- Correlation Clustering Reconstruction in Semi-Adversarial Models [70.11015369368272]
Correlation Clustering is an important clustering problem with many applications.
We study the reconstruction version of this problem in which one is seeking to reconstruct a latent clustering corrupted by random noise and adversarial modifications.
arXiv Detail & Related papers (2021-08-10T14:46:17Z)
- Inference of collective Gaussian hidden Markov models [8.348171150908724]
We consider inference problems for a class of continuous state collective hidden Markov models.
We propose an aggregate inference algorithm called collective Gaussian forward-backward algorithm.
arXiv Detail & Related papers (2021-07-24T17:49:01Z)
- SNIPS: Solving Noisy Inverse Problems Stochastically [25.567566997688044]
We introduce a novel algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem.
Our solution incorporates ideas from Langevin dynamics and Newton's method, and exploits a pre-trained minimum mean squared error (MMSE) denoiser.
We show that the samples produced are sharp, detailed and consistent with the given measurements, and their diversity exposes the inherent uncertainty in the inverse problem being solved.
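As a rough illustration of the Langevin-dynamics component this line of work builds on, here is a generic unadjusted Langevin sampler on a toy log-density (not the SNIPS algorithm itself; the function name and target are hypothetical):

```python
import numpy as np

def langevin_chain(score, x0, step=0.01, n_steps=1000, rng=None):
    """Unadjusted Langevin: x <- x + step * score(x) + sqrt(2*step) * noise.
    When score(x) = grad log p(x), iterates approach samples from p."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy target: standard normal, whose score is -x; run 5000 parallel chains.
samples = langevin_chain(lambda x: -x, np.zeros(5000))
```

Posterior sampling methods like SNIPS replace the analytic score with one supplied by a pre-trained denoiser and condition it on the measurements.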
arXiv Detail & Related papers (2021-05-31T13:33:21Z)
- Learning Hidden Markov Models from Aggregate Observations [13.467017642143581]
We propose an algorithm for estimating the parameters of a time-homogeneous hidden Markov model from aggregate observations.
Our algorithm is built upon expectation-maximization and the recently proposed aggregate inference algorithm, the Sinkhorn belief propagation.
arXiv Detail & Related papers (2020-11-23T06:41:22Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Filtering for Aggregate Hidden Markov Models with Continuous Observations [13.467017642143581]
We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM).
We propose an aggregate inference algorithm called continuous observation collective forward-backward algorithm.
arXiv Detail & Related papers (2020-11-04T20:05:36Z)
- Inference with Aggregate Data: An Optimal Transport Approach [16.274397329511196]
We consider inference (filtering) problems over probabilistic graphical models with aggregate data generated by a large population of individuals.
We propose a new efficient belief propagation algorithm over tree-structured graphs with favorable computational complexity as well as a global convergence guarantee.
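For orientation, the standard sum-product belief propagation that SBP combines with Sinkhorn iterations computes exact marginals on trees by passing messages along edges. A toy 3-node chain with made-up potentials (not the paper's algorithm, just the classical building block):

```python
import numpy as np

# Sum-product on a 3-node chain x1 - x2 - x3 with uniform leaf
# potentials `phi` and the same pairwise potential `Psi` on both edges.
Psi = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.5, 0.3],
                [0.1, 0.2, 0.7]])
phi = np.ones(3) / 3.0

m12 = Psi.T @ phi            # message from leaf x1 into x2: sum_i phi[i] Psi[i, j]
m32 = Psi @ phi              # message from leaf x3 into x2: sum_k Psi[j, k] phi[k]
belief2 = m12 * m32          # belief at x2 = product of incoming messages
belief2 /= belief2.sum()     # normalize to get the marginal of x2
```

On a tree, one inward and one outward sweep of such messages yields every node's marginal; SBP augments the messages to enforce consistency with aggregate observations.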
arXiv Detail & Related papers (2020-03-31T03:12:20Z)
- Generative Modeling with Denoising Auto-Encoders and Langevin Sampling [88.83704353627554]
We show that both DAE and DSM provide estimates of the score of the smoothed population density.
We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
arXiv Detail & Related papers (2020-01-31T23:50:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.