Forward-Backward Latent State Inference for Hidden Continuous-Time
semi-Markov Chains
- URL: http://arxiv.org/abs/2210.09058v1
- Date: Mon, 17 Oct 2022 13:01:14 GMT
- Authors: Nicolai Engelmann, Heinz Koeppl
- Abstract summary: We show that non-sampling-based latent state inference used in HSMMs can be generalized to latent continuous-time semi-Markov chains (CTSMCs).
We formulate integro-differential forward and backward equations adjusted to the observation likelihood and introduce an exact integral equation for the Bayesian posterior marginals.
We evaluate our approaches in latent state inference scenarios in comparison to classical HSMMs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hidden semi-Markov models (HSMMs), while broadly in use, are restricted to
a discrete and uniform time grid. They are thus not well suited to explain
often irregularly spaced discrete event data from continuous-time phenomena. We
show that non-sampling-based latent state inference used in HSMMs can be
generalized to latent continuous-time semi-Markov chains (CTSMCs). We
formulate integro-differential forward and backward equations adjusted to the
observation likelihood and introduce an exact integral equation for the
Bayesian posterior marginals and a scalable Viterbi-type algorithm for
posterior path estimates. The presented equations can be efficiently solved
using well-known numerical methods. As a practical tool, variable-step HSMMs
are introduced. We evaluate our approaches in latent state inference scenarios
in comparison to classical HSMMs.
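To make the baseline concrete, the following is a minimal sketch of the classical discrete-time, explicit-duration HSMM forward recursion that the paper generalizes to continuous time. It is an illustration only: the function name `hsmm_forward`, the zero-diagonal transition convention, and the simplified boundary handling (conditioning on a segment ending exactly at the final step) are our assumptions, not the paper's CTSMC method.

```python
import numpy as np

def hsmm_forward(pi, A, dur, B, obs):
    """Forward pass for an explicit-duration HSMM on a uniform time grid.

    pi  : (K,)   initial state distribution
    A   : (K, K) transition matrix with zero diagonal (self-transitions
                 are modeled by the duration distribution, not by A)
    dur : (K, D) duration pmf; dur[j, d-1] = P(state j lasts d steps)
    B   : (K, M) emission matrix; B[j, o] = P(observation o | state j)
    obs : (T,)   observation indices
    Returns the log-likelihood of the observation sequence.
    """
    T = len(obs)
    K, D = dur.shape
    # alpha[t, j]: probability that a segment in state j ends at time t
    # (t = 1..T) jointly with the observations o_1..o_t.
    alpha = np.zeros((T + 1, K))
    for t in range(1, T + 1):
        for j in range(K):
            for d in range(1, min(D, t) + 1):
                # likelihood of emitting obs[t-d:t] from state j
                emit = np.prod(B[j, obs[t - d:t]])
                if t - d == 0:
                    prev = pi[j]                   # segment opens the sequence
                else:
                    prev = alpha[t - d] @ A[:, j]  # enter j from another state
                alpha[t, j] += prev * dur[j, d - 1] * emit
    # Simplification: sum over states whose segment ends exactly at T.
    return np.log(alpha[T].sum())
```

With degenerate duration distributions of length one, the recursion collapses to the ordinary HMM forward algorithm; the cubic loop over (t, j, d) is what variable-step and continuous-time formulations aim to avoid on irregularly spaced data.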
Related papers
- Convergence Conditions of Online Regularized Statistical Learning in Reproducing Kernel Hilbert Space With Non-Stationary Data (arXiv, 2024-04-04)
  We study the convergence of regularized learning algorithms in the reproducing kernel Hilbert space (RKHS) with dependent and non-stationary online data streams. For independent and non-identically distributed data streams, the algorithm achieves mean square consistency.
- Online Variational Sequential Monte Carlo (arXiv, 2023-12-19)
  We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference. Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
- Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure for Hidden Markov Models (arXiv, 2023-05-29)
  Decoding the original signal (i.e., the hidden chain) from noisy observations is one of the main goals in nearly all HMM-based data analyses. We present Quick Adaptive Ternary Segmentation (QATS), a divide-and-conquer procedure which decodes the hidden sequence in polylogarithmic computational complexity.
- Estimating Latent Population Flows from Aggregated Data via Inversing Multi-Marginal Optimal Transport (arXiv, 2022-12-30)
  We study the problem of estimating latent population flows from aggregated count data. This problem arises when individual trajectories are not available due to privacy issues or measurement fidelity. We propose to estimate the transition flows from aggregated data by learning the cost functions of the multi-marginal optimal transport (MOT) framework.
- Langevin Monte Carlo for Contextual Bandits (arXiv, 2022-06-22)
  Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to sample directly from the posterior distribution in contextual bandits. We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
- Markov Chain Monte Carlo for Continuous-Time Switching Dynamical Systems (arXiv, 2022-05-18)
  We propose a novel inference algorithm utilizing a Markov chain Monte Carlo approach. The presented Gibbs sampler efficiently obtains samples from the exact continuous-time posterior processes.
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown (arXiv, 2022-03-12)
  We consider the general problem of learning an HMM from data with unknown missing observation locations. We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain. We show that under proper specifications one can reconstruct the process dynamics as well as if the positions of the missing observations were known.
- Online Time Series Anomaly Detection with State Space Gaussian Processes (arXiv, 2022-01-18)
  R-ssGPFA is an unsupervised online anomaly detection model for uni- and multivariate time series. For high-dimensional time series, we propose an extension of Gaussian process factor analysis to identify the common latent processes of the time series. Our model's robustness is improved by skipping Kalman updates when encountering anomalous observations.
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes (arXiv, 2021-01-16)
  We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining. This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order differential equations.
- Filtering for Aggregate Hidden Markov Models with Continuous Observations (arXiv, 2020-11-04)
  We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM). We propose an aggregate inference algorithm called the continuous observation collective forward-backward algorithm.
This list is automatically generated from the titles and abstracts of the papers in this site.