Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure For
Hidden Markov Models
- URL: http://arxiv.org/abs/2305.18578v1
- Date: Mon, 29 May 2023 19:37:48 GMT
- Title: Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure For
Hidden Markov Models
- Authors: Alexandre Mösching, Housen Li, Axel Munk
- Abstract summary: Decoding the original signal (i.e., hidden chain) from the noisy observations is one of the main goals in nearly all HMM-based data analyses.
We present Quick Adaptive Ternary Segmentation (QATS), a divide-and-conquer procedure which decodes the hidden sequence with computational complexity polylogarithmic in the sequence length.
- Score: 70.26374282390401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hidden Markov models (HMMs) are characterized by an unobservable (hidden)
Markov chain and an observable process, which is a noisy version of the hidden
chain. Decoding the original signal (i.e., hidden chain) from the noisy
observations is one of the main goals in nearly all HMM-based data analyses.
Existing decoding algorithms such as the Viterbi algorithm have computational
complexity at best linear in the length of the observed sequence, and
sub-quadratic in the size of the state space of the Markov chain. We present
Quick Adaptive Ternary Segmentation (QATS), a divide-and-conquer procedure
which decodes the hidden sequence in polylogarithmic computational complexity
in the length of the sequence, and cubic in the size of the state space, hence
particularly suited for large-scale HMMs with relatively few states. The
procedure also suggests an effective way of data storage as specific cumulative
sums. In essence, the estimated sequence of states sequentially maximizes local
likelihood scores among all local paths with at most three segments. The
maximization is performed only approximately using an adaptive search
procedure. The resulting sequence is admissible in the sense that all
transitions occur with positive probability. To complement formal results
justifying our approach, we present Monte-Carlo simulations which demonstrate
the speedups provided by QATS in comparison to Viterbi, along with a precision
analysis of the returned sequences. An implementation of QATS in C++ is
provided in the R-package QATS and is available from GitHub.
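For context on the baseline being compared against, the standard Viterbi decoder finds the most likely state sequence by dynamic programming in O(T * K^2) time, where T is the sequence length and K the number of states. The sketch below is a generic NumPy implementation for illustration only; variable names are assumptions and the code is not taken from the QATS package.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emission):
    """Generic Viterbi decoder (illustrative, not from the QATS package).
    log_init: (K,) log initial probabilities.
    log_trans: (K, K) log transition probabilities.
    log_emission: (T, K) log emission densities log p(y_t | x_t = k).
    Returns the most likely hidden state sequence as a length-T integer array."""
    T, K = log_emission.shape
    delta = log_init + log_emission[0]           # best log-score of paths ending in each state
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # scores[i, j]: come from state i into state j
        backptr[t] = np.argmax(scores, axis=0)
        delta = scores[backptr[t], np.arange(K)] + log_emission[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):                # backtrack through the stored argmax pointers
        path[t - 1] = backptr[t, path[t]]
    return path
```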
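The "specific cumulative sums" and the local scores over paths with at most three segments can be illustrated as follows. This is only a sketch reconstructed from the abstract, not the actual QATS procedure: it shows how per-state cumulative sums of log emission probabilities turn the emission score of any constant-state segment into an O(1) lookup, so that a candidate path with at most three segments is scored in constant time. The adaptive search over segment boundaries that QATS performs is not reproduced here, and all names and toy inputs are illustrative assumptions.

```python
import numpy as np

def emission_cumsums(log_emission):
    """log_emission: (T, K) array with log p(y_t | x_t = k).
    Returns S of shape (T + 1, K) with S[t, k] = sum of the first t rows,
    so any constant-state segment score is a difference of two entries."""
    T, K = log_emission.shape
    S = np.zeros((T + 1, K))
    S[1:] = np.cumsum(log_emission, axis=0)
    return S

def segment_score(S, a, b, k):
    """Emission log-likelihood of staying in state k on the half-open window [a, b)."""
    return S[b, k] - S[a, k]

def ternary_path_score(S, log_trans, cuts, states):
    """Score a local path with at most three constant segments.
    cuts = (a, c1, c2, b) with a <= c1 <= c2 <= b, states = (k1, k2, k3).
    Empty segments are dropped and contribute no transition penalty."""
    a, c1, c2, b = cuts
    segs = [(s, e, k) for (s, e, k) in
            [(a, c1, states[0]), (c1, c2, states[1]), (c2, b, states[2])] if s < e]
    score = sum(segment_score(S, s, e, k) for s, e, k in segs)
    score += sum(log_trans[segs[i][2], segs[i + 1][2]] for i in range(len(segs) - 1))
    return score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K = 200, 3
    log_emission = rng.normal(size=(T, K))        # placeholder log emission densities
    log_trans = np.log(np.full((K, K), 1.0 / K))  # placeholder uniform transition matrix
    S = emission_cumsums(log_emission)
    print(ternary_path_score(S, log_trans, (0, 70, 140, T), (0, 1, 2)))
```

Because each candidate path is scored from the precomputed sums, the evaluation cost does not grow with the window length, which is what makes a search over candidate cut points that is sublinear in the sequence length plausible.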
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state spaces.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Learning Hidden Markov Models Using Conditional Samples [72.20944611510198]
This paper is concerned with the computational complexity of learning hidden Markov models (HMMs).
We consider an interactive access model, in which the algorithm can query for samples from the conditional distributions of the HMM.
Specifically, we obtain efficient algorithms for learning HMMs in settings where we have query access to the exact conditional probabilities.
arXiv Detail & Related papers (2023-02-28T16:53:41Z)
- Efficiency Ordering of Stochastic Gradient Descent [9.634481296779057]
We consider the stochastic gradient descent (SGD) algorithm driven by a general sampling sequence, including i.i.d. noise and random walk on an arbitrary graph.
We employ the notion of 'efficiency ordering', a well-analyzed tool for comparing the performance of Markov Chain Monte Carlo samplers.
arXiv Detail & Related papers (2022-09-15T16:50:55Z)
- MrSQM: Fast Time Series Classification with Symbolic Representations [11.853438514668207]
MrSQM uses multiple symbolic representations and efficient sequence mining to extract important time series features.
We study four feature selection approaches on symbolic sequences, ranging from fully supervised, to unsupervised and hybrids.
Our experiments on 112 datasets of the UEA/UCR benchmark demonstrate that MrSQM can quickly extract useful features.
arXiv Detail & Related papers (2021-09-02T15:54:46Z)
- Composably secure data processing for Gaussian-modulated continuous variable quantum key distribution [58.720142291102135]
Continuous-variable quantum key distribution (QKD) employs the quadratures of a bosonic mode to establish a secret key between two remote parties.
We consider a protocol with homodyne detection in the general setting of composable finite-size security.
In particular, we analyze the high signal-to-noise regime which requires the use of high-rate (non-binary) low-density parity check codes.
arXiv Detail & Related papers (2021-03-30T18:02:55Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- MAP segmentation in Bayesian hidden Markov models: a case study [0.0]
We consider the problem of estimating the maximum posterior probability (MAP) state sequence for a finite-state, finite-emission-alphabet hidden Markov model (HMM).
The main goal of the paper is to test the Bayesian setup against the frequentist one, where the parameters of HMM are estimated using the training data.
arXiv Detail & Related papers (2020-04-17T16:42:18Z)
- Scalable Hybrid HMM with Gaussian Process Emission for Sequential Time-series Data Clustering [13.845932997326571]
A Hidden Markov Model (HMM) combined with a Gaussian Process (GP) emission can be used effectively to estimate the hidden states.
This paper proposes a scalable learning method for HMM-GPSM.
arXiv Detail & Related papers (2020-01-07T07:28:21Z)