Probability Bracket Notation: Markov Sequence Projector of Visible and Hidden Markov Models in Dynamic Bayesian Networks
- URL: http://arxiv.org/abs/1212.3817v2
- Date: Wed, 19 Feb 2025 21:09:18 GMT
- Title: Probability Bracket Notation: Markov Sequence Projector of Visible and Hidden Markov Models in Dynamic Bayesian Networks
- Authors: Xing M. Wang
- Abstract summary: We introduce the Markov Sequence Projector (MSP) to expand the evolution formula of Homogeneous Markov Chains (HMCs). In a Hidden Markov Model (HMM), the probability basis (P-basis) of the hidden Markov state sequence and the P-basis of the observation sequence exist in the sequential event space. The Viterbi algorithm is applied to the famous Weather-Stone HMM example to determine the most likely weather-state sequence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the symbolic framework of Probability Bracket Notation (PBN), the Markov Sequence Projector (MSP) is introduced to expand the evolution formula of Homogeneous Markov Chains (HMCs). The well-known weather example, a Visible Markov Model (VMM), illustrates that the full joint probability of a VMM corresponds to a specifically projected Markov state sequence in the expanded evolution formula. In a Hidden Markov Model (HMM), the probability basis (P-basis) of the hidden Markov state sequence and the P-basis of the observation sequence exist in the sequential event space. The full joint probability of an HMM is the product of the (unknown) projected hidden sequence of Markov states and their transformations into the observation P-bases. The Viterbi algorithm is applied to the famous Weather-Stone HMM example to determine the most likely weather-state sequence given the observed stone-state sequence. Our results are verified using the Elvira software package. Using the PBN, we unify the evolution formulas for Markov models such as VMMs, HMMs, and factorial HMMs (with discrete time). We also briefly investigate the extended HMM, addressing the feedback issue, and the continuous-time VMM and HMM (with discrete or continuous states). All these models are subclasses of Dynamic Bayesian Networks (DBNs), which are essential for Machine Learning (ML) and Artificial Intelligence (AI).
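To make the HMM factorization and the Viterbi decoding step in the abstract concrete: for hidden states s_1..s_T and observations o_1..o_T, the full joint probability is P(s_1) * prod_{t=2..T} P(s_t | s_{t-1}) * prod_{t=1..T} P(o_t | s_t), and the Viterbi algorithm maximizes this product over all hidden sequences. The Python sketch below is a minimal illustration of that decoding on a tiny weather/stone-style HMM; the state names and every probability value are assumed placeholders, not the parameters of the paper's Weather-Stone example or of the Elvira verification.

```python
import numpy as np

# Illustrative (assumed) parameters for a tiny Weather-Stone style HMM.
# Hidden weather states and observable stone states; all numbers are placeholders.
states = ["Sunny", "Rainy"]
observations = ["dry", "damp", "wet"]

pi = np.array([0.6, 0.4])            # P(s_1)
A = np.array([[0.7, 0.3],            # P(s_t | s_{t-1})
              [0.4, 0.6]])
B = np.array([[0.6, 0.3, 0.1],       # P(o_t | s_t)
              [0.1, 0.4, 0.5]])

def viterbi(obs_idx, pi, A, B):
    """Return the most likely hidden-state path and its joint probability."""
    T, N = len(obs_idx), len(pi)
    delta = np.zeros((T, N))              # best path probability ending in state j at time t
    psi = np.zeros((T, N), dtype=int)     # back-pointers to the best previous state

    delta[0] = pi * B[:, obs_idx[0]]
    for t in range(1, T):
        trans = delta[t - 1][:, None] * A     # trans[i, j] = delta[t-1, i] * P(j | i)
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs_idx[t]]

    # Backtrack the most probable path.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    path.reverse()
    return path, float(delta[-1].max())

obs_seq = ["dry", "damp", "wet"]          # an assumed observed stone sequence
obs_idx = [observations.index(o) for o in obs_seq]
path, prob = viterbi(obs_idx, pi, A, B)
print([states[s] for s in path], prob)
```

Running the sketch prints the most probable weather path for the assumed stone observations, i.e., the projected hidden Markov sequence whose joint probability the PBN evolution formula expresses.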
Related papers
- Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure For Hidden Markov Models [70.26374282390401]
Decoding the original signal (i.e., hidden chain) from the noisy observations is one of the main goals in nearly all HMM based data analyses.
We present Quick Adaptive Ternary (QATS), a divide-and-conquer procedure which decodes the hidden sequence in polylogarithmic computational complexity.
arXiv Detail & Related papers (2023-05-29T19:37:48Z)
- Learning Hidden Markov Models Using Conditional Samples [72.20944611510198]
This paper is concerned with the computational complexity of learning the Hidden Markov Model (HMM).
In this paper, we consider an interactive access model, in which the algorithm can query for samples from the conditional distributions of the HMMs.
Specifically, we obtain efficient algorithms for learning HMMs in settings where we have query access to the exact conditional probabilities.
arXiv Detail & Related papers (2023-02-28T16:53:41Z)
- Markov Observation Models [0.0]
The Hidden Markov Model is expanded to allow for Markov chain observations.
The observations are assumed to be a Markov chain whose one step transition probabilities depend upon the hidden Markov chain.
An Expectation-Maximization algorithm is developed to estimate the transition probabilities for both the hidden state and for the observations.
arXiv Detail & Related papers (2022-08-12T16:53:07Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
The proposed DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown [54.40592050737724]
We consider the general problem of learning an HMM from data with unknown missing observation locations.
We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain.
We show that under proper specifications one can reconstruct the process dynamics as well as if the positions of the missing observations were known.
arXiv Detail & Related papers (2022-03-12T22:40:43Z)
- Fitting Sparse Markov Models to Categorical Time Series Using Regularization [0.0]
A more general approach is the Sparse Markov Model (SMM), where all possible histories of order $m$ form a partition.
We develop an elegant method of fitting SMMs using convex clustering, which involves regularization.
We apply this method to classify genome sequences, obtained from individuals affected by different viruses.
arXiv Detail & Related papers (2022-02-11T07:27:16Z)
- Learning Circular Hidden Quantum Markov Models: A Tensor Network Approach [34.77250498401055]
We show that c-HQMMs are equivalent to a constrained tensor network.
This equivalence enables us to provide an efficient learning model for c-HQMMs.
The proposed learning approach is evaluated on six real datasets.
arXiv Detail & Related papers (2021-10-29T23:09:31Z)
- Complex Event Forecasting with Prediction Suffix Trees: Extended Technical Report [70.7321040534471]
Complex Event Recognition (CER) systems have become popular in the past two decades due to their ability to "instantly" detect patterns on real-time streams of events.
There is a lack of methods for forecasting when a pattern might occur before such an occurrence is actually detected by a CER engine.
We present a formal framework that attempts to address the issue of Complex Event Forecasting.
arXiv Detail & Related papers (2021-09-01T09:52:31Z)
- BERTifying the Hidden Markov Model for Multi-Source Weakly Supervised Named Entity Recognition [57.2201011783393]
The conditional hidden Markov model (CHMM) predicts token-wise transition and emission probabilities from the BERT embeddings of the input tokens.
A BERT-based NER model is then fine-tuned with the labels inferred by the CHMM.
arXiv Detail & Related papers (2021-05-26T21:18:48Z)
- Robust Classification using Hidden Markov Models and Mixtures of Normalizing Flows [25.543231171094384]
We use a generative model that combines the state transitions of a hidden Markov model (HMM) with neural-network-based probability distributions for the hidden states of the HMM.
We verify the improved robustness of NMM-HMM classifiers in an application to speech recognition.
arXiv Detail & Related papers (2021-02-15T00:40:30Z)
- DenseHMM: Learning Hidden Markov Models by Learning Dense Representations [0.0]
We propose a modification of Hidden Markov Models (HMMs) that allows learning dense representations of both the hidden states and the observables.
Compared to the standard HMM, transition probabilities are not atomic but composed of these representations via kernelization.
The properties of the DenseHMM like learned co-occurrences and log-likelihoods are studied empirically on synthetic and biomedical datasets.
arXiv Detail & Related papers (2020-12-17T17:48:27Z)
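To make the kernelized parameterization described in the DenseHMM entry above more concrete: one plausible way to compose transition and emission probabilities from dense representations is a softmax over inner products of learned embedding vectors. The sketch below is only an assumed illustration of that idea; the embedding sizes, names, and the dot-product/softmax choice are ours and need not match the paper's exact kernelization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_obs, dim = 3, 4, 2      # assumed sizes, for illustration only

# Dense representations (embeddings) of hidden states and observables.
u = rng.normal(size=(n_states, dim))   # "outgoing" state embeddings
v = rng.normal(size=(n_states, dim))   # "incoming" state embeddings
w = rng.normal(size=(n_obs, dim))      # observable embeddings

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Transition and emission matrices are not free ("atomic") parameters but are
# composed from the embeddings via a kernel (assumed here: dot product + softmax).
A = softmax(u @ v.T, axis=1)   # A[i, j] = P(s_t = j | s_{t-1} = i)
B = softmax(u @ w.T, axis=1)   # B[i, k] = P(o_t = k | s_t = i)

print(A.round(3))
print(B.sum(axis=1))           # each row sums to 1, as required of probabilities
```

In such a parameterization, learning updates the embeddings rather than the probability tables themselves, which is the sense in which the representations, not the probabilities, are the model's parameters.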