Probabilistic Systems with Hidden State and Unobservable Transitions
- URL: http://arxiv.org/abs/2205.13871v1
- Date: Fri, 27 May 2022 10:06:04 GMT
- Title: Probabilistic Systems with Hidden State and Unobservable Transitions
- Authors: Rebecca Bernemann, Barbara König, Matthias Schaffeld, Torben Weis
- Abstract summary: We consider probabilistic systems with hidden state and unobservable transitions.
We present an algorithm for determining the most probable explanation given an observation.
We also present a method for parameter learning that adapts the probabilities of a given model based on an observation.
- Score: 5.124254186899053
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We consider probabilistic systems with hidden state and unobservable
transitions, an extension of Hidden Markov Models (HMMs) that in particular
admits unobservable ε-transitions (also called null transitions), allowing
state changes of which the observer is unaware. Due to the presence of ε-loops,
this additional feature complicates the theory and requires carefully setting
up the corresponding probability space and random variables. In particular, we
present an algorithm for determining the most probable explanation given an
observation (a generalization of the Viterbi algorithm for HMMs) and a method
for parameter learning that adapts the probabilities of a given model based on
an observation (a generalization of the Baum-Welch algorithm). The latter
algorithm guarantees that the given observation has a higher (or equal)
probability after the parameters have been adjusted, and its correctness can be
derived directly from the so-called EM algorithm.
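To make the setting concrete, the sketch below shows a max-product (Viterbi-style) decoding pass for an HMM extended with unobservable ε-transitions. It is a minimal illustration under assumptions made for the example, not the paper's algorithm: the matrices pi, T, B, E, the example values, and the fixed-point treatment of ε-loops are all hypothetical, and the paper's careful construction of the probability space is not reproduced here.

```python
# Minimal sketch (assumed example, not the paper's construction) of
# Viterbi-style decoding for an HMM extended with unobservable
# epsilon-transitions. All names (pi, T, B, E) are hypothetical.
import numpy as np


def eps_closure_max(delta, E):
    """Propagate max-product scores along epsilon-transitions.

    Since every entry of E is <= 1, following an epsilon-loop can only
    lower a path's score, so iterating the update to a fixed point (at
    most n rounds) suffices for the most-probable-explanation problem.
    Handling epsilon-loops in the probability semantics, where infinitely
    many runs contribute, is the subtlety the paper addresses.
    """
    n = len(delta)
    for _ in range(n):
        delta = np.maximum(delta, (delta[:, None] * E).max(axis=0))
    return delta


def viterbi_eps_score(pi, T, B, E, obs):
    """Score of the most probable run explaining the observation `obs`.

    pi  : (n,)   initial distribution over hidden states
    T   : (n, n) observable transition probabilities
    B   : (n, m) emission probabilities, B[state, symbol]
    E   : (n, n) epsilon (unobservable) transition probabilities
    obs : sequence of observed symbol indices
    Backpointers would be recorded alongside delta to recover the
    explanation itself; they are omitted to keep the sketch short.
    """
    delta = eps_closure_max(np.asarray(pi, dtype=float), E)
    for o in obs:
        step = (delta[:, None] * T).max(axis=0) * B[:, o]  # move + emit
        delta = eps_closure_max(step, E)                    # silent moves
    return delta.max()


if __name__ == "__main__":
    # Two hidden states, two symbols, one epsilon-transition from state 0 to 1.
    pi = np.array([1.0, 0.0])
    T = np.array([[0.6, 0.2], [0.3, 0.7]])
    B = np.array([[0.9, 0.1], [0.2, 0.8]])
    E = np.array([[0.0, 0.2], [0.0, 0.0]])
    print(viterbi_eps_score(pi, T, B, E, [0, 1, 1]))
```

A generalized Baum-Welch update would presumably interleave ε-handling into the forward-backward recursions in a similar spirit before the usual re-estimation step; as the abstract notes, the guarantee that the observation's probability does not decrease follows from the EM view of that procedure.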
Related papers
- Learning Algorithms for Verification of Markov Decision Processes [20.5951492453299]
We present a general framework for applying learning algorithms to the verification of Markov decision processes (MDPs).
The presented framework focuses on probabilistic reachability, which is a core problem in verification.
arXiv Detail & Related papers (2024-03-14T08:54:19Z)
- Probability vector representation of the Schrödinger equation and Leggett-Garg-type experiments [0.0]
Leggett-Garg inequalities place bounds on the temporal correlations of a system based on the principles of macroscopic realism.
We propose a scheme to describe the dynamics of generic $N$-level quantum systems via a probability vector representation of the Schrödinger equation.
We also define a precise notion of no-signaling in time (NSIT) for the probability distributions of noncommuting observables.
arXiv Detail & Related papers (2023-12-26T19:00:00Z)
- Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure for Hidden Markov Models [70.26374282390401]
Decoding the original signal (i.e., hidden chain) from the noisy observations is one of the main goals in nearly all HMM based data analyses.
We present Quick Adaptive Ternary Segmentation (QATS), a divide-and-conquer procedure which decodes the hidden sequence in polylogarithmic computational complexity.
arXiv Detail & Related papers (2023-05-29T19:37:48Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Markov Observation Models [0.0]
The Hidden Markov Model is expanded to allow for Markov chain observations.
The observations are assumed to be a Markov chain whose one-step transition probabilities depend upon the hidden Markov chain.
An Expectation-Maximization algorithm is developed to estimate the transition probabilities for both the hidden state and for the observations.
arXiv Detail & Related papers (2022-08-12T16:53:07Z)
- Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings [97.12538243736705]
We study reinforcement learning with function approximation for large-scale Partially Observable Markov Decision Processes (POMDPs).
Our algorithm provably scales to large-scale POMDPs.
arXiv Detail & Related papers (2022-06-24T05:13:35Z)
- Scalable Stochastic Parametric Verification with Stochastic Variational Smoothed Model Checking [1.5293427903448025]
Smoothed model checking (smMC) aims at inferring the satisfaction function over the entire parameter space from a limited set of observations.
In this paper, we exploit recent advances in probabilistic machine learning to push this limitation forward.
We compare the performances of smMC against those of SV-smMC by looking at the scalability, the computational efficiency and the accuracy of the reconstructed satisfaction function.
arXiv Detail & Related papers (2022-05-11T10:43:23Z)
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown [54.40592050737724]
We consider the general problem of learning an HMM from data with unknown missing observation locations.
We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain.
We show that under proper specifications one can reconstruct the process dynamics as well as if the positions of the missing observations were known.
arXiv Detail & Related papers (2022-03-12T22:40:43Z)
- Identification and Adaptation with Binary-Valued Observations under Non-Persistent Excitation Condition [1.6897716547971817]
We propose an online projected Quasi-Newton-type algorithm for parameter estimation in regression models with binary-valued observations.
We establish the strong consistency of the estimation algorithm and provide the convergence rate.
Convergence of adaptive predictors and their applications in adaptive control are also discussed.
arXiv Detail & Related papers (2021-07-08T03:57:50Z)
- Sinkhorn Natural Gradient for Generative Models [125.89871274202439]
We propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
arXiv Detail & Related papers (2020-11-09T02:51:17Z)
- Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop the credal sentential decision diagrams, a generalisation of their probabilistic counterpart that allows for replacing the local probabilities with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
arXiv Detail & Related papers (2020-08-19T16:04:34Z)