Algebraic Reduction of Hidden Markov Models
- URL: http://arxiv.org/abs/2208.05968v2
- Date: Fri, 19 May 2023 09:43:26 GMT
- Title: Algebraic Reduction of Hidden Markov Models
- Authors: Tommaso Grigoletto and Francesco Ticozzi
- Abstract summary: We propose two algorithms that return coarse-grained models which exactly reproduce, respectively, the single-time and the full multi-time distributions of a given output process.
The reduction method exploits not only the structure of the observed output, but also its initial condition.
Optimal algorithms are derived for a class of HMMs, namely observable ones.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of reducing a Hidden Markov Model (HMM) to one of smaller
dimension that exactly reproduces the same marginals is tackled by using a
system-theoretic approach. Realization theory tools are extended to HMMs by
leveraging suitable algebraic representations of probability spaces. We propose
two algorithms that return coarse-grained equivalent HMMs obtained by
stochastic projection operators: the first returns models that exactly
reproduce the single-time distribution of a given output process, while in the
second the full (multi-time) distribution is preserved. The reduction method
exploits not only the structure of the observed output, but also its initial
condition, whenever the latter is known or belongs to a given subclass. Optimal
algorithms are derived for a class of HMMs, namely observable ones.
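To make the coarse-graining idea concrete, here is a minimal numpy sketch (an illustration of lumping by a stochastic projection, not the paper's algebraic algorithm): a toy HMM whose hidden states 0 and 1 are lumpable is reduced by a projection pair (W, V), and the reduced model is checked to reproduce the single-time output distribution. The matrices, the partition, and the uniform averaging weights are all illustrative assumptions.

```python
import numpy as np

# Toy HMM: 3 hidden states, 2 output symbols. States 0 and 1 are built
# to be lumpable: their transition rows have identical block sums and
# their emission rows coincide, so merging them loses nothing at the
# level of output marginals.
T = np.array([[0.2, 0.3, 0.5],    # row-stochastic transition matrix
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
E = np.array([[0.7, 0.3],         # per-state emission probabilities
              [0.7, 0.3],
              [0.1, 0.9]])
pi = np.array([0.25, 0.25, 0.5])  # initial condition

# Partition {0,1} | {2}. V lifts block distributions to the full state
# space; W is a stochastic projection averaging states within a block.
V = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
W = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

T_red = W @ T @ V    # 2x2 coarse-grained transition matrix
E_red = W @ E        # coarse-grained emissions
pi_red = pi @ V      # coarse-grained initial condition

# The reduced model reproduces the single-time output distribution.
for t in range(6):
    full = pi @ np.linalg.matrix_power(T, t) @ E
    red = pi_red @ np.linalg.matrix_power(T_red, t) @ E_red
    assert np.allclose(full, red)
print("single-time output marginals match for t = 0..5")
```

Here the invariance T V = V T_red holds by construction; the paper's algorithms find such projections (and multi-time-preserving ones) algebraically rather than assuming a known partition.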
Related papers
- Laplace Transform Based Low-Complexity Learning of Continuous Markov Semigroups [22.951644463554352]
This paper presents a data-driven approach for learning Markov processes through the spectral decomposition of the infinitesimal generator (IG) of the Markov semigroup.
Existing techniques, including physics-informed kernel regression, are computationally expensive and limited in scope.
We propose a novel method that leverages the IG's resolvent, characterized by the Laplace transform of transfer operators.
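As a hedged illustration of the resolvent idea: for a finite-state semigroup $P_t = e^{tA}$, the resolvent is the Laplace transform $R(\lambda) = \int_0^\infty e^{-\lambda t} P_t\,dt = (\lambda I - A)^{-1}$, from which the generator follows. The sketch below assumes exact transfer operators on a time grid; in the paper's data-driven setting they would be estimated from trajectories.

```python
import numpy as np
from scipy.linalg import expm

# Toy generator of a 3-state continuous-time Markov chain
# (nonnegative off-diagonal rates, zero row sums).
A = np.array([[-1.0,  0.6,  0.4],
              [ 0.5, -0.8,  0.3],
              [ 0.2,  0.7, -0.9]])

lam = 2.0
ts = np.linspace(0.0, 15.0, 3001)
dt = ts[1] - ts[0]

# Laplace transform of the semigroup, R(lam) = \int e^{-lam t} P_t dt,
# approximated with the trapezoidal rule on the time grid.
Pts = np.array([np.exp(-lam * t) * expm(t * A) for t in ts])
R = (Pts[0] + Pts[-1]) * 0.5 * dt + Pts[1:-1].sum(axis=0) * dt

# Recover the generator from the resolvent: A = lam*I - R^{-1}.
A_hat = lam * np.eye(3) - np.linalg.inv(R)
print("max abs error:", np.abs(A_hat - A).max())  # small discretization error
```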
arXiv Detail & Related papers (2024-10-18T14:02:06Z)
- Representation and De-interleaving of Mixtures of Hidden Markov Processes [3.7348616912887445]
De-interleaving of mixtures of Hidden Markov Processes (HMPs) generally depends on the chosen representation model.
This paper proposes a novel representation model and corresponding de-interleaving methods for the mixtures of HMPs.
arXiv Detail & Related papers (2024-06-01T12:24:23Z)
- Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games [66.2085181793014]
We show that a model-free stage-based Q-learning algorithm can enjoy the same optimality in the $H$ dependence as model-based algorithms.
Our algorithm features a key novel design of updating the reference value functions as the pair of optimistic and pessimistic value functions.
arXiv Detail & Related papers (2023-08-17T08:34:58Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
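A hedged sketch of the Laplace step on a toy linear-Gaussian decoder, where the posterior is Gaussian and the Laplace approximation is exact (convenient for checking the mechanics); VLAEs apply the same idea to neural decoders, with the amortized network providing the starting point for the mode search. All sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": x = W z + b + eps, with z ~ N(0, I), eps ~ N(0, sigma2 I).
d_x, d_z = 5, 2
W = rng.normal(size=(d_x, d_z))
b = rng.normal(size=d_x)
sigma2 = 0.1
x = rng.normal(size=d_x)  # an observation to infer z for

# Laplace approximation of p(z | x): find the mode of the log joint by
# Newton's method, then use the Hessian as the posterior precision.
# log p(x, z) = -||x - W z - b||^2 / (2 sigma2) - ||z||^2 / 2 + const
H = W.T @ W / sigma2 + np.eye(d_z)   # Hessian of the negative log joint
z = np.zeros(d_z)                    # init (the amortized encoder in a VLAE)
for _ in range(5):                   # converges in one step here (quadratic)
    g = -W.T @ (x - W @ z - b) / sigma2 + z
    z -= np.linalg.solve(H, g)

cov = np.linalg.inv(H)               # Laplace posterior covariance
print("posterior mode:", z.round(3))
print("posterior cov :\n", cov.round(3))
```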
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Coefficient-based Regularized Distribution Regression [4.21768682940933]
We consider coefficient-based regularized distribution regression, which aims to regress from probability measures to real-valued responses over a reproducing kernel Hilbert space (RKHS).
Asymptotic behaviors of the algorithm in different regularity ranges of the regression function are comprehensively studied.
We get the optimal rates under some mild conditions, which matches the one-stage sampled minimax optimal rate.
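For a concrete feel, here is a minimal distribution-regression sketch under two-stage sampling: each input is a bag of samples from an unseen distribution, embedded by an empirical kernel mean, and a kernel ridge regressor maps embeddings to responses. Note this uses standard RKHS-norm regularization rather than the paper's coefficient-based scheme; all bag sizes, bandwidths, and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_embedding_kernel(a, b, gamma=1.0):
    # Inner product of empirical mean embeddings of two sample bags:
    # the average of the Gaussian kernel over all cross pairs.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2).mean()

# Two-stage sampling: we never see the distributions N(mu_i, 1), only a
# bag of 50 draws from each. The response is a function of mu_i.
mus = rng.uniform(-2, 2, size=40)
bags = [rng.normal(mu, 1.0, size=(50, 1)) for mu in mus]
y = mus ** 2

# Kernel ridge regression on the bag-level Gram matrix.
K = np.array([[mean_embedding_kernel(a, b) for b in bags] for a in bags])
alpha = np.linalg.solve(K + 0.01 * np.eye(len(bags)), y)

# Predict for a fresh bag from N(1.5, 1); target is 1.5**2 = 2.25.
test_bag = rng.normal(1.5, 1.0, size=(50, 1))
pred = np.array([mean_embedding_kernel(test_bag, b) for b in bags]) @ alpha
print(f"predicted {pred:.2f}, target 2.25")
```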
arXiv Detail & Related papers (2022-08-26T03:46:14Z)
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown [54.40592050737724]
We consider the general problem of learning an HMM from data with unknown missing observation locations.
We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain.
We show that under proper specifications one can reconstruct the process dynamics as well as if the positions of the missing observations were known.
arXiv Detail & Related papers (2022-03-12T22:40:43Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that the resulting approach, GMR, achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
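A minimal pseudo-rehearsal sketch with scikit-learn's GaussianMixture, illustrating the replay mechanism only (in the paper the GMM also provides the classifier): samples drawn from the model fitted on an earlier task stand in for stored data when refitting on a new task. Cluster locations, component counts, and sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Task 1: two clusters. Fit a GMM, then discard the raw data.
X1 = np.vstack([rng.normal(0, 0.5, (200, 2)),
                rng.normal(3, 0.5, (200, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X1)

# Task 2: a new cluster arrives. Generate pseudo-samples from the old
# model (pseudo-rehearsal) and refit on replayed + new data jointly.
X2 = rng.normal((-3.0, 3.0), 0.5, (200, 2))
X_replay, _ = gmm.sample(400)
gmm = GaussianMixture(n_components=3, random_state=0).fit(
    np.vstack([X_replay, X2]))

print("component means after the continual update:\n", gmm.means_.round(2))
```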
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- DenseHMM: Learning Hidden Markov Models by Learning Dense Representations [0.0]
We propose a modification of Hidden Markov Models (HMMs) that allows learning dense representations of both the hidden states and the observables.
Compared to the standard HMM, transition probabilities are not atomic but composed of these representations via kernelization.
The properties of the DenseHMM like learned co-occurrences and log-likelihoods are studied empirically on synthetic and biomedical datasets.
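A minimal sketch of such a parameterization: transition and emission matrices arise as row-wise softmaxes of dot products between dense embeddings, so they remain stochastic while being differentiable in the representations and hence trainable by gradient methods. The softmax scalar-product kernel is one concrete choice assumed here (the summary only says "via kernelization"), and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_obs, d = 4, 3, 2   # d = embedding dimension

u = rng.normal(size=(n_states, d))   # "from"-state representations
v = rng.normal(size=(n_states, d))   # "to"-state representations
w = rng.normal(size=(n_obs, d))      # observable representations

def softmax_rows(scores):
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)

# Non-atomic transition/emission probabilities composed from embeddings.
A = softmax_rows(u @ v.T)   # n_states x n_states transition matrix
B = softmax_rows(u @ w.T)   # n_states x n_obs emission matrix

print("row sums:", A.sum(axis=1), B.sum(axis=1))  # all ones by construction
```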
arXiv Detail & Related papers (2020-12-17T17:48:27Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Non-Convex Exact Community Recovery in Stochastic Block Model [31.221745716673546]
Community detection in graphs generated according to symmetric stochastic block models (SBMs) has received much attention lately.
We show that in the logarithmic sparsity regime of the problem, with high probability the proposed two-stage method can exactly recover the two communities down to the information-theoretic limit in $\mathcal{O}(n \log^2 n / \log\log n)$ time.
We also conduct numerical experiments on both synthetic and real data sets to demonstrate the efficacy of our proposed method and complement our theoretical development.
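Below is a hedged illustration of a generic two-stage pipeline (spectral initialization, then local refinement) on a planted two-community SBM; the paper's method and guarantees are sharper, and all constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
truth = np.repeat([1.0, -1.0], n // 2)        # two planted communities
p = 12 * np.log(n) / n                        # within-community edge prob
q = 2 * np.log(n) / n                         # between-community edge prob

# Sample a symmetric SBM adjacency matrix (logarithmic sparsity regime).
probs = np.where(np.equal.outer(truth, truth), p, q)
upper = np.triu((rng.random((n, n)) < probs).astype(float), 1)
A = upper + upper.T

# Stage 1: spectral initialization from the second-largest eigenvector.
vals, vecs = np.linalg.eigh(A)
labels = np.where(vecs[:, -2] >= 0, 1.0, -1.0)

# Stage 2: refinement by repeated local majority votes over neighbors.
for _ in range(10):
    labels = np.where(A @ labels >= 0, 1.0, -1.0)

acc = max((labels == truth).mean(), (labels == -truth).mean())
print(f"fraction correctly recovered: {acc:.3f}")
```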
arXiv Detail & Related papers (2020-06-29T07:03:27Z)
- Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper we propose an algorithm inspired by the Median-of-Means (MOM) estimator.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
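A minimal sketch of the Median-of-Means estimator itself (the paper builds this idea into compressed-sensing recovery with generative priors): averaging within blocks tames heavy tails, and taking the median across blocks makes a minority of corrupted blocks harmless.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_of_means(x, n_blocks):
    # Shuffle, split into blocks, average each block, take the median:
    # a few grossly corrupted blocks cannot move the median.
    blocks = np.array_split(rng.permutation(x), n_blocks)
    return np.median([block.mean() for block in blocks])

# Heavy-tailed sample (Student-t, df=1.5: finite mean 0, infinite
# variance) contaminated by a handful of gross outliers.
x = np.concatenate([rng.standard_t(1.5, size=2000), np.full(5, 1e6)])

print("empirical mean :", round(x.mean(), 2))                # wrecked by outliers
print("median of means:", round(median_of_means(x, 30), 2))  # near the true 0
```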
arXiv Detail & Related papers (2020-06-16T19:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.