Context-specific kernel-based hidden Markov model for time series analysis
- URL: http://arxiv.org/abs/2301.09870v2
- Date: Mon, 15 May 2023 13:00:31 GMT
- Title: Context-specific kernel-based hidden Markov model for time series analysis
- Authors: Carlos Puerto-Santana, Concha Bielza, Pedro Larrañaga, Gustav Eje Henter
- Abstract summary: We introduce a new hidden Markov model based on kernel density estimation.
It is capable of capturing kernel dependencies using context-specific Bayesian networks.
The benefits in likelihood and classification accuracy from the proposed model are quantified and analyzed.
- Score: 9.007829035130886
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Traditional hidden Markov models have been a useful tool to understand and
model stochastic dynamic data; in the case of non-Gaussian data, models such as
mixture of Gaussian hidden Markov models can be used. However, these suffer
from the computational cost of estimating precision matrices and contain many
redundant parameters. As a consequence, such models often perform better when it is
assumed that all variables are independent, a hypothesis that may be
unrealistic. Hidden Markov models based on kernel density estimation are also
capable of modeling non-Gaussian data, but they assume independence between
variables. In this article, we introduce a new hidden Markov model based on
kernel density estimation, which is capable of capturing kernel dependencies
using context-specific Bayesian networks. The proposed model is described,
together with a learning algorithm based on the expectation-maximization
algorithm. Additionally, the model is compared to related HMMs on synthetic and
real data. From the results, the benefits in likelihood and classification
accuracy from the proposed model are quantified and analyzed.
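
To make the learning pipeline concrete, here is a minimal NumPy sketch of EM for an HMM whose per-state emissions are weighted kernel density estimates. It is an illustration under stated assumptions, not the authors' implementation: it uses fixed Scott-style bandwidths and product Gaussian kernels, i.e., it retains exactly the independence assumption that the paper replaces with context-specific Bayesian networks, and all function names are invented for this sketch.

```python
import numpy as np

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(a - m).sum(axis=axis))

def weighted_kde_loglik(X, w, bw):
    """Log-density of each row of X under a KDE whose product-Gaussian
    kernels sit at the training points and carry mixture weights w."""
    diff = (X[:, None, :] - X[None, :, :]) / bw                   # (T, T, d)
    log_k = (-0.5 * diff**2 - 0.5 * np.log(2 * np.pi) - np.log(bw)).sum(-1)
    return logsumexp(log_k + np.log(w + 1e-300)[None, :], axis=1)

def forward_backward(log_pi, log_A, log_b):
    """Standard log-space smoothing; returns state posteriors, expected
    transition counts, and the sequence log-likelihood."""
    T, K = log_b.shape
    la, lb = np.zeros((T, K)), np.zeros((T, K))
    la[0] = log_pi + log_b[0]
    for t in range(1, T):
        la[t] = log_b[t] + logsumexp(la[t-1][:, None] + log_A, axis=0)
    for t in range(T - 2, -1, -1):
        lb[t] = logsumexp(log_A + (log_b[t+1] + lb[t+1])[None, :], axis=1)
    ll = logsumexp(la[-1], axis=0)
    gamma = np.exp(la + lb - ll)
    xi = np.zeros((K, K))
    for t in range(T - 1):
        xi += np.exp(la[t][:, None] + log_A + (log_b[t+1] + lb[t+1])[None, :] - ll)
    return gamma, xi, ll

def fit_kde_hmm(X, K, n_iter=25, seed=0):
    rng = np.random.default_rng(seed)
    T, d = X.shape
    bw = X.std(axis=0) * T ** (-1.0 / (d + 4))        # Scott-style bandwidths
    log_pi = np.full(K, -np.log(K))
    log_A = np.full((K, K), -np.log(K))
    W = rng.dirichlet(np.ones(T), size=K)             # kernel weights per state
    for _ in range(n_iter):
        log_b = np.stack([weighted_kde_loglik(X, W[k], bw) for k in range(K)], 1)
        gamma, xi, ll = forward_backward(log_pi, log_A, log_b)
        log_pi = np.log(gamma[0] / gamma[0].sum() + 1e-300)
        log_A = np.log(xi / xi.sum(axis=1, keepdims=True) + 1e-300)
        W = (gamma / gamma.sum(axis=0)).T             # responsibilities -> weights
    return np.exp(log_pi), np.exp(log_A), W, ll
```

Calling `fit_kde_hmm(np.random.randn(200, 3), K=2)` returns the estimated initial distribution, transition matrix, per-state kernel weights, and the final log-likelihood.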
Related papers
- Differentiable Calibration of Inexact Stochastic Simulation Models via Kernel Score Minimization [11.955062839855334]
We propose to learn differentiable input parameters of simulation models using output-level data via kernel score minimization with gradient descent.
We quantify the uncertainties of the learned input parameters using a new normality result that accounts for model inexactness.
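
As a toy illustration of this idea (an assumed setup, not the paper's code or model), the sketch below fits a differentiable location-scale simulator by gradient descent on the energy score, one member of the kernel-score family; `simulate` and both parameters are hypothetical.

```python
import torch

def energy_score(y_sim, y_obs):
    """Monte Carlo energy score E||Y - y|| - 0.5 E||Y - Y'|| (lower = better fit);
    the energy score is one instance of a kernel score."""
    return torch.cdist(y_sim, y_obs).mean() - 0.5 * torch.pdist(y_sim).mean()

def simulate(theta, n):
    """Hypothetical differentiable location-scale 'simulator' (illustrative only)."""
    eps = torch.randn(n, 2)
    return theta[0] + theta[1].exp() * eps          # exp keeps the scale positive

torch.manual_seed(0)
y_obs = 2.0 + 0.5 * torch.randn(500, 2)             # stand-in for real output data
theta = torch.zeros(2, requires_grad=True)          # (location, log-scale)
opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(400):
    opt.zero_grad()
    energy_score(simulate(theta, 256), y_obs).backward()
    opt.step()
print(theta[0].item(), theta[1].exp().item())       # drifts toward roughly (2.0, 0.5)
```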
arXiv Detail & Related papers (2024-11-08T04:13:52Z)
- Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
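
The general Monte Carlo pattern, with invented numbers and without the paper's specific fusion rule, can be sketched as: draw a model index, then draw from that model's Gaussian predictive posterior, and summarize the pooled samples.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented posterior means/stds of three GP models at one test input,
# plus model weights (e.g., estimated on held-out data).
means   = np.array([1.8, 2.1, 2.6])
stds    = np.array([0.3, 0.5, 0.2])
weights = np.array([0.5, 0.3, 0.2])

idx = rng.choice(len(means), size=10_000, p=weights)  # sample a model...
fused = rng.normal(means[idx], stds[idx])             # ...then its posterior
print(fused.mean(), fused.std())   # moments of the fused predictive mixture
```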
arXiv Detail & Related papers (2024-03-03T04:21:21Z)
- Finite Mixtures of Multivariate Poisson-Log Normal Factor Analyzers for Clustering Count Data [0.8499685241219366]
A class of eight parsimonious mixture models based on the mixture of factor analyzers model is introduced.
The proposed models are explored in the context of clustering discrete data arising from RNA sequencing studies.
arXiv Detail & Related papers (2023-11-13T21:23:15Z)
- Stable Training of Probabilistic Models Using the Leave-One-Out Maximum Log-Likelihood Objective [0.7373617024876725]
Kernel density estimation (KDE) based models are popular choices for density estimation, but they fail to adapt to data regions with varying densities.
An adaptive KDE model is employed to circumvent this, where each kernel in the model has an individual bandwidth.
A modified expectation-maximization algorithm is employed to reliably accelerate optimization.
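
A bare-bones version of this objective (an assumed reconstruction, not the paper's code, and using plain gradient ascent instead of their modified EM) looks as follows for a 1-D adaptive Gaussian KDE:

```python
import math
import torch

def loo_log_likelihood(x, log_h):
    """Leave-one-out log-likelihood of a 1-D Gaussian KDE where every kernel i
    has its own bandwidth h_i (adaptive KDE)."""
    h = log_h.exp()                                    # positivity via log-params
    n = x.shape[0]
    d = x[:, None] - x[None, :]                        # pairwise differences
    log_k = -0.5 * (d / h) ** 2 - torch.log(h) - 0.5 * math.log(2 * math.pi)
    mask = torch.eye(n, dtype=torch.bool)
    log_k = log_k.masked_fill(mask, float("-inf"))     # leave each point out
    return (torch.logsumexp(log_k, dim=1) - math.log(n - 1)).sum()

torch.manual_seed(0)
x = torch.cat([0.2 * torch.randn(80), 5 + 2.0 * torch.randn(40)])  # uneven density
log_h = torch.zeros(x.shape[0], requires_grad=True)    # one bandwidth per kernel
opt = torch.optim.Adam([log_h], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    (-loo_log_likelihood(x, log_h)).backward()         # gradient ascent on LOO
    opt.step()
# kernels in the dense cluster end up narrower than those in the sparse region
```

The paper's modified EM exists precisely because naive optimization of per-kernel bandwidths can be unstable; this sketch ignores that refinement.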
arXiv Detail & Related papers (2023-10-05T14:08:42Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to logistic regression, may be learned from aggregated data alone by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
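
One plausible realization of this idea, not necessarily the paper's exact estimator: take the maximum-entropy distribution matching each group's first two feature moments (a Gaussian) as the unobserved feature distribution, draw pseudo-individuals from it, attach each group's positive rate as a soft label, and fit a logistic model. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Only aggregates are observed: per-group feature mean/covariance and positive
# rate. Given first and second moments, the maximum-entropy density is Gaussian,
# so pseudo-individuals are drawn from N(mean, cov).
groups = [
    {"mean": np.array([0.0, 1.0]),  "cov": np.eye(2),       "pos_rate": 0.30, "n": 4000},
    {"mean": np.array([2.0, -1.0]), "cov": 0.5 * np.eye(2), "pos_rate": 0.75, "n": 4000},
]
X = np.vstack([rng.multivariate_normal(g["mean"], g["cov"], g["n"]) for g in groups])
y = np.concatenate([np.full(g["n"], g["pos_rate"]) for g in groups])  # soft labels

Xb = np.hstack([X, np.ones((len(X), 1))])   # add intercept column
w = np.zeros(Xb.shape[1])
for _ in range(2000):                       # logistic regression with soft targets
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)
print(w)                                    # weights of the aggregate-trained model
```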
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- Learning Summary Statistics for Bayesian Inference with Autoencoders [58.720142291102135]
We use the inner (bottleneck) dimension of deep neural network autoencoders as summary statistics.
To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information that has been used to generate the training data.
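
A toy version of this setup might look as follows (illustrative architecture and sizes, not the paper's): the decoder is given the generative noise explicitly, so perfect reconstruction only requires the two-unit bottleneck to encode the parameters, making the encoder output usable as summary statistics.

```python
import torch
import torch.nn as nn

def make_batch(batch, n=100):
    """Toy datasets: n i.i.d. draws from N(mu, sigma^2) with random (mu, sigma)."""
    mu = torch.randn(batch, 1)
    sigma = torch.rand(batch, 1) + 0.5
    eps = torch.randn(batch, n)
    return mu + sigma * eps, eps

enc = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2))
dec = nn.Sequential(nn.Linear(2 + 100, 64), nn.ReLU(), nn.Linear(64, 100))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
for _ in range(2000):
    x, eps = make_batch(128)
    opt.zero_grad()
    # the decoder also sees the generative noise, so the 2-D bottleneck only
    # needs to carry parameter information (here, something like mu and sigma)
    recon = dec(torch.cat([enc(x), eps], dim=1))
    ((recon - x) ** 2).mean().backward()
    opt.step()

stats = enc(make_batch(5)[0])   # two learned summary statistics per dataset
```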
arXiv Detail & Related papers (2022-01-28T12:00:31Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Variational Mixture of Normalizing Flows [0.0]
Deep generative models, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and their variants, have seen wide adoption for the task of modelling complex data distributions.
Normalizing flows overcome a key limitation of those models, the lack of tractable exact likelihoods, by leveraging the change-of-variables formula for probability density functions.
The present work goes further by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
arXiv Detail & Related papers (2020-09-01T17:20:08Z)
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High dimensionality and non-linearity are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.