A Probabilistic Semi-Supervised Approach with Triplet Markov Chains
- URL: http://arxiv.org/abs/2309.03707v1
- Date: Thu, 7 Sep 2023 13:34:20 GMT
- Title: A Probabilistic Semi-Supervised Approach with Triplet Markov Chains
- Authors: Katherine Morales, Yohan Petetin
- Abstract summary: Triplet Markov chains are general generative models for sequential data.
We propose a general framework based on variational Bayesian inference to train parameterized triplet Markov chain models.
- Score: 1.000779758350696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Triplet Markov chains are general generative models for sequential data which
take into account three kinds of random variables: (noisy) observations, their
associated discrete labels, and latent variables which aim at strengthening the
distribution of the observations and their associated labels. In practice,
however, not all of the labels associated with the observations are available
for estimating the parameters of such models. In this paper, we propose a
general framework based on variational Bayesian inference to train
parameterized triplet Markov chain models in a semi-supervised context. The
generality of our approach enables us to derive semi-supervised algorithms for
a variety of generative models for sequential Bayesian classification.
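The semi-supervised objective is easy to make concrete for a fully discrete TMC. The sketch below uses exact forward-recursion marginalization in place of the paper's variational approximation, and all dimensions and parameter values are illustrative:

```python
import numpy as np

# A minimal semi-supervised likelihood for a fully discrete triplet Markov
# chain (TMC): labels x_t, latent variables h_t and observations y_t all take
# finitely many values. The pair v_t = (x_t, h_t) is itself a Markov chain,
# so a standard forward recursion over v_t gives exact marginals.

K_X, K_H, K_Y = 2, 3, 4                     # label / latent / observation alphabets
K_V = K_X * K_H                             # joint state v = (x, h)

rng = np.random.default_rng(0)
pi = rng.dirichlet(np.ones(K_V))            # p(v_1)
A = rng.dirichlet(np.ones(K_V), size=K_V)   # rows: p(v_t | v_{t-1})
B = rng.dirichlet(np.ones(K_Y), size=K_V)   # rows: p(y_t | v_t)
x_of = np.arange(K_V) // K_H                # label component of each joint state

def log_marginal(y, x_obs=None):
    """log p(y) if x_obs is None, else log p(y, x_obs), with h marginalized out."""
    logp = 0.0
    alpha = pi * B[:, y[0]]
    if x_obs is not None:                   # zero out states inconsistent with x_1
        alpha = alpha * (x_of == x_obs[0])
    for t in range(1, len(y)):
        c = alpha.sum(); logp += np.log(c); alpha = alpha / c
        alpha = (alpha @ A) * B[:, y[t]]
        if x_obs is not None:
            alpha = alpha * (x_of == x_obs[t])
    return logp + np.log(alpha.sum())

# Labeled sequences contribute log p(y, x); unlabeled ones contribute log p(y).
y_lab, x_lab = np.array([0, 2, 1, 3]), np.array([0, 1, 1, 0])
y_unl = np.array([1, 1, 0, 2])
objective = log_marginal(y_lab, x_lab) + log_marginal(y_unl)
print(objective)
```

For continuous or high-dimensional latent variables the exact recursion becomes intractable, which is where the paper's variational Bayesian framework comes in.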
Related papers
- Human-in-the-loop: Towards Label Embeddings for Measuring Classification Difficulty [14.452983136429967]
In supervised learning, uncertainty can arise as early as the first stage of the training process: the annotation phase.
The main idea of this work is to drop the assumption of a ground-truth label and instead embed the annotations into a multidimensional space.
The methods developed in this paper readily extend to various situations where multiple annotators independently label instances.
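One minimal way to realize this idea, purely as an illustration (the paper's actual embedding construction may differ), is to map each instance to its empirical annotation distribution and read difficulty off its entropy:

```python
import numpy as np

# Map one instance's annotations to a point in the label simplex; the spread
# of that point is a crude proxy for how hard the instance is to classify.
# This construction is illustrative, not the paper's exact embedding.

def annotation_embedding(labels, n_classes):
    """labels: class indices assigned by the individual annotators."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts / counts.sum()

def difficulty(p):
    """Shannon entropy of the annotation distribution (0 = full agreement)."""
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

votes = [0, 0, 1, 0, 2]                     # five annotators, three classes
p = annotation_embedding(votes, n_classes=3)
print(p, difficulty(p))                     # [0.6 0.2 0.2], entropy approx. 0.95
```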
arXiv Detail & Related papers (2023-11-15T11:23:15Z)
- A modern approach to transition analysis and process mining with Markov models: A tutorial with R [0.9699640804685629]
The chapter provides an introduction to Markov models and differentiates between their most common variations.
In addition to a thorough explanation and contextualization within the existing literature, the chapter provides a step-by-step tutorial on how to implement each type of Markovian model.
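The tutorial itself works in R; as a language-agnostic illustration, the first step of any such analysis, estimating a first-order transition matrix from observed state sequences, looks like this:

```python
import numpy as np

# Estimate a first-order Markov transition matrix from observed state
# sequences by normalized transition counts. A minimal sketch of the first
# step most Markov-model tutorials walk through.

def fit_transition_matrix(sequences, n_states):
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    counts += 1e-8                          # avoid division by zero for unseen rows
    return counts / counts.sum(axis=1, keepdims=True)

P = fit_transition_matrix([[0, 1, 1, 2], [2, 1, 0, 0]], n_states=3)
print(P)                                    # rows are p(next state | current state)
```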
arXiv Detail & Related papers (2023-09-02T07:24:32Z)
- Covariate shift in nonparametric regression with Markovian design [0.0]
We show that convergence rates for a smoothness risk of a Nadaraya-Watson kernel estimator are determined by the similarity between the invariant distributions associated with the source and target Markov chains.
We extend the notion of a distribution exponent from Kpotufe and Martinet to kernel transfer exponents of uniformly ergodic Markov chains.
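For reference, the Nadaraya-Watson estimator named above is the kernel-weighted average sketched below; the Gaussian kernel and bandwidth are arbitrary illustrative choices, and under covariate shift the design points X would come from the source chain:

```python
import numpy as np

# Nadaraya-Watson kernel regression:
#   m_hat(x) = sum_i K((x - X_i)/h) * Y_i / sum_i K((x - X_i)/h)

def nadaraya_watson(x_query, X, Y, bandwidth=0.3):
    w = np.exp(-0.5 * ((x_query - X) / bandwidth) ** 2)  # Gaussian kernel weights
    return (w * Y).sum() / w.sum()

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 200)                  # source design points
Y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(200)
print(nadaraya_watson(0.25, X, Y))          # approx sin(pi/2) = 1
```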
arXiv Detail & Related papers (2023-07-17T14:24:27Z)
- Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
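The base sampler being differentiated is ordinary Metropolis-Hastings; a minimal random-walk version is sketched below (the paper's contribution, automatic differentiation through the accept/reject steps, is not shown):

```python
import numpy as np

# Random-walk Metropolis-Hastings for a 1-D target known up to a constant.

def metropolis_hastings(log_density, n_steps, step=0.5, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_density(prop) - log_density(x):
            x = prop                        # accept; otherwise keep current state
        samples.append(x)
    return np.array(samples)

log_p = lambda x: -0.5 * x ** 2             # standard normal, unnormalized
draws = metropolis_hastings(log_p, 5000)
print(draws.mean(), draws.var())            # approx 0 and 1
```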
arXiv Detail & Related papers (2023-06-13T17:56:02Z)
- Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision [75.1860418333995]
Programmatic Weak Supervision (PWS) has emerged as a widespread paradigm to synthesize training labels efficiently.
The core component of PWS is the label model, which infers true labels by aggregating the outputs of multiple noisy supervision sources, abstracted as labeling functions (LFs).
Existing statistical label models typically rely only on the outputs of the LFs, ignoring the instance features when modeling the underlying generative process.
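The feature-free baseline such a label model improves on can be as simple as majority vote over the LF outputs; a sketch, with -1 denoting abstention:

```python
import numpy as np

# Majority-vote label aggregation over labeling-function (LF) outputs. This
# purely output-based baseline ignores instance features, which is exactly
# the limitation the paper addresses.

def majority_vote(lf_outputs, n_classes):
    """lf_outputs: (n_instances, n_lfs) array of class ids, -1 = abstain."""
    labels = []
    for row in lf_outputs:
        votes = row[row >= 0]
        if votes.size == 0:
            labels.append(-1)               # every LF abstained
        else:
            labels.append(np.bincount(votes, minlength=n_classes).argmax())
    return np.array(labels)

L = np.array([[0, 0, 1, -1],
              [1, -1, 1, 1],
              [-1, -1, -1, -1]])
print(majority_vote(L, n_classes=2))        # [0, 1, -1]
```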
arXiv Detail & Related papers (2022-10-06T07:28:53Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as their parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
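On the circle, the simplest member of this family is the wrapped normal, obtained by pushing a Gaussian through the mod operation; on general homogeneous Riemannian manifolds the analogous construction wraps through the exponential map. A one-line sketch:

```python
import numpy as np

# Wrapped normal on the circle: sample from a Gaussian, then wrap mod 2*pi.

def sample_wrapped_normal(mu, sigma, n, seed=0):
    rng = np.random.default_rng(seed)
    return (mu + sigma * rng.standard_normal(n)) % (2 * np.pi)

theta = sample_wrapped_normal(mu=0.2, sigma=0.5, n=1000)
print(theta.min(), theta.max())             # all samples live on [0, 2*pi)
```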
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that this approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high-dimensional discrete data.
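A simplified sketch in the spirit of this gradient-informed approach, for binary variables with a differentiable unnormalized log-probability (the constants and the quadratic objective below are illustrative, not the paper's exact algorithm):

```python
import numpy as np

# For binary x in {0,1}^D, the gradient of f scores how promising each
# single-bit flip is; a bit index is proposed from a softmax over those
# scores, and a Metropolis-Hastings correction keeps the chain exact.

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def gradient_informed_step(x, f, grad_f, rng):
    d = -(2 * x - 1) * grad_f(x)            # first-order estimate of f change per flip
    q = softmax(d / 2.0)
    i = rng.choice(len(x), p=q)
    x_new = x.copy(); x_new[i] = 1 - x_new[i]
    q_new = softmax(-(2 * x_new - 1) * grad_f(x_new) / 2.0)
    log_acc = f(x_new) - f(x) + np.log(q_new[i]) - np.log(q[i])
    return x_new if np.log(rng.uniform()) < log_acc else x

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)); b = rng.standard_normal(8)
f = lambda x: x @ W @ x + b @ x             # unnormalized log-probability
grad_f = lambda x: (W + W.T) @ x + b        # its gradient at the relaxed point
x = rng.integers(0, 2, 8).astype(float)
for _ in range(1000):
    x = gradient_informed_step(x, f, grad_f, rng)
print(x)
```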
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
- Estimation of Switched Markov Polynomial NARX models [75.91002178647165]
We identify a class of models for hybrid dynamical systems characterized by nonlinear autoregressive exogenous (NARX) components.
The proposed approach is demonstrated on an SMNARX problem composed of three nonlinear sub-models with specific regressors.
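The building block of such a model is a single polynomial NARX sub-model fit by least squares; a hedged sketch on a toy system (the paper additionally identifies the Markov switching between sub-models, which is not shown):

```python
import numpy as np

# One polynomial NARX sub-model: the next output is a degree-2 polynomial in
# lagged outputs and inputs, fit by least squares on a simulated toy system.

def narx_features(y, u, t):
    """Constant, lagged regressors, and their degree-2 products at time t."""
    r = np.array([y[t-1], y[t-2], u[t-1]])
    quad = np.outer(r, r)[np.triu_indices(3)]
    return np.concatenate([[1.0], r, quad])

rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for t in range(2, 200):                     # y depends nonlinearly on its own past
    y[t] = 0.5 * y[t-1] - 0.1 * y[t-2] ** 2 + 0.8 * u[t-1] \
           + 0.01 * rng.standard_normal()

Phi = np.array([narx_features(y, u, t) for t in range(2, 200)])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 2))                   # recovers the 0.5, 0.8 and -0.1 terms
```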
arXiv Detail & Related papers (2020-09-29T15:00:47Z)
- Semi-supervised Neural Chord Estimation Based on a Variational Autoencoder with Latent Chord Labels and Features [18.498244371257304]
This paper describes a statistically principled semi-supervised method of automatic chord estimation.
It can make effective use of music signals regardless of the availability of chord annotations.
arXiv Detail & Related papers (2020-05-14T15:58:36Z)
- Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
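As one concrete instance, under assumptions chosen here purely for illustration: if per-instance counts are Poisson with log-linear rates and only each bag's sum is observed, the aggregate likelihood stays closed-form and differentiable, so maximum likelihood is direct:

```python
import numpy as np

# Learning from aggregate observations: per-instance counts are Poisson with
# rate exp(w . x_i), but only each bag's *sum* of counts is observed. A sum
# of independent Poissons is Poisson with the summed rate, so the bag-level
# log-likelihood is closed-form and differentiable in w.

rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5])
bags = [rng.standard_normal((rng.integers(3, 8), 2)) for _ in range(50)]
sums = [rng.poisson(np.exp(X @ w_true)).sum() for X in bags]

w = np.zeros(2)
for _ in range(1000):                       # gradient ascent on the bag log-likelihoods
    grad = np.zeros(2)
    for X, s in zip(bags, sums):
        rates = np.exp(X @ w)
        # d/dw of [s * log(R) - R] with R = rates.sum()
        grad += (s / rates.sum() - 1.0) * (rates @ X)
    w += 0.01 * grad / len(bags)
print(w)                                    # close to w_true = [1.0, -0.5]
```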
arXiv Detail & Related papers (2020-04-14T06:18:50Z)