Model-based clustering using non-parametric Hidden Markov Models
- URL: http://arxiv.org/abs/2309.12238v2
- Date: Mon, 25 Sep 2023 13:12:43 GMT
- Title: Model-based clustering using non-parametric Hidden Markov Models
- Authors: Elisabeth Gassiat, Ibrahim Kaddouri, Zacharie Naulet
- Abstract summary: We study the Bayes risk of clustering when using HMMs and propose associated clustering procedures.
Results are shown to remain valid in the online setting where observations are clustered sequentially.
- Score: 5.314335654467143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to their dependency structure, non-parametric Hidden Markov Models
(HMMs) are able to handle model-based clustering without specifying group
distributions. The aim of this work is to study the Bayes risk of clustering
when using HMMs and to propose associated clustering procedures. We first give
a result linking the Bayes risk of classification and the Bayes risk of
clustering, which we use to identify the key quantity determining the
difficulty of the clustering task. We also give a proof of this result in the
i.i.d. framework, which might be of independent interest. Then we study the
excess risk of the plugin classifier. All these results are shown to remain
valid in the online setting where observations are clustered sequentially.
Simulations illustrate our findings.
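To fix ideas, here is a schematic of the two risks being compared, in generic notation that may differ from the paper's: the classification risk counts every misclassified hidden state, while the clustering risk only counts errors up to a relabeling of the groups.

```latex
% Schematic definitions (generic notation; may differ from the paper's).
% Hidden states X_{1:n} taking values in {1,...,K}, observations Y_{1:n},
% classifier g = (g_1,...,g_n) mapping observations to state labels.
\[
  R_n(g) = \mathbb{E}\Bigl[\frac{1}{n}\sum_{i=1}^{n}
    \mathbf{1}\{ g_i(Y_{1:n}) \neq X_i \}\Bigr]
  \quad \text{(classification risk)}
\]
\[
  \widetilde{R}_n(g) = \mathbb{E}\Bigl[\inf_{\sigma \in \mathcal{S}_K}
    \frac{1}{n}\sum_{i=1}^{n}
    \mathbf{1}\{ \sigma(g_i(Y_{1:n})) \neq X_i \}\Bigr]
  \quad \text{(clustering risk)}
\]
% The Bayes risks are the infima of these quantities over all classifiers g;
% the paper's first result links the two.
```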
Related papers
- Self-Supervised Graph Embedding Clustering [70.36328717683297]
The K-means one-step dimensionality reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework.
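For reference, the embed-then-cluster pipeline this line of work builds on can be sketched with off-the-shelf scikit-learn components; note that the paper's contribution is to unify the two steps into one framework, which this separate-step sketch does not do.

```python
# Generic "graph embedding + K-means" pipeline, for illustration only;
# the paper integrates manifold learning and K-means into a single framework,
# whereas this sketch simply runs the two steps one after the other.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# Manifold step: low-dimensional embedding from a k-NN affinity graph.
Z = SpectralEmbedding(n_components=2, n_neighbors=10,
                      random_state=0).fit_transform(X)

# Clustering step: plain K-means in the embedded space.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))
```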
arXiv Detail & Related papers (2024-09-24T08:59:51Z)
- GCC: Generative Calibration Clustering [55.44944397168619]
We propose a novel Generative Calibration Clustering (GCC) method to incorporate feature learning and augmentation into the clustering procedure.
First, we develop a discriminative feature alignment mechanism to discover the intrinsic relationship between real and generated samples.
Second, we design a self-supervised metric learning scheme to generate more reliable cluster assignments.
arXiv Detail & Related papers (2024-04-14T01:51:11Z)
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- clusterBMA: Bayesian model averaging for clustering [1.2021605201770345]
We introduce clusterBMA, a method that enables weighted model averaging across results from unsupervised clustering algorithms.
We use clustering internal validation criteria to develop an approximation of the posterior model probability, used for weighting the results from each model.
In addition to outperforming other ensemble clustering methods on simulated data, clusterBMA offers unique features including probabilistic allocation to averaged clusters.
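The averaging mechanics can be sketched generically: run several clusterers, weight each by an internal validation score, and average their label-invariant co-assignment matrices. The silhouette-based softmax weights below are a hypothetical stand-in for clusterBMA's approximate posterior model probabilities, not the paper's actual weighting.

```python
# Weighted averaging of clustering results via co-assignment matrices.
# Silhouette-based weights are a stand-in for clusterBMA's approximate
# posterior model probabilities (a simplification, not the paper's method).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
labelings = [KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
             AgglomerativeClustering(n_clusters=3).fit_predict(X)]

# Internal-validation scores -> normalized model weights.
scores = np.array([silhouette_score(X, lab) for lab in labelings])
weights = np.exp(scores) / np.exp(scores).sum()

# Co-assignment matrices are invariant to label switching; average them.
coassign = [(lab[:, None] == lab[None, :]).astype(float) for lab in labelings]
consensus = sum(w * C for w, C in zip(weights, coassign))
print(consensus[:3, :3])  # entry (i, j): weighted co-clustering probability
```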
arXiv Detail & Related papers (2022-09-09T04:55:20Z)
- K-ARMA Models for Clustering Time Series Data [4.345882429229813]
We present an approach to clustering time series data using a model-based generalization of the K-Means algorithm.
We show how the clustering algorithm can be made robust to outliers using a least-absolute-deviations criterion.
We perform experiments on real data which show that our method is competitive with other existing methods for similar time series clustering tasks.
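A minimal sketch of the K-models iteration with a least-absolute-deviations (LAD) criterion, assuming AR(1) models in place of full ARMA estimation to keep it short: alternate between fitting one coefficient per cluster by LAD and reassigning each series to the cluster whose model gives the smallest LAD residual cost.

```python
# K-models clustering of time series with an LAD criterion (AR(1) stand-in
# for ARMA; a sketch of the iteration, not the paper's estimator).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def ar1(phi, n=200):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

series = [ar1(0.8) for _ in range(10)] + [ar1(-0.5) for _ in range(10)]

def lad_cost(phi, y):
    # Sum of absolute one-step-ahead residuals under an AR(1) with coef phi.
    return np.abs(y[1:] - phi * y[:-1]).sum()

def fit_lad_ar1(cluster):
    # One coefficient per cluster, minimizing the pooled LAD cost.
    return minimize_scalar(lambda p: sum(lad_cost(p, y) for y in cluster),
                           bounds=(-0.99, 0.99), method="bounded").x

assign = rng.integers(0, 2, len(series))         # random initial clusters
for _ in range(10):                              # alternate fit / reassign
    phis = [fit_lad_ar1([y for y, a in zip(series, assign) if a == k])
            for k in range(2)]
    assign = np.array([np.argmin([lad_cost(p, y) for p in phis])
                       for y in series])
print(np.round(phis, 2), assign)
```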
arXiv Detail & Related papers (2022-06-30T18:16:11Z)
- Self-Evolutionary Clustering [1.662966122370634]
Most existing deep clustering methods are based on simple distance comparison and highly dependent on the target distribution generated by a handcrafted nonlinear mapping.
A novel modular Self-Evolutionary Clustering (Self-EvoC) framework is constructed, which boosts the clustering performance by classification in a self-supervised manner.
The framework can efficiently discriminate sample outliers and generate a better target distribution with the assistance of self-supervision.
arXiv Detail & Related papers (2022-02-21T19:38:18Z)
- Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized.
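Schematically, this family of methods solves a single convex program in which a fusion penalty pulls similar users' models together; the notation below is generic and not necessarily the paper's exact formulation.

```latex
% Convex-clustering-style personalization (schematic; notation ours).
\[
  \min_{\theta_1,\dots,\theta_m}\;
  \sum_{i=1}^{m} f_i(\theta_i)
  \;+\; \lambda \sum_{i<j} w_{ij}\, \| \theta_i - \theta_j \|
\]
% f_i: user i's convex local cost;  \theta_i: user i's personalized model;
% w_{ij} >= 0: optional pairwise weights.  As \lambda grows, models fuse,
% so the penalty path induces a clustering of the users.
```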
arXiv Detail & Related papers (2022-02-01T19:25:31Z)
- You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps.
arXiv Detail & Related papers (2021-06-03T14:59:59Z)
- Vine copula mixture models and clustering for non-Gaussian data [0.0]
We propose a novel vine copula mixture model for continuous data.
We show that the model-based clustering algorithm with vine copula mixture models outperforms the other model-based clustering techniques.
arXiv Detail & Related papers (2021-02-05T16:04:26Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
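A PSM is straightforward to compute from MCMC output: average the co-clustering indicator matrices of the sampled partitions. Each indicator matrix is positive semi-definite, hence so is their average, which is the property exploited here. A minimal numpy sketch with simulated label draws standing in for real MCMC samples:

```python
# Posterior similarity matrix: PSM[i, j] is the fraction of posterior draws
# in which items i and j share a cluster. Simulated draws stand in for MCMC
# output from an actual Bayesian clustering model.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_draws = 8, 500
draws = rng.integers(0, 3, size=(n_draws, n_items))  # sampled label vectors

psm = np.zeros((n_items, n_items))
for labels in draws:
    psm += labels[:, None] == labels[None, :]        # co-clustering indicator
psm /= n_draws

# Numerical PSD check: smallest eigenvalue nonnegative up to round-off,
# so the PSM can serve as a kernel matrix.
print(np.linalg.eigvalsh(psm).min())
```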
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
- Robust M-Estimation Based Bayesian Cluster Enumeration for Real Elliptically Symmetric Distributions [5.137336092866906]
Robustly determining the optimal number of clusters in a data set is an essential factor in a wide range of applications.
This article generalizes a robust Bayesian cluster enumeration criterion so that it can be used with any arbitrary Real Elliptically Symmetric (RES) distributed mixture model.
We derive a robust criterion for data sets with finite sample size, and also provide an approximation to reduce the computational cost at large sample sizes.
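For the scan-over-K mechanics only, here is the classical Gaussian-BIC version of cluster enumeration with scikit-learn; the paper's robust RES-based criterion replaces the Gaussian likelihood with robust M-estimation, which this sketch does not implement.

```python
# Classical BIC-based cluster enumeration: score each candidate number of
# clusters and keep the minimizer. Gaussian mixtures stand in for the
# paper's robust RES-based criterion.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 8)}
print(min(bics, key=bics.get))  # estimated number of clusters
```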
arXiv Detail & Related papers (2020-05-04T11:44:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.