clusterBMA: Bayesian model averaging for clustering
- URL: http://arxiv.org/abs/2209.04117v2
- Date: Sun, 26 Mar 2023 03:10:56 GMT
- Title: clusterBMA: Bayesian model averaging for clustering
- Authors: Owen Forbes, Edgar Santos-Fernandez, Paul Pao-Yen Wu, Hong-Bo Xie,
Paul E. Schwenn, Jim Lagopoulos, Lia Mills, Dashiell D. Sacks, Daniel F.
Hermens, Kerrie Mengersen
- Abstract summary: We introduce clusterBMA, a method that enables weighted model averaging across results from unsupervised clustering algorithms.
We use clustering internal validation criteria to develop an approximation of the posterior model probability, used for weighting the results from each model.
In addition to outperforming other ensemble clustering methods on simulated data, clusterBMA offers unique features including probabilistic allocation to averaged clusters.
- Score: 1.2021605201770345
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Various methods have been developed to combine inference across multiple sets
of results for unsupervised clustering, within the ensemble clustering
literature. The approach of reporting results from one 'best' model out of
several candidate clustering models generally ignores the uncertainty that
arises from model selection, and results in inferences that are sensitive to
the particular model and parameters chosen. Bayesian model averaging (BMA) is a
popular approach for combining results across multiple models that offers some
attractive benefits in this setting, including probabilistic interpretation of
the combined cluster structure and quantification of model-based uncertainty.
In this work we introduce clusterBMA, a method that enables weighted model
averaging across results from multiple unsupervised clustering algorithms. We
use clustering internal validation criteria to develop an approximation of the
posterior model probability, used for weighting the results from each model.
From a consensus matrix representing a weighted average of the clustering
solutions across models, we apply symmetric simplex matrix factorisation to
calculate final probabilistic cluster allocations. In addition to outperforming
other ensemble clustering methods on simulated data, clusterBMA offers unique
features including probabilistic allocation to averaged clusters, combining
allocation probabilities from 'hard' and 'soft' clustering algorithms, and
measuring model-based uncertainty in averaged cluster allocation. This method
is implemented in an accompanying R package of the same name.
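The core pipeline in the abstract (weighted allocations, consensus matrix, simplex factorisation) can be sketched in a few lines. This is a minimal NumPy illustration with made-up allocation matrices and weights, not the accompanying R package's implementation; in particular, the projected-gradient loop is only a crude stand-in for the symmetric simplex matrix factorisation used in the paper, and the weights here are invented rather than derived from internal validation criteria.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 12, 2

# Toy allocations from two candidate models (illustrative only):
# a 'hard' clustering gives one-hot rows, a 'soft' one gives probabilities.
hard_labels = np.array([0] * 6 + [1] * 6)
A_hard = np.eye(K)[hard_labels]             # n x K one-hot allocations
A_soft = rng.dirichlet(np.ones(K), size=n)  # n x K rows on the simplex

# Weights approximating posterior model probabilities; clusterBMA derives
# these from internal validation criteria, here they are simply made up.
w = np.array([0.7, 0.3])

# Weighted consensus matrix: entry (i, j) is the model-averaged
# probability that observations i and j share a cluster.
C = w[0] * A_hard @ A_hard.T + w[1] * A_soft @ A_soft.T

# Crude stand-in for symmetric simplex matrix factorisation: projected
# gradient on ||C - P P^T||_F^2 with rows of P kept on the simplex,
# yielding probabilistic allocations to the averaged clusters.
P = rng.dirichlet(np.ones(K), size=n)
for _ in range(2000):
    G = 4 * (P @ P.T - C) @ P                # gradient of the Frobenius loss
    P = np.clip(P - 0.005 * G, 1e-12, None)  # keep entries non-negative
    P /= P.sum(axis=1, keepdims=True)        # renormalise rows to the simplex
```

Each row of `P` is then a probability vector over averaged clusters, which is what allows the method to quantify allocation uncertainty rather than report a single hard label.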
Related papers
- Mixture of multilayer stochastic block models for multiview clustering [0.0]
We propose an original method for aggregating multiple clusterings coming from different sources of information.
The identifiability of the model parameters is established and a variational Bayesian EM algorithm is proposed for the estimation of these parameters.
The method is utilized to analyze global food trading networks, leading to structures of interest.
arXiv Detail & Related papers (2024-01-09T17:15:47Z) - Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z) - Time series clustering based on prediction accuracy of global forecasting models [0.0]
A novel method to perform model-based clustering of time series is proposed in this paper.
Unlike most techniques proposed in the literature, the method considers the predictive accuracy as the main element for constructing the clustering partition.
An extensive simulation study shows that our method outperforms several alternative techniques concerning both clustering effectiveness and predictive accuracy.
arXiv Detail & Related papers (2023-04-30T13:12:19Z) - A parallelizable model-based approach for marginal and multivariate clustering [0.0]
This paper develops a clustering method that takes advantage of the sturdiness of model-based clustering.
We tackle this issue by specifying a finite mixture model per margin that allows each margin to have a different number of clusters.
The proposed approach is computationally appealing as well as more tractable for moderate to high dimensions than a 'full' (joint) model-based clustering approach.
arXiv Detail & Related papers (2022-12-07T23:54:41Z) - Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z) - A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication efficient methods for distributed learning in heterogeneous environments.
A one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z) - Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized.
arXiv Detail & Related papers (2022-02-01T19:25:31Z) - Clustering Ensemble Meets Low-rank Tensor Approximation [50.21581880045667]
This paper explores the problem of clustering ensemble, which aims to combine multiple base clusterings to achieve better performance than any individual clustering.
We propose a novel low-rank tensor approximation-based method to solve the problem from a global perspective.
Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods.
arXiv Detail & Related papers (2020-12-16T13:01:37Z) - Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
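The positive semi-definiteness noted in this summary is easy to verify numerically: the PSM is an average of matrices of the form Z Z^T with Z one-hot, each of which is PSD. The sketch below fakes the MCMC output with random labels (a real Bayesian clustering sampler would supply the draws) and checks the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
n, S, K = 10, 200, 3

# Fake MCMC output: S posterior draws of cluster labels for n observations.
draws = rng.integers(0, K, size=(S, n))

# Posterior similarity matrix: fraction of draws in which observations
# i and j are allocated to the same cluster. Each draw contributes
# Z Z^T with Z one-hot, so the PSM is an average of PSD matrices and is
# itself PSD -- hence a valid kernel (Gram) matrix.
PSM = np.zeros((n, n))
for z in draws:
    Z = np.eye(K)[z]
    PSM += Z @ Z.T
PSM /= S

eigvals = np.linalg.eigvalsh(PSM)  # all non-negative up to rounding error
```

The unit diagonal (every observation always co-clusters with itself) and the non-negative spectrum are exactly the properties that let the PSM be plugged in wherever a kernel matrix is expected.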
arXiv Detail & Related papers (2020-09-27T14:16:14Z) - Blocked Clusterwise Regression [0.0]
We generalize previous approaches to discrete unobserved heterogeneity by allowing each unit to have multiple latent variables.
We contribute to the theory of clustering with an over-specified number of clusters and derive new convergence rates for this setting.
arXiv Detail & Related papers (2020-01-29T23:29:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.