A parallelizable model-based approach for marginal and multivariate
clustering
- URL: http://arxiv.org/abs/2212.04009v1
- Date: Wed, 7 Dec 2022 23:54:41 GMT
- Title: A parallelizable model-based approach for marginal and multivariate
clustering
- Authors: Miguel de Carvalho, Gabriel Martos Venturini, Andrej Svetlošák
- Abstract summary: This paper develops a clustering method that takes advantage of the sturdiness of model-based clustering.
We tackle this issue by specifying a finite mixture model per margin that allows each margin to have a different number of clusters.
The proposed approach is computationally appealing as well as more tractable for moderate to high dimensions than a 'full' (joint) model-based clustering approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper develops a clustering method that takes advantage of the
sturdiness of model-based clustering, while attempting to mitigate some of its
pitfalls. First, we note that standard model-based clustering likely leads to
the same number of clusters per margin, which seems a rather artificial
assumption for a variety of datasets. We tackle this issue by specifying a
finite mixture model per margin that allows each margin to have a different
number of clusters, and then cluster the multivariate data using a strategy
game-inspired algorithm which we call Reign-and-Conquer. Second, since the
proposed clustering approach only specifies a model for the margins -- but
leaves the joint unspecified -- it has the advantage of being partially
parallelizable; hence, the proposed approach is computationally appealing as
well as more tractable for moderate to high dimensions than a 'full' (joint)
model-based clustering approach. A battery of numerical experiments on
artificial data indicates overall good performance of the proposed methods in
a variety of scenarios, and real datasets are used to showcase their
application in practice.
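
The per-margin specification lends itself to a simple parallel illustration: fit a univariate finite mixture to each margin independently, possibly with a different number of components per margin, and only afterwards combine the marginal labels into multivariate clusters. The snippet below is a minimal sketch of that idea, assuming Gaussian mixtures, BIC-based selection of the number of components, and scikit-learn/joblib as tooling; it is not the authors' Reign-and-Conquer algorithm, which handles the multivariate combination step.

```python
# Minimal sketch: independent (parallelizable) mixture fits per margin.
# Gaussian mixtures and BIC model selection are assumptions for illustration;
# the paper's Reign-and-Conquer combination step is not reproduced here.
import numpy as np
from joblib import Parallel, delayed
from sklearn.mixture import GaussianMixture


def fit_margin(x, max_components=5, random_state=0):
    """Fit univariate Gaussian mixtures with 1..max_components components
    to a single margin and keep the BIC-optimal fit."""
    x = x.reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, random_state=random_state).fit(x)
            for k in range(1, max_components + 1)]
    best = min(fits, key=lambda gm: gm.bic(x))
    return best.n_components, best.predict(x)


def marginal_clusters(X, max_components=5, n_jobs=-1):
    """Fit one mixture per margin in parallel; each margin may end up
    with a different number of clusters."""
    results = Parallel(n_jobs=n_jobs)(
        delayed(fit_margin)(X[:, j], max_components) for j in range(X.shape[1])
    )
    ks = [k for k, _ in results]
    labels = np.column_stack([lab for _, lab in results])
    return ks, labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: margin 0 has two clusters, margin 1 has three.
    X = np.column_stack([
        np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)]),
        np.concatenate([rng.normal(-5, 1, 200), rng.normal(0, 1, 200),
                        rng.normal(5, 1, 200)]),
    ])
    ks, labels = marginal_clusters(X)
    print("clusters per margin:", ks)          # e.g. [2, 3]
    print("per-margin labels shape:", labels.shape)
```

Because the marginal fits never share state, they can be distributed across cores (or machines) with no communication, which is what makes the marginal step of such an approach computationally appealing in moderate to high dimensions.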
Related papers
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- A Generalized Framework for Predictive Clustering and Optimization [18.06697544912383]
Clustering is a powerful and extensively used data science tool.
In this article, we define a generalized optimization framework for predictive clustering.
We also present a joint optimization strategy that exploits mixed-integer linear programming (MILP) for global optimization.
arXiv Detail & Related papers (2023-05-07T19:56:51Z)
- Time series clustering based on prediction accuracy of global forecasting models [0.0]
A novel method to perform model-based clustering of time series is proposed in this paper.
Unlike most techniques proposed in the literature, the method considers the predictive accuracy as the main element for constructing the clustering partition.
An extensive simulation study shows that our method outperforms several alternative techniques concerning both clustering effectiveness and predictive accuracy.
arXiv Detail & Related papers (2023-04-30T13:12:19Z)
- Variable Clustering via Distributionally Robust Nodewise Regression [7.289979396903827]
We study a multi-factor block model for variable clustering and connect it to the regularized subspace clustering by formulating a distributionally robust version of the nodewise regression.
We derive a convex relaxation, provide guidance on selecting the size of the robust region, and hence the regularization weighting parameter, based on the data, and propose an ADMM algorithm for implementation.
arXiv Detail & Related papers (2022-12-15T16:23:25Z)
- Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z)
- A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication efficient methods for distributed learning in heterogeneous environments.
A one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z)
- clusterBMA: Bayesian model averaging for clustering [1.2021605201770345]
We introduce clusterBMA, a method that enables weighted model averaging across results from unsupervised clustering algorithms.
We use clustering internal validation criteria to develop an approximation of the posterior model probability, used for weighting the results from each model.
In addition to outperforming other ensemble clustering methods on simulated data, clusterBMA offers unique features including probabilistic allocation to averaged clusters.
arXiv Detail & Related papers (2022-09-09T04:55:20Z)
- Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized.
arXiv Detail & Related papers (2022-02-01T19:25:31Z)
- Deep Conditional Gaussian Mixture Model for Constrained Clustering [7.070883800886882]
Constrained clustering can leverage prior information on a growing amount of only partially labeled data.
We propose a novel framework for constrained clustering that is intuitive, interpretable, and can be trained efficiently in the framework of gradient variational inference.
arXiv Detail & Related papers (2021-06-11T13:38:09Z)
- Scalable Hierarchical Agglomerative Clustering [65.66407726145619]
Existing scalable hierarchical clustering methods sacrifice quality for speed.
We present a scalable, agglomerative method for hierarchical clustering that does not sacrifice quality and scales to billions of data points.
arXiv Detail & Related papers (2020-10-22T15:58:35Z)
- Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.