Significance-Based Categorical Data Clustering
- URL: http://arxiv.org/abs/2211.03956v1
- Date: Tue, 8 Nov 2022 02:06:31 GMT
- Title: Significance-Based Categorical Data Clustering
- Authors: Lianyu Hu, Mudi Jiang, Yan Liu, Zengyou He
- Abstract summary: We use the likelihood ratio test to derive a test statistic that can serve as a significance-based objective function in categorical data clustering.
A new clustering algorithm is proposed in which the significance-based objective function is optimized via a Monte Carlo search procedure.
- Score: 7.421725101465365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although numerous algorithms have been proposed to solve the categorical data
clustering problem, how to assess the statistical significance of a set of
categorical clusters remains unaddressed. To fill this void, we employ the
likelihood ratio test to derive a test statistic that can serve as a
significance-based objective function in categorical data clustering.
Consequently, a new clustering algorithm is proposed in which the
significance-based objective function is optimized via a Monte Carlo search
procedure. As a by-product, we can further calculate an empirical $p$-value to
assess the statistical significance of a set of clusters and develop an
improved gap statistic for estimating the cluster number. Extensive
experimental studies suggest that our method is able to achieve comparable
performance to state-of-the-art categorical data clustering algorithms.
Moreover, the effectiveness of such a significance-based formulation on
statistical cluster validation and cluster number estimation is demonstrated
through comprehensive empirical results.
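The abstract's core pipeline — a likelihood-ratio test statistic used as a clustering objective, plus a Monte Carlo procedure for an empirical $p$-value — can be illustrated with a small sketch. Note this is a generic G-test-style statistic over per-attribute category frequencies with a label-permutation null, not the paper's exact formulation; the function names and the permutation scheme are assumptions for illustration only.

```python
import numpy as np

def lrt_statistic(X, labels):
    """Generic G-test-style likelihood-ratio statistic: compares each
    cluster's per-attribute category frequencies with the global (pooled)
    frequencies. Larger values mean the clusters deviate more from
    the pooled data, i.e. a "better" categorical clustering."""
    X = np.asarray(X)
    labels = np.asarray(labels)
    stat = 0.0
    for j in range(X.shape[1]):
        cats, global_counts = np.unique(X[:, j], return_counts=True)
        global_p = global_counts / len(X)
        for k in np.unique(labels):
            col = X[labels == k, j]
            for c, p in zip(cats, global_p):
                n_kc = int(np.sum(col == c))
                if n_kc > 0:
                    stat += 2.0 * n_kc * np.log((n_kc / len(col)) / p)
    return stat

def empirical_p_value(X, labels, n_perm=200, seed=0):
    """Monte Carlo significance: shuffle the cluster labels, recompute
    the statistic, and report the fraction of permutations that are at
    least as extreme as the observed value (with a +1 correction)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    observed = lrt_statistic(X, labels)
    exceed = sum(
        lrt_statistic(X, rng.permutation(labels)) >= observed
        for _ in range(n_perm)
    )
    return observed, (1 + exceed) / (1 + n_perm)

# Example: two perfectly separated two-attribute categorical clusters.
X = [["a", "x"]] * 20 + [["b", "y"]] * 20
labels = [0] * 20 + [1] * 20
stat, p = empirical_p_value(X, labels, n_perm=99, seed=1)
```

For well-separated clusters like the example above, the observed statistic sits far above the permutation distribution, so the empirical $p$-value is small; the same machinery could, in principle, be reused inside a gap-statistic-style comparison across candidate cluster numbers.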
Related papers
- Hierarchical and Density-based Causal Clustering [6.082022112101251]
We propose plug-in estimators that are simple and readily implementable using off-the-shelf algorithms.
We go on to study their rate of convergence, and show that the additional cost of causal clustering is essentially the estimation error of the outcome regression functions.
arXiv Detail & Related papers (2024-11-02T14:01:04Z)
- From A-to-Z Review of Clustering Validation Indices [4.08908337437878]
We review and evaluate the performance of internal and external clustering validation indices on the most common clustering algorithms.
We suggest a classification framework for examining the functionality of both internal and external clustering validation measures.
arXiv Detail & Related papers (2024-07-18T13:52:02Z)
- Interpretable Clustering with the Distinguishability Criterion [0.4419843514606336]
We present a global criterion called the Distinguishability criterion to quantify the separability of identified clusters and validate inferred cluster configurations.
We propose a combined loss function-based computational framework that integrates the Distinguishability criterion with many commonly used clustering procedures.
We present these new algorithms as well as the results from comprehensive data analysis based on simulation studies and real data applications.
arXiv Detail & Related papers (2024-04-24T16:38:15Z)
- A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z)
- A testing-based approach to assess the clusterability of categorical data [6.7937877930001775]
TestCat is a testing-based approach to assess the clusterability of categorical data in terms of an analytical $p$-value.
We apply our method to a set of benchmark categorical data sets, showing that TestCat outperforms existing clusterability solutions developed for numeric data.
arXiv Detail & Related papers (2023-07-14T13:50:00Z)
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- Detection and Evaluation of Clusters within Sequential Data [58.720142291102135]
Clustering algorithms for Block Markov Chains possess theoretical optimality guarantees.
In particular, our sequential data is derived from human DNA, written text, animal movement data and financial markets.
It is found that the Block Markov Chain model assumption can indeed produce meaningful insights in exploratory data analyses.
arXiv Detail & Related papers (2022-10-04T15:22:39Z)
- A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication efficient methods for distributed learning in heterogeneous environments.
A one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z)
- Gradient Based Clustering [72.15857783681658]
We propose a general approach for distance based clustering, using the gradient of the cost function that measures clustering quality.
The approach is an iterative two step procedure (alternating between cluster assignment and cluster center updates) and is applicable to a wide range of functions.
arXiv Detail & Related papers (2022-02-01T19:31:15Z)
- A review of systematic selection of clustering algorithms and their evaluation [0.0]
This paper aims to identify a systematic selection logic for clustering algorithms and corresponding validation concepts.
The goal is to enable potential users to choose an algorithm that fits best to their needs and the properties of their underlying data clustering problem.
arXiv Detail & Related papers (2021-06-24T07:01:46Z)
- Scalable Hierarchical Agglomerative Clustering [65.66407726145619]
Existing scalable hierarchical clustering methods sacrifice quality for speed.
We present a scalable, agglomerative method for hierarchical clustering that does not sacrifice quality and scales to billions of data points.
arXiv Detail & Related papers (2020-10-22T15:58:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.