Decorrelated Clustering with Data Selection Bias
- URL: http://arxiv.org/abs/2006.15874v2
- Date: Thu, 2 Jul 2020 15:04:28 GMT
- Title: Decorrelated Clustering with Data Selection Bias
- Authors: Xiao Wang, Shaohua Fan, Kun Kuang, Chuan Shi, Jiawei Liu and Bai Wang
- Abstract summary: We propose a novel Decorrelation regularized K-Means algorithm (DCKM) for clustering with data selection bias.
Our DCKM algorithm achieves significant performance gains, indicating the necessity of removing unexpected feature correlations induced by selection bias.
- Score: 55.91842043124102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing clustering algorithms are proposed without considering the
selection bias in data. In many real applications, however, one cannot
guarantee that the data are unbiased. Selection bias may introduce unexpected
correlations between features, and ignoring these correlations degrades the
performance of clustering algorithms. How to remove the unexpected correlations
induced by selection bias is therefore extremely important, yet largely
unexplored, for clustering. In this paper, we propose a novel
Decorrelation regularized K-Means algorithm (DCKM) for clustering with data
selection bias. Specifically, the decorrelation regularizer learns global
sample weights that balance the sample distribution, thereby removing
unexpected correlations among features. The learned weights are then combined
with k-means, so that the reweighted k-means clusters on the inherent data
distribution, free of the influence of unexpected correlations. Moreover, we
derive updating rules to effectively infer the parameters of DCKM. Extensive
experimental results on real-world datasets demonstrate that our DCKM
algorithm achieves significant performance gains, indicating the necessity of
removing unexpected feature correlations induced by selection bias when
clustering.
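Since the abstract gives no implementation details beyond the high-level idea, the following is a minimal sketch of decorrelation-style reweighting combined with weighted k-means, assuming a crude gradient step that shrinks weighted off-diagonal feature covariances; all function names and hyperparameters are illustrative, not the paper's actual update rules.

```python
import numpy as np

def decorrelation_weights(X, n_iter=200, lr=0.01):
    """Sketch of the balancing idea: learn nonnegative sample weights that
    shrink the off-diagonal entries of the weighted feature covariance.
    This is an illustration, not the paper's exact regularizer."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)                            # crude (unweighted) centering
    w = np.ones(n)
    for _ in range(n_iter):
        p = w / w.sum()
        cov = (Xc * p[:, None]).T @ Xc                 # weighted covariance
        off = cov - np.diag(np.diag(cov))              # off-diagonal part
        # gradient of the sum of squared off-diagonal covariances w.r.t. w
        grad = 2.0 / w.sum() * (np.einsum('ij,ni,nj->n', off, Xc, Xc)
                                - (off ** 2).sum())
        w = np.clip(w - lr * grad, 1e-3, None)         # keep weights positive
    return w / w.mean()

def weighted_kmeans(X, w, k, n_iter=50, seed=0):
    """k-means whose centroid updates are weighted by the sample weights."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = np.average(X[mask], axis=0, weights=w[mask])
    return labels, centers

# toy usage on synthetic data
X = np.random.default_rng(1).normal(size=(300, 5))
labels, centers = weighted_kmeans(X, decorrelation_weights(X), k=3)
```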
Related papers
- Towards Robust Text Classification: Mitigating Spurious Correlations with Causal Learning [2.7813683000222653]
We propose the Causally Calibrated Robust (CCR) approach to reduce models' reliance on spurious correlations.
CCR integrates a causal feature selection method based on counterfactual reasoning, along with an inverse propensity weighting (IPW) loss function.
We show that CCR achieves state-of-the-art performance among methods that do not use group labels, and in some cases it can compete with models that utilize group labels.
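The summary mentions an inverse propensity weighting (IPW) loss; the sketch below shows a generic IPW-weighted cross-entropy, with the propensity clipping and weighting scheme as illustrative assumptions rather than details taken from the CCR paper.

```python
import numpy as np

def ipw_weights(propensity, group):
    """Inverse propensity weights: over-represented samples (high estimated
    propensity of belonging to their group) are down-weighted."""
    p = np.clip(propensity, 1e-3, 1 - 1e-3)   # avoid exploding weights
    return np.where(group == 1, 1.0 / p, 1.0 / (1.0 - p))

def ipw_cross_entropy(logits, labels, weights):
    """Per-sample cross-entropy reweighted by the IPW weights."""
    z = logits - logits.max(axis=1, keepdims=True)           # stable softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -logp[np.arange(len(labels)), labels]
    return np.mean(weights * nll)
```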
arXiv Detail & Related papers (2024-11-01T21:29:07Z) - Cluster Metric Sensitivity to Irrelevant Features [0.0]
We show how different types of irrelevant variables can affect the outcome of $k$-means clustering in different ways.
Our results show that the Silhouette Coefficient and the Davies-Bouldin score are the most sensitive to irrelevant added features.
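A small experiment in that spirit is easy to reproduce with scikit-learn; the blob data and noise dimensions below are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=500, centers=4, n_features=4, random_state=0)

for n_noise in (0, 2, 8):
    # append irrelevant pure-noise features to the informative ones
    Xn = np.hstack([X, rng.normal(size=(len(X), n_noise))]) if n_noise else X
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xn)
    print(n_noise,
          round(silhouette_score(Xn, labels), 3),
          round(davies_bouldin_score(Xn, labels), 3))
```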
arXiv Detail & Related papers (2024-02-19T10:02:00Z) - Sanitized Clustering against Confounding Bias [38.928080236294775]
This paper presents a new clustering framework named Sanitized Clustering Against confounding Bias (SCAB).
SCAB removes the confounding factor in the semantic latent space of complex data through a non-linear dependence measure.
Experiments on complex datasets demonstrate that our SCAB achieves a significant gain in clustering performance.
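The summary only says that SCAB uses a non-linear dependence measure; HSIC is one common such measure and is assumed here purely for illustration, as a penalty that could be driven toward zero between latent codes and the confounding factor.

```python
import numpy as np

def rbf_gram(A, sigma=1.0):
    """RBF kernel Gram matrix of the rows of A."""
    sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic(Z, C, sigma=1.0):
    """Biased HSIC estimate between latent codes Z (n x d_z) and a
    confounder C (n x d_c); minimizing it discourages dependence."""
    n = len(Z)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = rbf_gram(Z, sigma), rbf_gram(C, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```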
arXiv Detail & Related papers (2023-11-02T14:10:14Z) - Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our CIE approach not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z) - Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
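The summary describes generating synthetic samples by mixing minority and majority examples; the sketch below shows one plausible mixing step, with the convex-combination rule and the minority-biased mixing coefficient being assumptions made for illustration.

```python
import numpy as np

def mix_minority_majority(X_min, X_maj, n_new, alpha=0.75, seed=0):
    """Create synthetic minority samples as convex combinations of a
    minority sample and a majority sample, biased toward the minority
    side (alpha > 0.5). The mixing rule is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_maj), size=n_new)
    lam = rng.uniform(alpha, 1.0, size=(n_new, 1))   # weight on the minority sample
    return lam * X_min[i] + (1 - lam) * X_maj[j]
```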
arXiv Detail & Related papers (2023-08-28T18:48:34Z) - CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with
Clustered Aggregation and Knowledge DIStilled Regularization [3.3711670942444014]
Federated learning enables edge devices to train a global model collaboratively without exposing their data.
We tackle a new type of Non-IID data, called cluster-skewed non-IID, discovered in actual data sets.
We propose an aggregation scheme that guarantees equality between clusters.
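One natural reading of "an aggregation scheme that guarantees equality between clusters" is to average client updates within each cluster and then give every cluster equal weight; the sketch below implements that reading, not necessarily CADIS's exact rule.

```python
import numpy as np

def cluster_equal_aggregate(client_updates, cluster_ids):
    """Average updates within each client cluster, then average the
    per-cluster means with equal weight, so large clusters cannot
    dominate the global model."""
    client_updates = np.asarray(client_updates)
    cluster_ids = np.asarray(cluster_ids)
    cluster_means = [client_updates[cluster_ids == c].mean(axis=0)
                     for c in np.unique(cluster_ids)]
    return np.mean(cluster_means, axis=0)
```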
arXiv Detail & Related papers (2023-02-21T02:53:37Z) - Data thinning for convolution-closed distributions [2.299914829977005]
We propose data thinning, an approach for splitting an observation into two or more independent parts that sum to the original observation.
We show that data thinning can be used to validate the results of unsupervised learning approaches.
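For convolution-closed families the thinning step is often a simple conditional draw; in the Poisson case, for instance, it reduces to binomial sampling, as in the sketch below (the 0.5 split is an arbitrary choice).

```python
import numpy as np

def poisson_thin(x, eps=0.5, seed=0):
    """Split Poisson counts x into two independent parts that sum to x:
    x1 | x ~ Binomial(x, eps) and x2 = x - x1. If x ~ Poisson(lam), then
    x1 ~ Poisson(eps * lam), x2 ~ Poisson((1 - eps) * lam), independently."""
    rng = np.random.default_rng(seed)
    x1 = rng.binomial(x, eps)
    return x1, x - x1

x = np.random.default_rng(1).poisson(lam=10.0, size=1000)
train, test = poisson_thin(x, eps=0.5)   # e.g. fit clusters on `train`, validate on `test`
```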
arXiv Detail & Related papers (2023-01-18T02:47:41Z) - Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence
Embedding [51.48582649050054]
We propose a representation normalization method which aims at disentangling the correlations between features of encoded sentences.
We also propose Kernel-Whitening, a Nystrom kernel approximation method to achieve more thorough debiasing on nonlinear spurious correlations.
Experiments show that Kernel-Whitening significantly improves the performance of BERT on out-of-distribution datasets while maintaining in-distribution accuracy.
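As a simplified stand-in for Kernel-Whitening, the sketch below applies plain linear (ZCA) whitening to sentence embeddings; the kernelized, Nystrom-approximated version described in the paper is omitted.

```python
import numpy as np

def whiten(E, eps=1e-6):
    """Linear whitening of sentence embeddings E (n x d): center, then
    rotate and rescale so the features are uncorrelated with unit variance.
    Kernel-Whitening applies the analogous idea in a kernel feature space;
    this is only the linear case."""
    mu = E.mean(axis=0, keepdims=True)
    cov = np.cov(E - mu, rowvar=False)
    U, s, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T   # ZCA whitening matrix
    return (E - mu) @ W
```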
arXiv Detail & Related papers (2022-10-14T05:56:38Z) - Debiased Graph Neural Networks with Agnostic Label Selection Bias [59.61301255860836]
Most existing Graph Neural Networks (GNNs) are proposed without considering the selection bias in data.
We propose a novel Debiased Graph Neural Networks (DGNN) with a differentiated decorrelation regularizer.
Our proposed model outperforms state-of-the-art methods, and DGNN is a flexible framework for enhancing existing GNNs.
arXiv Detail & Related papers (2022-01-19T16:50:29Z) - Learning Bias-Invariant Representation by Cross-Sample Mutual
Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
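The summary refers to a cross-sample neural mutual information estimator; the sketch below shows the Donsker-Varadhan lower bound that MINE-style estimators maximize, with the critic left as an arbitrary callable rather than a trained network.

```python
import numpy as np

def dv_mi_lower_bound(critic, A, B, seed=0):
    """Donsker-Varadhan lower bound on I(A; B) for a given critic f(a, b):
    E_joint[f] - log E_marginal[exp(f)], where marginal pairs are formed by
    shuffling B across samples. In MINE-style estimators the critic is a
    small neural network trained to maximize this bound; here it is any
    callable, purely for illustration."""
    rng = np.random.default_rng(seed)
    joint = critic(A, B)                            # scores on paired samples
    marg = critic(A, B[rng.permutation(len(B))])    # scores on shuffled pairs
    return joint.mean() - np.log(np.exp(marg).mean())

# toy usage with a fixed bilinear critic
A = np.random.default_rng(1).normal(size=(256, 4))
B = A + 0.1 * np.random.default_rng(2).normal(size=(256, 4))
print(dv_mi_lower_bound(lambda a, b: (a * b).sum(axis=1), A, B))
```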
arXiv Detail & Related papers (2021-08-11T21:17:02Z)