Interpretable Fair Clustering
- URL: http://arxiv.org/abs/2511.21109v1
- Date: Wed, 26 Nov 2025 06:52:25 GMT
- Title: Interpretable Fair Clustering
- Authors: Mudi Jiang, Jiahui Zhou, Xinying Liu, Zengyou He, Zhikui Chen
- Abstract summary: We propose an interpretable and fair clustering framework that integrates fairness constraints into the structure of decision trees. Our approach constructs interpretable decision trees that partition the data while ensuring fair treatment across protected groups.
- Score: 14.310871812932193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fair clustering has gained increasing attention in recent years, especially in applications involving socially sensitive attributes. However, existing fair clustering methods often lack interpretability, limiting their applicability in high-stakes scenarios where understanding the rationale behind clustering decisions is essential. In this work, we address this limitation by proposing an interpretable and fair clustering framework, which integrates fairness constraints into the structure of decision trees. Our approach constructs interpretable decision trees that partition the data while ensuring fair treatment across protected groups. To further enhance the practicality of our framework, we also introduce a variant that requires no fairness hyperparameter tuning, achieved through post-pruning a tree constructed without fairness constraints. Extensive experiments on both real-world and synthetic datasets demonstrate that our method not only delivers competitive clustering performance and improved fairness, but also offers additional advantages such as interpretability and the ability to handle multiple sensitive attributes. These strengths enable our method to perform robustly under complex fairness constraints, opening new possibilities for equitable and transparent clustering.
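The paper's exact algorithm is not reproduced in this listing, but the core idea it describes, scoring candidate tree splits by clustering cost plus a group-balance penalty, can be sketched roughly as follows. All names here (`fair_split`, `balance_penalty`) and the trade-off parameter `lam` are illustrative assumptions, not the authors' actual API:

```python
import numpy as np

def balance_penalty(groups, mask):
    # Deviation of each child's protected-group ratio from the overall
    # ratio; 0 means both children mirror the global group proportions.
    overall = groups.mean()
    pen = 0.0
    for side in (groups[mask], groups[~mask]):
        if len(side) > 0:
            pen += abs(side.mean() - overall)
    return pen

def fair_split(x, groups, lam=1.0):
    # Choose the threshold on a 1-D feature that minimizes within-child
    # sum of squared errors plus lam times the fairness penalty.
    best_cost, best_t = np.inf, None
    for t in np.unique(x)[:-1]:      # candidate axis-aligned splits
        mask = x <= t
        sse = x[mask].var() * mask.sum() + x[~mask].var() * (~mask).sum()
        cost = sse + lam * balance_penalty(groups, mask)
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t
```

With `lam=0` this reduces to an ordinary variance-minimizing split; increasing `lam` trades clustering cost for splits whose children better reflect the overall group proportions, which is the kind of knob the paper's hyperparameter-free variant avoids by instead post-pruning an unconstrained tree.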
Related papers
- Adversarial Fair Multi-View Clustering [7.650076926241037]
We propose an adversarial fair multi-view clustering (AFMVC) framework that integrates fairness learning into the representation learning process. Our framework achieves superior fairness and competitive clustering performance compared to existing multi-view clustering and fairness-aware clustering methods.
arXiv Detail & Related papers (2025-08-06T04:07:08Z) - GCC: Generative Calibration Clustering [55.44944397168619]
We propose a novel Generative Calibration Clustering (GCC) method to incorporate feature learning and augmentation into the clustering procedure.
First, we develop a discriminative feature alignment mechanism to discover intrinsic relationships between real and generated samples.
Second, we design a self-supervised metric learning scheme to generate more reliable cluster assignments.
arXiv Detail & Related papers (2024-04-14T01:51:11Z) - From Discrete to Continuous: Deep Fair Clustering With Transferable Representations [6.366934969620947]
We propose a flexible deep fair clustering method that can handle discrete and continuous sensitive attributes simultaneously.
Specifically, we design an information bottleneck style objective function to learn fair and clustering-friendly representations.
Unlike existing works, we impose fairness at the representation level, which could guarantee fairness for the transferred task.
arXiv Detail & Related papers (2024-03-24T15:48:29Z) - Optimal Decision Trees For Interpretable Clustering with Constraints (Extended Version) [7.799182201815762]
Constrained clustering is a semi-supervised task that employs a limited amount of labelled data, formulated as constraints.
We present a novel SAT-based framework for interpretable clustering that supports clustering constraints.
We also present new insight into the trade-off between interpretability and satisfaction of such user-provided constraints.
arXiv Detail & Related papers (2023-01-30T05:34:49Z) - Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
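FairCOCCO itself is not reimplemented here, but a closely related kernel dependence measure, HSIC (also built on cross-covariance operators in an RKHS), gives the flavor: a fairness regularizer can penalize statistical dependence between model outputs and a sensitive attribute. The function names and the fixed RBF bandwidth below are assumptions for illustration:

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # RBF kernel Gram matrix for a 1-D sample.
    d = x[:, None] - x[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical Hilbert-Schmidt Independence Criterion:
    # near zero when x and y are statistically independent,
    # larger when they are dependent.
    n = len(x)
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Adding such a term to a training objective pushes predictions toward independence from the sensitive attribute, the same role FairCOCCO plays for multitype and multivariate attributes.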
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective unsupervised debiasing technique.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
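A minimal sketch of one plausible cluster-based reweighting step (the name `cluster_reweight` and the inverse-size scheme are illustrative assumptions, not the paper's code): each sample is weighted inversely to the size of the pseudo-attribute cluster it falls in, so small, potentially bias-conflicting clusters contribute as much to the loss as large ones.

```python
import numpy as np

def cluster_reweight(cluster_ids):
    # Per-sample weights inversely proportional to the size of the
    # pseudo-attribute cluster each sample belongs to, rescaled so
    # the weights sum to the number of samples.
    ids, counts = np.unique(cluster_ids, return_counts=True)
    size = dict(zip(ids, counts))
    w = np.array([1.0 / size[c] for c in cluster_ids])
    return w * len(cluster_ids) / w.sum()
```

Under this scheme every cluster receives the same total weight, which counteracts a model's tendency to fit the majority (bias-aligned) cluster.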
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - Deep Fair Discriminative Clustering [24.237000220172906]
We study a general notion of group-level fairness for binary and multi-state protected status variables (PSVs).
We propose a refinement learning algorithm to combine the clustering goal with the fairness objective to learn fair clusters adaptively.
Our framework shows promising results for novel clustering tasks including flexible fairness constraints, multi-state PSVs and predictive clustering.
arXiv Detail & Related papers (2021-05-28T23:50:48Z) - Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z) - Learning to Generate Fair Clusters from Demonstrations [27.423983748614198]
We show how to identify the intended fairness constraint for a problem based on limited demonstrations from an expert.
We present an algorithm to identify the fairness metric from demonstrations and generate clusters using existing off-the-shelf clustering techniques.
We investigate how to generate interpretable solutions using our approach.
arXiv Detail & Related papers (2021-02-08T03:09:33Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.