Fair Clustering: A Causal Perspective
- URL: http://arxiv.org/abs/2312.09061v1
- Date: Thu, 14 Dec 2023 15:58:03 GMT
- Title: Fair Clustering: A Causal Perspective
- Authors: Fritz Bayer, Drago Plecko, Niko Beerenwinkel, Jack Kuipers
- Abstract summary: We show that optimising for non-causal fairness notions can paradoxically induce direct discriminatory effects from a causal standpoint.
We present a clustering approach that incorporates causal fairness metrics to provide a more nuanced approach to fairness in unsupervised learning.
- Score: 5.885238773559016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clustering algorithms may unintentionally propagate or intensify existing
disparities, leading to unfair representations or biased decision-making.
Current fair clustering methods rely on notions of fairness that do not capture
any information on the underlying causal mechanisms. We show that optimising
for non-causal fairness notions can paradoxically induce direct discriminatory
effects from a causal standpoint. We present a clustering approach that
incorporates causal fairness metrics to provide a more nuanced approach to
fairness in unsupervised learning. Our approach enables the specification of
the causal fairness metrics that should be minimised. We demonstrate the
efficacy of our methodology using datasets known to harbour unfair biases.
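As a rough illustration of the idea, the sketch below scores a candidate clustering with a k-means distortion term plus a causal-fairness penalty. The direct-effect proxy used here (a stratified gap in cluster membership rates across protected groups within mediator levels), and all variable names, are illustrative assumptions rather than the paper's exact metric.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: features X, binary protected attribute A, binary mediator M.
n = 500
A = rng.integers(0, 2, n)
M = rng.integers(0, 2, n)
X = rng.normal(size=(n, 4)) + A[:, None] * 0.8  # A leaks into the features

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

def direct_effect_proxy(labels, A, M):
    """Illustrative proxy: average, over mediator strata and clusters,
    of the gap in cluster membership rates between protected groups."""
    gaps = []
    for m in np.unique(M):
        idx = M == m
        for k in np.unique(labels):
            in_k = (labels[idx] == k).astype(float)
            a = A[idx]
            gaps.append(abs(in_k[a == 1].mean() - in_k[a == 0].mean()))
    return float(np.mean(gaps))

def penalized_objective(X, labels, A, M, lam=5.0):
    """k-means distortion plus a weighted causal-fairness penalty."""
    distortion = sum(
        ((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
        for k in np.unique(labels)
    )
    return distortion / len(X) + lam * direct_effect_proxy(labels, A, M)

print("penalized objective:", penalized_objective(X, labels, A, M))
```

In a full method such a score would drive the assignment step itself; here it only evaluates a fixed KMeans solution.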
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
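A toy illustration of the barycenter step: in one dimension, the Wasserstein barycenter's quantile function is the weighted average of the groups' quantile functions, so a simple "repair" maps each group's values onto that barycenter. The quantile-averaging construction is standard, but the details below are our assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# A score feature whose distribution differs across two groups.
scores_g0 = rng.normal(loc=0.0, scale=1.0, size=400)
scores_g1 = rng.normal(loc=1.0, scale=1.5, size=600)

def barycenter_repair(groups):
    """Push each group's values onto the quantile-averaged barycenter.
    In 1-D, the Wasserstein barycenter's quantile function is the
    size-weighted average of the groups' quantile functions."""
    qs = np.linspace(0, 1, 101)
    weights = np.array([len(g) for g in groups], dtype=float)
    weights /= weights.sum()
    bary_q = sum(w * np.quantile(g, qs) for w, g in zip(weights, groups))
    repaired = []
    for g in groups:
        # Rank of each point within its group, mapped to the barycenter.
        ranks = np.searchsorted(np.sort(g), g, side="left") / (len(g) - 1)
        repaired.append(np.interp(ranks, qs, bary_q))
    return repaired

r0, r1 = barycenter_repair([scores_g0, scores_g1])
print("group means before:", scores_g0.mean(), scores_g1.mean())
print("group means after: ", r0.mean(), r1.mean())
```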
arXiv Detail & Related papers (2023-08-22T00:10:23Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Fair Correlation Clustering in Forests [8.810926150873991]
A clustering is said to be fair if each cluster has the same distribution of manifestations of a sensitive attribute as the whole input set.
This is motivated by various applications where the objects to be clustered have sensitive attributes that should not be over- or underrepresented.
We consider restricted graph classes which allow us to characterize the distributions of sensitive attributes for which this form of fairness is tractable.
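This representation notion is easy to check directly; the sketch below computes the largest deviation between any cluster's sensitive-attribute distribution and the global one (our own minimal check, not the paper's algorithm).

```python
import numpy as np

def representation_gap(labels, colors):
    """Max absolute deviation, over clusters and attribute values, between
    the in-cluster attribute fraction and the global attribute fraction."""
    labels, colors = np.asarray(labels), np.asarray(colors)
    global_frac = {c: (colors == c).mean() for c in np.unique(colors)}
    gap = 0.0
    for k in np.unique(labels):
        in_k = colors[labels == k]
        for c, gf in global_frac.items():
            gap = max(gap, abs((in_k == c).mean() - gf))
    return gap

labels = [0, 0, 0, 0, 1, 1, 1, 1]                   # two clusters of size 4
colors = ["r", "r", "b", "b", "r", "r", "b", "b"]   # 50/50 overall
print(representation_gap(labels, colors))            # 0.0 -> perfectly fair
```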
arXiv Detail & Related papers (2023-02-22T11:27:06Z) - Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
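A minimal sketch of that linear-programming idea, using toy utilities and a fairness floor on each agent's expected utility; the specific constraints are illustrative assumptions rather than the paper's axioms.

```python
import numpy as np
from scipy.optimize import linprog

U = np.array([[3.0, 1.0],     # utility of agent i for item j
              [2.0, 2.0]])
n_agents, n_items = U.shape
c = -U.flatten()              # linprog minimizes, so negate utilities

A_ub, b_ub = [], []
# Each item is allocated with total probability at most 1.
for j in range(n_items):
    row = np.zeros(n_agents * n_items)
    row[j::n_items] = 1.0
    A_ub.append(row); b_ub.append(1.0)
# Each agent receives at most one item in expectation.
for i in range(n_agents):
    row = np.zeros(n_agents * n_items)
    row[i * n_items:(i + 1) * n_items] = 1.0
    A_ub.append(row); b_ub.append(1.0)
# Fairness floor: every agent's expected utility is at least 1.5.
for i in range(n_agents):
    row = np.zeros(n_agents * n_items)
    row[i * n_items:(i + 1) * n_items] = -U[i]
    A_ub.append(row); b_ub.append(-1.5)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, 1))
print(res.x.reshape(n_agents, n_items))  # marginal allocation probabilities
```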
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Fair Inference for Discrete Latent Variable Models [12.558187319452657]
Machine learning models, trained on data without due care, often exhibit unfair and discriminatory behavior against certain populations.
We develop a fair variational inference technique for the discrete latent variables, which is accomplished by including a fairness penalty on the variational distribution.
To demonstrate the generality of our approach and its potential for real-world impact, we then develop a special-purpose graphical model for criminal justice risk assessments.
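As a minimal sketch of such a penalty, one option is to penalize the gap between group-averaged variational posteriors over the discrete latent classes; the L1 form below is our assumption, and the model-specific ELBO is left out.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
logits = rng.normal(size=(n, k))   # per-datum variational logits
q = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
A = rng.integers(0, 2, n)          # binary protected attribute

def fairness_penalty(q, A):
    """L1 distance between the average variational posterior over
    the latent classes in each protected group."""
    q_bar0 = q[A == 0].mean(axis=0)
    q_bar1 = q[A == 1].mean(axis=0)
    return float(np.abs(q_bar0 - q_bar1).sum())

print("fairness penalty:", fairness_penalty(q, A))
# In training, this term (scaled by a weight) would be added to the
# negative ELBO of the model-specific variational objective.
```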
arXiv Detail & Related papers (2022-09-15T04:54:21Z) - Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
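A minimal PyTorch sketch of a class-conditioned discriminator along these lines: it receives the hidden representation concatenated with a one-hot of the target class and predicts the protected attribute. Sizes and architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AugmentedDiscriminator(nn.Module):
    def __init__(self, hidden_dim=128, n_classes=4, n_protected=2):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + n_classes, 64),
            nn.ReLU(),
            nn.Linear(64, n_protected),  # predicts the protected attribute
        )

    def forward(self, h, y):
        # Concatenate the representation with a one-hot of the target class.
        y_onehot = torch.nn.functional.one_hot(y, self.n_classes).float()
        return self.net(torch.cat([h, y_onehot], dim=-1))

disc = AugmentedDiscriminator()
h = torch.randn(8, 128)        # hidden states from the main model
y = torch.randint(0, 4, (8,))  # target classes
print(disc(h, y).shape)        # torch.Size([8, 2])
# In adversarial training, the main model is updated to fool this
# discriminator (e.g., via gradient reversal), so that h carries less
# information about the protected attribute.
```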
arXiv Detail & Related papers (2022-03-12T02:22:58Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
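A minimal sketch of the pseudo-attribute idea: cluster the embeddings, treat cluster IDs as surrogate bias attributes, and upweight samples from small clusters. The inverse-size weighting is a simple stand-in for the paper's reweighting rule.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 1, (450, 16)),    # majority "bias" group
                 rng.normal(4, 1, (50, 16))])    # minority group

# Cluster IDs serve as pseudo-attributes for the unknown bias variable.
pseudo = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

counts = np.bincount(pseudo)
weights = 1.0 / counts[pseudo]                   # inverse cluster size
weights *= len(weights) / weights.sum()          # normalize to mean 1

print("cluster sizes:", counts)
print("mean weight per cluster:",
      [weights[pseudo == k].mean() for k in range(2)])
# These weights would then scale the per-sample loss during training.
```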
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
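One plausible group-level check in this spirit compares the average cost of recourse, here the distance a rejected individual must move to cross a linear decision boundary, across protected groups; the cost model is an illustrative stand-in for the paper's criteria.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
A = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2)) + A[:, None] * 1.0
y = (X.sum(axis=1) + rng.normal(0, 0.5, n) > 1.0).astype(int)

clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

rejected = clf.predict(X) == 0
# Minimal L2 movement needed to reach the linear decision boundary.
cost = np.abs(X @ w + b) / np.linalg.norm(w)

for a in (0, 1):
    grp = rejected & (A == a)
    print(f"group {a}: mean recourse cost = {cost[grp].mean():.3f}")
```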
arXiv Detail & Related papers (2020-10-13T16:35:06Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - A Pairwise Fair and Community-preserving Approach to k-Center Clustering [34.386585230600716]
Clustering is a foundational problem in machine learning with numerous applications.
We define two new types of fairness in the clustering setting, pairwise fairness and community preservation.
arXiv Detail & Related papers (2020-07-14T22:32:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.