Towards Cohesion-Fairness Harmony: Contrastive Regularization in
Individual Fair Graph Clustering
- URL: http://arxiv.org/abs/2402.10756v1
- Date: Fri, 16 Feb 2024 15:25:56 GMT
- Authors: Siamak Ghodsi, Seyed Amjad Seyedi, and Eirini Ntoutsi
- Abstract summary: iFairNMTF is an individual Fairness Nonnegative Matrix Tri-Factorization model with contrastive fairness regularization.
Our model allows for customizable accuracy-fairness trade-offs, thereby enhancing user autonomy.
- Score: 5.255750357176021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional fair graph clustering methods face two primary challenges: i)
they prioritize balanced clusters at the expense of cluster cohesion by
imposing rigid constraints; ii) existing individual- and group-level fairness
methods for graph partitioning mostly rely on eigendecompositions and thus
generally lack interpretability. To address these issues, we propose
iFairNMTF, an individual Fairness Nonnegative Matrix Tri-Factorization model
with contrastive fairness regularization that achieves balanced and cohesive
clusters. By introducing fairness regularization, our model allows for
customizable accuracy-fairness trade-offs, thereby enhancing user autonomy
without compromising the interpretability provided by nonnegative matrix
tri-factorization. Experimental evaluations on real and synthetic datasets
demonstrate the superior flexibility of iFairNMTF in achieving fairness and
clustering performance.
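As a rough illustration of the idea only (not the paper's actual algorithm or update rules), the sketch below factorizes a toy adjacency matrix as A ≈ H W Hᵀ by projected gradient descent and adds a signed-Laplacian fairness penalty built from a hypothetical binary sensitive attribute. The graph, attribute assignment, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric adjacency matrix for a hypothetical 6-node graph.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

n, k = A.shape[0], 2
s = np.array([0, 1, 0, 1, 0, 1])  # hypothetical binary sensitive attribute

# One possible "contrastive" construction (the paper's exact regularizer may
# differ): attract cross-attribute pairs (+1), repel same-attribute pairs (-1),
# and regularize H through the resulting signed Laplacian.
F = np.where(s[:, None] != s[None, :], 1.0, -1.0)
np.fill_diagonal(F, 0.0)
L_f = np.diag(F.sum(axis=1)) - F

H = np.abs(rng.standard_normal((n, k)))  # nonnegative cluster memberships
W = np.abs(rng.standard_normal((k, k)))  # nonnegative cluster interactions
lam, lr = 0.1, 0.01                      # fairness weight, step size

for _ in range(500):
    R = H @ W @ H.T - A                              # reconstruction residual
    grad_H = 2 * (R @ H @ W.T + R.T @ H @ W) + 2 * lam * L_f @ H
    grad_W = 2 * H.T @ R @ H
    H = np.clip(H - lr * grad_H, 0.0, None)          # project onto H >= 0
    W = np.clip(W - lr * grad_W, 0.0, None)          # project onto W >= 0

clusters = H.argmax(axis=1)  # hard cluster assignment per node
```

Increasing `lam` trades reconstruction quality (cohesion) for attribute balance inside clusters, which mirrors the customizable accuracy-fairness trade-off described in the abstract.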
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can generate enhanced representations that are simultaneously fair and accurate.
arXiv Detail & Related papers (2024-10-23T04:43:03Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering in the feature embedding space and identify pseudo-attributes from the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
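A minimal sketch of the cluster-then-reweight idea, assuming a toy two-blob embedding and plain k-means as the clustering step; the data, cluster count, and weighting rule are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature embeddings: one large "majority" blob, one small blob.
X = np.vstack([rng.normal(0.0, 0.5, (90, 2)),
               rng.normal(5.0, 0.5, (10, 2))])

# Minimal k-means: the resulting cluster ids serve as pseudo-attributes.
k = 2
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    centers = np.array([X[labels == c].mean(axis=0)
                        if np.any(labels == c) else centers[c]
                        for c in range(k)])

# Cluster-based reweighting: weight each sample inversely to the size of its
# pseudo-attribute group, so minority patterns are upweighted in training.
counts = np.bincount(labels, minlength=k)
weights = (1.0 / np.maximum(counts, 1))[labels]
weights *= len(X) / weights.sum()  # normalize to mean weight 1
```

The `weights` would then multiply each sample's loss term in whatever training objective follows.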
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - Deep Fair Discriminative Clustering [24.237000220172906]
We study a general notion of group-level fairness for binary and multi-state protected status variables (PSVs).
We propose a refinement learning algorithm to combine the clustering goal with the fairness objective to learn fair clusters adaptively.
Our framework shows promising results for novel clustering tasks including flexible fairness constraints, multi-state PSVs and predictive clustering.
arXiv Detail & Related papers (2021-05-28T23:50:48Z) - Fair Mixup: Fairness via Interpolation [28.508444261249423]
We propose fair mixup, a new data augmentation strategy for imposing the fairness constraint.
We show that fairness can be achieved by regularizing the models on paths of interpolated samples between the groups.
We empirically show that it yields better generalization for both accuracy and fairness measures on benchmarks.
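The interpolation idea can be sketched as follows, assuming a toy logistic model and a finite-difference approximation of how the mean prediction changes along the mixing path between the two groups; the paper's exact regularizer may be defined differently, and all names and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X, w):
    # Simple logistic model, a hypothetical stand-in for any classifier.
    return 1.0 / (1.0 + np.exp(-X @ w))

# Two sensitive groups with shifted feature distributions (toy data).
Xa = rng.normal(0.0, 1.0, (64, 3))
Xb = rng.normal(1.0, 1.0, (64, 3))
w = rng.standard_normal(3)

# Mixup-style penalty: interpolate paired samples between the groups and
# penalize how quickly the mean prediction changes along the path,
# approximated by finite differences over a grid of mixing coefficients t.
ts = np.linspace(0.0, 1.0, 11)
means = np.array([model((1 - t) * Xa + t * Xb, w).mean() for t in ts])
path_penalty = np.abs(np.diff(means) / np.diff(ts)).mean()
```

Adding `path_penalty` (scaled by a trade-off weight) to the task loss would penalize models whose outputs vary sharply between the groups, encouraging smooth, fairer behavior along the interpolation path.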
arXiv Detail & Related papers (2021-03-11T06:57:26Z) - Supercharging Imbalanced Data Learning With Energy-based Contrastive
Representation Transfer [72.5190560787569]
In computer vision, learning from long-tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on: a synthetic dataset, the UCI Adult (Census) dataset and a real-world credit scoring dataset.
arXiv Detail & Related papers (2020-03-11T14:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.