GroupMixNorm Layer for Learning Fair Models
- URL: http://arxiv.org/abs/2312.11969v1
- Date: Tue, 19 Dec 2023 09:04:26 GMT
- Title: GroupMixNorm Layer for Learning Fair Models
- Authors: Anubha Pandey, Aditi Rai, Maneet Singh, Deepak Bhatt, Tanmoy Bhowmik
- Abstract summary: This research proposes a novel in-processing based GroupMixNorm layer for mitigating bias from deep learning models.
The proposed method improves upon several fairness metrics with minimal impact on overall accuracy.
- Score: 4.324785083027206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research has identified discriminatory behavior of automated
prediction algorithms towards groups identified by specific protected
attributes (e.g., gender, ethnicity, age group, etc.). When deployed in
real-world scenarios, such techniques may demonstrate biased predictions
resulting in unfair outcomes. Recent literature has proposed algorithms for
mitigating such biased behavior, mostly by adding convex surrogates of fairness
metrics (such as demographic parity or equalized odds) to the loss function;
such surrogates are often not easy to estimate. This research proposes a novel
in-processing based GroupMixNorm layer for mitigating bias from deep learning
models. The GroupMixNorm layer probabilistically mixes group-level feature
statistics of samples across different groups based on the protected attribute.
The proposed method improves upon several fairness metrics with minimal impact
on overall accuracy. Analysis on benchmark tabular and image datasets
demonstrates the efficacy of the proposed method in achieving state-of-the-art
performance. Further, the experimental analysis also suggests the robustness of
the GroupMixNorm layer against new protected attributes during inference and
its utility in eliminating bias from a pre-trained network.
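Below is a minimal, hypothetical sketch (in PyTorch) of how a GroupMixNorm-style layer could probabilistically mix group-level feature statistics across protected groups, as described in the abstract. The class name, the Beta-distributed mixing coefficient, the random partner-group selection, and the use of mini-batch statistics are illustrative assumptions, not details taken from the paper.
```python
import torch
import torch.nn as nn


class GroupMixNormSketch(nn.Module):
    """Illustrative, hypothetical GroupMixNorm-style layer (not the authors' code)."""

    def __init__(self, p_mix: float = 0.5, alpha: float = 0.2, eps: float = 1e-5):
        super().__init__()
        self.p_mix = p_mix   # probability of mixing statistics in a forward pass (assumed)
        self.alpha = alpha   # Beta(alpha, alpha) controls the mixing coefficient (assumed)
        self.eps = eps

    def forward(self, x: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
        # x: (batch, features); group: (batch,) protected-attribute labels.
        # The layer is an identity map at inference time, so the protected
        # attribute is only needed during training.
        if not self.training or torch.rand(1).item() > self.p_mix:
            return x

        groups = group.unique()
        # Per-group mean/std of the mini-batch (assumes >1 sample per group).
        stats = {int(g): (x[group == g].mean(dim=0),
                          x[group == g].std(dim=0) + self.eps)
                 for g in groups}

        # Single Beta-distributed mixing coefficient shared by the batch.
        lam = torch.distributions.Beta(self.alpha, self.alpha).sample().item()

        out = torch.empty_like(x)
        for g in groups:
            mask = group == g
            mu_g, sigma_g = stats[int(g)]
            # Mix with the statistics of a randomly chosen partner group.
            partner = int(groups[torch.randint(len(groups), (1,))].item())
            mu_p, sigma_p = stats[partner]
            mu_mix = lam * mu_g + (1.0 - lam) * mu_p
            sigma_mix = lam * sigma_g + (1.0 - lam) * sigma_p
            # Standardize with own-group statistics, re-scale with mixed ones.
            out[mask] = (x[mask] - mu_g) / sigma_g * sigma_mix + mu_mix
        return out
```
In this sketch, the layer would sit after a hidden layer of a classifier and act only during training; at inference it passes features through unchanged, which is consistent with the abstract's claim of robustness to new protected attributes at test time.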
Related papers
- A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z)
- When Fairness Meets Privacy: Exploring Privacy Threats in Fair Binary Classifiers via Membership Inference Attacks [17.243744418309593]
We propose an efficient MIA method against fairness-enhanced models based on fairness discrepancy results.
We also explore potential strategies for mitigating privacy leakages.
arXiv Detail & Related papers (2023-11-07T10:28:17Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy [10.184056098238765]
Group imbalance, resulting from inadequate or unrepresentative data collection methods, is a primary cause of representation bias in datasets.
This paper presents MASC, a data augmentation approach that leverages affinity clustering to balance the representation of non-protected and protected groups of a target dataset.
arXiv Detail & Related papers (2023-06-02T17:18:20Z)
- Fairness in Visual Clustering: A Novel Transformer Clustering Approach [32.806921406869996]
We first evaluate demographic bias in deep clustering models from the perspective of cluster purity.
A novel loss function is introduced to encourage a purity consistency for all clusters to maintain the fairness aspect.
We present a novel attention mechanism, Cross-attention, to measure correlations between multiple clusters.
arXiv Detail & Related papers (2023-04-14T21:59:32Z)
- fAux: Testing Individual Fairness via Gradient Alignment [2.5329739965085785]
We describe a new approach for testing individual fairness that avoids the requirements of prior methods.
We show that the proposed method effectively identifies discrimination on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-10-10T21:27:20Z)
- Fair mapping [0.0]
We propose a novel pre-processing method based on the transformation of the distribution of protected groups onto a chosen target one.
We build on the recent Wasserstein GAN and AttGAN frameworks to achieve the optimal transport of data points.
Our proposed approach preserves the interpretability of data and can be used without exactly defining the sensitive groups.
arXiv Detail & Related papers (2022-09-01T17:31:27Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- LOGAN: Local Group Bias Detection by Clustering [86.38331353310114]
We argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model.
We propose LOGAN, a new bias detection technique based on clustering.
Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region.
arXiv Detail & Related papers (2020-10-06T16:42:51Z)
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations [98.3066727301239]
We identify two key properties of the training data that drive this behavior.
We show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt.
arXiv Detail & Related papers (2020-05-09T01:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.