Differential Adjusted Parity for Learning Fair Representations
- URL: http://arxiv.org/abs/2502.09765v1
- Date: Thu, 13 Feb 2025 20:48:26 GMT
- Title: Differential Adjusted Parity for Learning Fair Representations
- Authors: Bucher Sahyouni, Matthew Vowels, Liqun Chen, Simon Hadfield
- Abstract summary: Development of fair and unbiased machine learning models remains an ongoing objective for researchers in the field of artificial intelligence.
We introduce the Differential Adjusted Parity (DAP) loss to produce unbiased informative representations.
- Score: 14.267727134658292
- Abstract: The development of fair and unbiased machine learning models remains an ongoing objective for researchers in the field of artificial intelligence. We introduce the Differential Adjusted Parity (DAP) loss to produce unbiased informative representations. It utilises a differentiable variant of the adjusted parity metric to create a unified objective function. By combining downstream task classification accuracy and its inconsistency across sensitive feature domains, it provides a single tool to increase performance and mitigate bias. A key element in this approach is the use of soft balanced accuracies. In contrast to previous non-adversarial approaches, DAP does not suffer a degeneracy where the metric is satisfied by performing equally poorly across all sensitive domains. It outperforms several adversarial models on downstream task accuracy and fairness in our analysis. Specifically, it improves demographic parity, equalized odds and sensitive feature accuracy by as much as 22.5%, 44.1% and 40.1%, respectively, when compared to the best performing adversarial approaches on these metrics. Overall, the DAP loss and its associated metric can play a significant role in creating more fair machine learning models.
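For concreteness, below is a minimal PyTorch sketch of a DAP-style objective. It assumes the loss combines the mean soft balanced accuracy across sensitive-feature domains with a penalty on its spread between them; the function names, the `alpha` weight, and the exact combination are illustrative assumptions, not the paper's formulation.

```python
import torch

def soft_balanced_accuracy(probs, labels):
    # probs: (N, C) softmax outputs; labels: (N,) integer class labels.
    # Soft balanced accuracy: the mean predicted probability of the true
    # class, computed per class and then averaged over classes.
    per_class = []
    for c in labels.unique():
        mask = labels == c
        per_class.append(probs[mask, c].mean())
    return torch.stack(per_class).mean()

def dap_style_loss(probs, labels, groups, alpha=1.0):
    # Sketch of a DAP-style objective (assumed form): reward mean soft
    # balanced accuracy across sensitive domains, penalise its spread.
    accs = torch.stack([
        soft_balanced_accuracy(probs[groups == g], labels[groups == g])
        for g in groups.unique()
    ])
    return -(accs.mean() - alpha * accs.std())
```

Because the accuracy term rewards good performance in every domain, the degenerate solution of being equally poor across all sensitive domains scores badly, consistent with the abstract's claim.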
Related papers
- Fair Federated Learning under Domain Skew with Local Consistency and Domain Diversity [40.84004643341614]
Federated learning (FL) has emerged as a new paradigm for privacy-preserving collaborative training.
Under domain skew, current FL approaches are biased and face two fairness problems.
arXiv Detail & Related papers (2024-05-26T14:29:10Z)
- Understanding the Detrimental Class-level Effects of Data Augmentation [63.1733767714073]
Achieving optimal average accuracy comes at the cost of significantly hurting individual class accuracy by as much as 20% on ImageNet.
We present a framework for understanding how DA interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
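As a rough illustration of a class-conditional strategy (not the authors' exact policy), one can reserve the strong augmentation pipeline for classes that are not hurt by it; `HURT_CLASSES` below is a hypothetical set identified beforehand.

```python
from torchvision import transforms

# Hypothetical class indices found to be hurt by strong augmentation.
HURT_CLASSES = {3, 17, 42}

strong_aug = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
])
weak_aug = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
])

def class_conditional_transform(img, label):
    # Soften augmentation only for the negatively affected classes.
    return weak_aug(img) if label in HURT_CLASSES else strong_aug(img)
```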
arXiv Detail & Related papers (2023-12-07T18:37:43Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection [8.841221697099687]
We introduce a differentiable measure that enables direct optimization of group fairness in model training.
We evaluate our methods on the specific task of hate speech detection.
Empirical results across convolutional, sequential, and transformer-based neural architectures show superior accuracy vs. fairness trade-offs relative to prior work.
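The summary does not spell the measure out; a common way to make a group-fairness gap directly optimizable is to compare mean predicted probabilities instead of hard decisions. The sketch below (an illustration, not the paper's measure) penalises the squared soft demographic-parity gap.

```python
import torch

def soft_demographic_parity_gap(probs_pos, groups):
    # probs_pos: (N,) predicted probability of the positive class.
    # groups:    (N,) binary sensitive attribute.
    # Differentiable surrogate: squared gap between the groups' mean
    # predicted positive rates.
    rate_a = probs_pos[groups == 0].mean()
    rate_b = probs_pos[groups == 1].mean()
    return (rate_a - rate_b) ** 2

# total_loss = task_loss + lam * soft_demographic_parity_gap(p_pos, s)
```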
arXiv Detail & Related papers (2022-04-15T22:11:25Z)
- FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis [17.508632873527525]
We propose a method, FairPrune, that achieves fairness by pruning.
We show that our method can greatly improve fairness while keeping the average accuracy of both groups as high as possible.
arXiv Detail & Related papers (2022-03-04T02:57:34Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
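A minimal NumPy sketch of that idea follows; ATC can use different confidence scores (e.g. maximum softmax probability or negative entropy), and the function names here are illustrative.

```python
import numpy as np

def fit_atc_threshold(source_conf, source_correct):
    # Pick t so that the fraction of labeled source examples with
    # confidence above t matches the observed source accuracy.
    source_accuracy = source_correct.mean()
    return np.quantile(source_conf, 1.0 - source_accuracy)

def atc_estimate(target_conf, t):
    # Predicted target accuracy: fraction of unlabeled target examples
    # whose confidence exceeds the learned threshold.
    return (target_conf > t).mean()
```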
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- DAPPER: Label-Free Performance Estimation after Personalization for Heterogeneous Mobile Sensing [95.18236298557721]
We present DAPPER (Domain AdaPtation Performance EstimatoR) that estimates the adaptation performance in a target domain with unlabeled target data.
Our evaluation on four real-world sensing datasets against six baselines shows that DAPPER outperforms the state-of-the-art baseline by 39.8% in estimation accuracy.
arXiv Detail & Related papers (2021-11-22T08:49:33Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
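Gradient reversal itself is a standard construct: on the forward pass it is the identity, and on the backward pass it negates (and optionally scales) the gradient, so a sensitive-attribute head attached through it pushes the encoder to discard protected information. A typical PyTorch implementation:

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Negate and scale the gradient flowing back into the encoder.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# features = encoder(inputs)
# adv_logits = sensitive_head(grad_reverse(features))
```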
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
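The summary gives the idea but not the architecture; below is a generic sketch of adversarially learned instance weights (dimensions, networks, and the min-max coupling are assumptions for illustration, not the FAIR paper's exact design).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed 16-dimensional inputs, binary task and binary sensitive attribute.
weight_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())
classifier = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
adversary = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

def weighted_losses(x, y, s):
    w = weight_net(x).squeeze(-1)  # per-instance weights
    logits = classifier(x)
    task = (w * F.cross_entropy(logits, y, reduction="none")).mean()
    adv = F.cross_entropy(adversary(logits.softmax(-1)), s)
    # Classifier and weights minimise the weighted task loss while fooling
    # the adversary (task - adv); the adversary separately minimises adv.
    return task, adv
```

The weights `w` are per-instance, which plausibly underlies the interpretability claim: each example's weight indicates how it is re-balanced for fairness.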
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- Adversarial Learning for Counterfactual Fairness [15.302633901803526]
In recent years, fairness has become an important topic in the machine learning research community.
We propose to rely on an adversarial neural learning approach that enables more powerful inference than MMD penalties.
Experiments show significant improvements in terms of counterfactual fairness in both the discrete and continuous settings.
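For reference, the MMD penalty that the paper compares against can be written compactly; the sketch below is a biased RBF-kernel estimate of squared MMD between the representations of two group-conditional samples, with an assumed bandwidth `sigma`.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x and y
    # under an RBF kernel with bandwidth sigma.
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# penalty = rbf_mmd2(reps[s == 0], reps[s == 1])
```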
arXiv Detail & Related papers (2020-08-30T09:06:03Z)
- Precise Tradeoffs in Adversarial Training for Linear Regression [55.764306209771405]
We provide a precise and comprehensive understanding of the role of adversarial training in the context of linear regression with Gaussian features.
We precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary mini-max adversarial training approach.
Our theory for adversarial training algorithms also facilitates rigorous study of how a variety of factors (size and quality of training data, model overparametrization, etc.) affect the tradeoff between these two competing accuracies.
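For concreteness, the mini-max objective such analyses study has the following standard form for linear regression with $\ell_2$-bounded perturbations (notation assumed):

```latex
\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n}
  \max_{\|\delta_i\|_2 \le \varepsilon}
  \bigl( y_i - (x_i + \delta_i)^\top \theta \bigr)^2
```

For a linear model the inner maximisation has the closed form $\bigl(|y_i - x_i^\top \theta| + \varepsilon \|\theta\|_2\bigr)^2$, which is what allows the standard/robust accuracy tradeoff to be characterised precisely.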
arXiv Detail & Related papers (2020-02-24T19:01:47Z)