Group-blind optimal transport to group parity and its constrained
variants
- URL: http://arxiv.org/abs/2310.11407v1
- Date: Tue, 17 Oct 2023 17:14:07 GMT
- Title: Group-blind optimal transport to group parity and its constrained
variants
- Authors: Quan Zhou, Jakub Marecek
- Abstract summary: We design a single group-blind projection map that aligns the feature distributions of both groups in the source data.
We assume that the source data are an unbiased representation of the population.
We present numerical results on synthetic data and real data.
- Score: 7.92637080020358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness holds a pivotal role in the realm of machine learning, particularly
when it comes to addressing groups categorised by sensitive attributes, e.g.,
gender, race. Prevailing algorithms in fair learning predominantly hinge on
access to, or estimates of, these sensitive attributes, at least during
training. We design a single group-blind projection map that aligns the
feature distributions of both groups in the source data, achieving
(demographic) group parity, without requiring values of the protected attribute
for individual samples, either when computing the map or when applying it.
Instead, our approach utilises the feature distributions of the privileged and
unprivileged groups in a broader population and the essential assumption that
the source data are an unbiased representation of the population. We present
numerical results on synthetic data and real data.
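For intuition, the sketch below is a minimal, hypothetical one-dimensional illustration of the idea, not the authors' construction: every source sample is pushed, without consulting its group label, towards the 1-D Wasserstein barycenter of the two population-level group distributions via quantile matching. The function name group_blind_map, its parameters, and the Gaussian toy data are illustrative assumptions.

```python
# Minimal, hypothetical 1-D sketch (an assumption, not the paper's algorithm):
# push every source sample, group-blindly, towards the Wasserstein barycenter
# of the two population-level group distributions via quantile matching.
import numpy as np

def group_blind_map(x_source, x_pop_priv, x_pop_unpriv, weight=0.5, n_q=101):
    """Map 1-D source samples onto the barycenter of the population groups.

    x_source     : source-feature values (group labels unknown and unused)
    x_pop_priv   : population-level samples of the privileged group
    x_pop_unpriv : population-level samples of the unprivileged group
    weight       : barycenter weight placed on the privileged group
    """
    qs = np.linspace(0.0, 1.0, n_q)
    # In 1-D, the barycenter's quantile function is the weighted average of
    # the two group quantile functions.
    q_bar = (weight * np.quantile(x_pop_priv, qs)
             + (1.0 - weight) * np.quantile(x_pop_unpriv, qs))
    # Empirical CDF rank of each source point, then map that rank through the
    # barycenter's quantile function (a monotone transport map).
    ranks = np.searchsorted(np.sort(x_source), x_source, side="right") / len(x_source)
    return np.interp(ranks, qs, q_bar)

# Toy usage: two shifted Gaussians stand in for the population groups.
rng = np.random.default_rng(0)
priv = rng.normal(1.0, 1.0, size=5000)
unpriv = rng.normal(-1.0, 1.0, size=5000)
source = np.concatenate([rng.normal(1.0, 1.0, 200), rng.normal(-1.0, 1.0, 200)])
repaired = group_blind_map(source, priv, unpriv)
```

Restricting the sketch to one dimension keeps it free of any optimal-transport solver, since the barycenter's quantile function is simply the weighted average of the group quantile functions; the paper's multivariate, constrained setting is considerably more involved.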
Related papers
- Dataset Representativeness and Downstream Task Fairness [24.570493924073524]
We demonstrate that there is a natural tension between dataset representativeness and group-fairness of classifiers trained on that dataset.
We also find that over-sampling underrepresented groups can result in classifiers which exhibit greater bias to those groups.
arXiv Detail & Related papers (2024-06-28T18:11:16Z) - A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z) - A Canonical Data Transformation for Achieving Inter- and Within-group Fairness [17.820200610132265]
We introduce a formal definition of within-group fairness that maintains fairness among individuals from within the same group.
We propose a pre-processing framework to meet both inter- and within-group fairness criteria with little compromise in accuracy.
We apply this framework to the COMPAS risk assessment and Law School datasets and compare its performance to two regularization-based methods.
arXiv Detail & Related papers (2023-10-23T17:00:20Z) - Affinity Clustering Framework for Data Debiasing Using Pairwise
Distribution Discrepancy [10.184056098238765]
Group imbalance, resulting from inadequate or unrepresentative data collection methods, is a primary cause of representation bias in datasets.
This paper presents MASC, a data augmentation approach that leverages affinity clustering to balance the representation of non-protected and protected groups of a target dataset.
arXiv Detail & Related papers (2023-06-02T17:18:20Z) - Leveraging Structure for Improved Classification of Grouped Biased Data [8.121462458089143]
We consider semi-supervised binary classification for applications in which data points are naturally grouped.
We derive a semi-supervised algorithm that explicitly leverages the structure to learn an optimal, group-aware classifier that outputs probabilities.
arXiv Detail & Related papers (2022-12-07T15:18:21Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Towards Group Robustness in the presence of Partial Group Labels [61.33713547766866]
Spurious correlations between input samples and the target labels can wrongly direct neural network predictions.
We propose an algorithm that optimizes for the worst-off group assignments from a constraint set.
We show improvements in the minority group's performance while preserving overall aggregate accuracy across groups.
arXiv Detail & Related papers (2022-01-10T22:04:48Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.