Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach
- URL: http://arxiv.org/abs/2109.08549v5
- Date: Mon, 27 Mar 2023 13:33:16 GMT
- Title: Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach
- Authors: Alessandro Fabris, Andrea Esuli, Alejandro Moreo, Fabrizio Sebastiani
- Abstract summary: We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
- Score: 131.20444904674494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithms and models are increasingly deployed to inform decisions about
people, inevitably affecting their lives. As a consequence, those in charge of
developing these models must carefully evaluate their impact on different
groups of people and favour group fairness, that is, ensure that groups
determined by sensitive demographic attributes, such as race or sex, are not
treated unjustly. To achieve this goal, the availability (awareness) of these
demographic attributes to those evaluating the impact of these models is
fundamental. Unfortunately, collecting and storing these attributes is often in
conflict with industry practices and legislation on data minimisation and
privacy. For this reason, it can be hard to measure the group fairness of
trained models, even from within the companies developing them. In this work,
we tackle the problem of measuring group fairness under unawareness of
sensitive attributes, by using techniques from quantification, a supervised
learning task concerned with directly providing group-level prevalence
estimates (rather than individual-level class labels). We show that
quantification approaches are particularly suited to tackle the
fairness-under-unawareness problem, as they are robust to inevitable
distribution shifts while at the same time decoupling the (desirable) objective
of measuring group fairness from the (undesirable) side effect of allowing the
inference of sensitive attributes of individuals. In more detail, we show that
fairness under unawareness can be cast as a quantification problem and solved
with proven methods from the quantification literature. We show that these
methods outperform previous approaches to measure demographic parity in five
experimental protocols, corresponding to important challenges that complicate
the estimation of classifier fairness under unawareness.
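To make the approach concrete, below is a minimal sketch of how a quantification method can estimate a demographic parity gap when the sensitive attribute is unobserved at deployment time. It uses Adjusted Classify & Count (ACC), a standard method from the quantification literature; the auxiliary dataset, synthetic features, and classifiers are illustrative assumptions rather than the authors' experimental setup. An attribute classifier is trained where the sensitive attribute is available, its raw prevalence estimates in each decision bucket are corrected with validation true/false positive rates, and per-group acceptance rates are recovered via Bayes' rule.
```python
# Hedged sketch: estimating a demographic parity gap without observed sensitive
# attributes, via Adjusted Classify & Count (ACC). All data below is synthetic
# and illustrative; it is not the paper's experimental setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def acc_prevalence(aux_clf, X, tpr, fpr):
    """Adjusted Classify & Count: correct the raw positive rate of the
    auxiliary attribute classifier using its validation TPR/FPR."""
    raw = aux_clf.predict(X).mean()              # Classify & Count estimate
    denom = max(tpr - fpr, 1e-6)                 # guard against a degenerate quantifier
    return float(np.clip((raw - fpr) / denom, 0.0, 1.0))

# Auxiliary data: features plus the sensitive attribute (assumed available
# from a separate source where collecting the attribute is permitted).
rng = np.random.default_rng(0)
X_aux = rng.normal(size=(2000, 5))
a_aux = (X_aux[:, 0] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

X_tr, X_val, a_tr, a_val = train_test_split(X_aux, a_aux, test_size=0.5, random_state=0)
aux_clf = LogisticRegression().fit(X_tr, a_tr)

# Misclassification rates of the attribute classifier, estimated on validation data.
pred_val = aux_clf.predict(X_val)
tpr = pred_val[a_val == 1].mean()
fpr = pred_val[a_val == 0].mean()

# Deployment data: features and the audited model's decisions; the attribute is unobserved.
X_dep = rng.normal(size=(3000, 5))
y_hat = (X_dep[:, 1] + 0.3 * X_dep[:, 0] > 0).astype(int)   # stand-in for the audited classifier

# Estimate the prevalence of A=1 inside each decision bucket, then invert
# with Bayes' rule to obtain per-group acceptance rates.
p_pos = y_hat.mean()
prev_in_pos = acc_prevalence(aux_clf, X_dep[y_hat == 1], tpr, fpr)   # P(A=1 | Y_hat=1)
prev_in_neg = acc_prevalence(aux_clf, X_dep[y_hat == 0], tpr, fpr)   # P(A=1 | Y_hat=0)

p_a1 = prev_in_pos * p_pos + prev_in_neg * (1 - p_pos)               # P(A=1)
acc_rate_a1 = prev_in_pos * p_pos / max(p_a1, 1e-6)                  # P(Y_hat=1 | A=1)
acc_rate_a0 = (1 - prev_in_pos) * p_pos / max(1 - p_a1, 1e-6)        # P(Y_hat=1 | A=0)

print("Estimated demographic parity gap:", acc_rate_a1 - acc_rate_a0)
```
Note that the quantifier only reports group-level prevalences within each decision bucket, so no sensitive attribute is ever attached to an individual; this is the decoupling property highlighted in the abstract.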
Related papers
- Properties of fairness measures in the context of varying class imbalance and protected group ratios [15.942660279740727]
We study the general properties of fairness measures for changing class and protected group proportions.
We also measure how the probability of achieving perfect fairness changes for varying class imbalance ratios.
Our results show that measures such as Equal Opportunity and Positive Predictive Parity are more sensitive to changes in class imbalance than Accuracy Equality (illustrative definitions of these measures are sketched after this list).
arXiv Detail & Related papers (2024-11-13T08:18:03Z) - Group Robust Classification Without Any Group Information [5.053622900542495]
This study contends that current bias-unsupervised approaches to group robustness continue to rely on group information to achieve optimal performance.
However, bias labels are still crucial for effective model selection, which restricts the practicality of these methods in real-world scenarios.
We propose a revised methodology for training and validating debiased models in an entirely bias-unsupervised manner.
arXiv Detail & Related papers (2023-10-28T01:29:18Z) - Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only a few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - A statistical approach to detect sensitive features in a group fairness
setting [10.087372021356751]
We propose a preprocessing step for automatically recognizing sensitive features that does not require a trained model to verify unfair results.
Our empirical results support our hypothesis and show that several features considered sensitive in the literature do not necessarily entail disparate (unfair) results.
arXiv Detail & Related papers (2023-05-11T17:30:12Z) - Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of
Demographic Data Collection in the Pursuit of Fairness [0.0]
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these demographic-reliant techniques largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z) - Accurate and Robust Feature Importance Estimation under Distribution
Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Towards Threshold Invariant Fair Classification [10.317169065327546]
This paper introduces the notion of threshold invariant fairness, which enforces equitable performances across different groups independent of the decision threshold.
Experimental results demonstrate that the proposed methodology is effective in alleviating threshold sensitivity in machine learning models designed to achieve fairness.
arXiv Detail & Related papers (2020-06-18T16:49:46Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
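For readers unfamiliar with the fairness measures named in the first entry of this list, the sketch below computes them in their common two-group formulation: Equal Opportunity compares true positive rates, Positive Predictive Parity compares precisions, and Accuracy Equality compares accuracies. The synthetic labels, predictions, and group assignments are illustrative assumptions only and do not reproduce the cited paper's analysis.
```python
# Hedged sketch: common two-group formulations of Equal Opportunity,
# Positive Predictive Parity, and Accuracy Equality gaps. Synthetic data only.
import numpy as np

def group_rates(y_true, y_pred, group, g):
    """Per-group TPR, PPV, and accuracy for group value g."""
    m = group == g
    yt, yp = y_true[m], y_pred[m]
    tpr = yp[yt == 1].mean()        # Equal Opportunity compares this across groups
    ppv = yt[yp == 1].mean()        # Positive Predictive Parity compares this
    acc = (yt == yp).mean()         # Accuracy Equality compares this
    return tpr, ppv, acc

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(rng.random(1000) < 0.8, y_true, 1 - y_true)   # noisy predictions
group = rng.integers(0, 2, size=1000)                           # two demographic groups

tpr0, ppv0, acc0 = group_rates(y_true, y_pred, group, 0)
tpr1, ppv1, acc1 = group_rates(y_true, y_pred, group, 1)
print("Equal Opportunity gap:         ", abs(tpr0 - tpr1))
print("Positive Predictive Parity gap:", abs(ppv0 - ppv1))
print("Accuracy Equality gap:         ", abs(acc0 - acc1))
```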
This list is automatically generated from the titles and abstracts of the papers in this site.