A statistical approach to detect sensitive features in a group fairness setting
- URL: http://arxiv.org/abs/2305.06994v1
- Date: Thu, 11 May 2023 17:30:12 GMT
- Title: A statistical approach to detect sensitive features in a group fairness setting
- Authors: Guilherme Dean Pelegrina, Miguel Couceiro, Leonardo Tomazeli Duarte
- Abstract summary: We propose a preprocessing step that automatically recognizes sensitive features without requiring a trained model to verify unfair results.
Our empirical results support our hypothesis and show that several features considered sensitive in the literature do not necessarily entail disparate (unfair) results.
- Score: 10.087372021356751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of machine learning models in decision support systems with high
societal impact has raised concerns about unfair (disparate) results for
different groups of people. When evaluating such unfair decisions, one
generally relies on predefined groups determined by a set of features
considered sensitive. However, this approach is subjective and guarantees
neither that these features are the only ones that should be considered
sensitive nor that they entail unfair (disparate) outcomes.
In this paper, we propose a preprocessing step that automatically recognizes
sensitive features without requiring a trained model to verify unfair results.
Our proposal is based on the Hilbert-Schmidt independence criterion (HSIC),
which measures the statistical dependence between two variables. We
hypothesize that if the dependence between the label vector and a candidate
sensitive feature is high, then the information provided by this feature will
entail disparate performance measures between groups. Our empirical results
support this hypothesis and show that several features considered sensitive
in the literature do not necessarily entail disparate (unfair) results.
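As an illustration of the proposed screening idea, the following is a minimal
sketch of a biased HSIC estimator with Gaussian kernels and the median
bandwidth heuristic; the paper's actual kernel choices, data, and any decision
threshold are not given here, so those details are illustrative assumptions
rather than the authors' implementation.

```python
import numpy as np

def rbf_kernel(v, gamma=None):
    """Gaussian (RBF) kernel matrix for a 1-D sample vector v."""
    v = np.asarray(v, dtype=float)
    d2 = (v[:, None] - v[None, :]) ** 2
    if gamma is None:
        # Median heuristic for the bandwidth (an assumed, common default).
        pos = d2[d2 > 0]
        gamma = 1.0 / np.median(pos) if pos.size else 1.0
    return np.exp(-gamma * d2)

def hsic(x, y):
    """Biased HSIC estimator: tr(K H L H) / (n - 1)^2."""
    n = len(x)
    K, L = rbf_kernel(x), rbf_kernel(y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy screening: score each candidate feature against the label vector.
# Features with high HSIC would be flagged as potentially sensitive,
# following the paper's hypothesis; the data here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # candidate features
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)
scores = {j: hsic(X[:, j], y) for j in range(X.shape[1])}
print(scores)  # feature 0, which drives the labels, should score highest
```

In this sketch only the feature that actually drives the labels receives a
large score; under the paper's hypothesis, such a flagged feature is the one
expected to entail disparate performance measures between the groups it
induces.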
Related papers
- Towards Assumption-free Bias Mitigation [47.5131072745805]
We propose an assumption-free framework that automatically detects bias-related attributes by modeling feature interactions for bias mitigation.
Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors.
arXiv Detail & Related papers (2023-07-09T05:55:25Z)
- Group Fairness with Uncertainty in Sensitive Attributes [34.608332397776245]
A fair predictive model is crucial to mitigate biased decisions against minority groups in high-stakes applications.
We propose a bootstrap-based algorithm that achieves the target level of fairness despite the uncertainty in sensitive attributes.
Our algorithm is applicable to both discrete and continuous sensitive attributes and is effective in real-world classification and regression tasks.
arXiv Detail & Related papers (2023-02-16T04:33:00Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Information Theoretic Measures for Fairness-aware Feature Selection [27.06618125828978]
We develop a framework for fairness-aware feature selection, based on information theoretic measures for the accuracy and discriminatory impacts of features.
Specifically, our goal is to design a fairness utility score for each feature which quantifies how this feature influences accurate as well as nondiscriminatory decisions.
arXiv Detail & Related papers (2021-06-01T20:11:54Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Towards Threshold Invariant Fair Classification [10.317169065327546]
This paper introduces the notion of threshold invariant fairness, which enforces equitable performance across different groups independent of the decision threshold.
Experimental results demonstrate that the proposed methodology is effective in alleviating threshold sensitivity in machine learning models designed to achieve fairness.
arXiv Detail & Related papers (2020-06-18T16:49:46Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)