Fairness Improvement with Multiple Protected Attributes: How Far Are We?
- URL: http://arxiv.org/abs/2308.01923v3
- Date: Thu, 4 Apr 2024 16:54:25 GMT
- Title: Fairness Improvement with Multiple Protected Attributes: How Far Are We?
- Authors: Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman
- Abstract summary: This paper conducts an extensive study of fairness improvement regarding multiple protected attributes.
We analyze the effectiveness of 11 state-of-the-art fairness improvement methods.
We find little difference in accuracy loss when considering single and multiple protected attributes.
- Score: 27.86281559193479
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing research mostly improves the fairness of Machine Learning (ML) software regarding a single protected attribute at a time, but this is unrealistic given that many users have multiple protected attributes. This paper conducts an extensive study of fairness improvement regarding multiple protected attributes, covering 11 state-of-the-art fairness improvement methods. We analyze the effectiveness of these methods with different datasets, metrics, and ML models when considering multiple protected attributes. The results reveal that improving fairness for a single protected attribute can largely decrease fairness regarding unconsidered protected attributes. This decrease is observed in up to 88.3% of scenarios (57.5% on average). More surprisingly, we find little difference in accuracy loss when considering single and multiple protected attributes, indicating that accuracy can be maintained in the multiple-attribute paradigm. However, the effect on F1-score when handling two protected attributes is about twice that of a single attribute. This has important implications for future fairness research: reporting only accuracy as the ML performance metric, which is currently common in the literature, is inadequate.
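As a concrete illustration of this kind of evaluation, the sketch below reports accuracy, F1-score, and a fairness metric with respect to single and multiple protected attributes. The synthetic data, the choice of statistical parity difference, and the intersectional subgroup view are assumptions made for the example, not the paper's exact datasets or metric suite.

```python
# A minimal sketch, assuming synthetic data and statistical parity difference
# (SPD) as the fairness metric; the attribute names, data, and metric choice
# are illustrative, not the paper's exact experimental setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
n = 5000
sex = rng.integers(0, 2, n)    # hypothetical protected attribute 1
race = rng.integers(0, 2, n)   # hypothetical protected attribute 2
x = rng.normal(size=(n, 5)) + 0.4 * sex[:, None] + 0.3 * race[:, None]
y = (x.sum(axis=1) + rng.normal(size=n) > 0).astype(int)

pred = LogisticRegression().fit(x, y).predict(x)

def spd(pred, group):
    """Statistical parity difference: |P(pred=1 | g=1) - P(pred=1 | g=0)|."""
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

print("accuracy:", accuracy_score(y, pred))
print("F1-score:", f1_score(y, pred))
print("SPD w.r.t. sex :", spd(pred, sex))
print("SPD w.r.t. race:", spd(pred, race))

# Multiple-attribute view: treat each (sex, race) combination as a subgroup
# and report the worst gap in positive prediction rates across subgroups.
rates = [pred[(sex == s) & (race == r)].mean() for s in (0, 1) for r in (0, 1)]
print("intersectional gap:", max(rates) - min(rates))
```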
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes (downstream tasks).
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features [7.457585597068654]
We develop methods for measuring and reducing violations in a setting with limited attribute labels.
We show our measurement method can bound the true disparity up to 5.5x tighter than previous methods in these applications.
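For intuition, a minimal probability-weighted estimator of a demographic parity gap under probabilistic protected features might look like the sketch below; the paper's contribution is a tighter bounding method, and the probability model here is an assumed input.

```python
# A minimal sketch, assuming the per-record probability of protected-group
# membership is given (e.g., predicted from proxies); this is a generic
# probability-weighted estimator, not the paper's tighter bounding method.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
pred = rng.integers(0, 2, n)      # binary decisions from some audited model
p_group = rng.beta(2, 5, n)       # assumed P(protected group = 1) per record

# Probability-weighted positive rates for each (soft) group.
rate_g1 = np.average(pred, weights=p_group)
rate_g0 = np.average(pred, weights=1.0 - p_group)
print("estimated demographic parity gap:", abs(rate_g1 - rate_g0))
```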
arXiv Detail & Related papers (2023-10-02T22:30:25Z)
- Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models [9.01924639426239]
Model fairness (a.k.a. bias) has become one of the most critical problems in a wide range of AI applications.
We propose a novel Multi-Dimension Fairness framework, namely Muffin, which includes an automatic tool to unite off-the-shelf models to improve the fairness on multiple attributes simultaneously.
arXiv Detail & Related papers (2023-08-26T02:04:10Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
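As a rough illustration of kernel-based fairness measurement, the sketch below computes an HSIC-style dependence score between model outputs and a sensitive attribute. FairCOCCO itself is defined via normalized cross-covariance operators, so the biased HSIC estimator, RBF kernels, and unit bandwidths here are simplifying assumptions.

```python
# An HSIC-style illustration of kernel dependence between model outputs and a
# sensitive attribute; FairCOCCO uses normalized cross-covariance operators,
# so the biased HSIC estimator, RBF kernels, and unit bandwidths below are
# simplifying assumptions.
import numpy as np

def rbf_kernel(x, bandwidth=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def hsic(scores, attr):
    """Biased HSIC estimate: trace(K H L H) / (n - 1)^2 with centering matrix H."""
    n = len(scores)
    K, L = rbf_kernel(scores), rbf_kernel(attr)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(2)
attr = rng.integers(0, 2, 500).astype(float)
scores = 0.6 * attr + rng.normal(size=500)   # outputs correlated with the attribute
print("kernel dependence (higher means less fair):", hsic(scores, attr))
```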
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Multiple Attribute Fairness: Application to Fraud Detection [0.0]
We propose a fairness measure relaxing the equality conditions in the popular equal odds fairness regime for classification.
We design an iterative, model-agnostic, grid-based method that calibrates the outcomes per sensitive attribute value to conform to the measure.
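A simplified version of such grid-based, model-agnostic calibration might search per-group decision thresholds that minimize a gap in true positive rates, as sketched below; the scores, groups, and TPR-gap objective are illustrative assumptions rather than the paper's exact measure or heuristic.

```python
# A simplified sketch of grid-based, model-agnostic calibration: search a grid
# of per-group decision thresholds and keep the pair with the smallest gap in
# true positive rates. The scores, groups, and TPR-gap objective are
# illustrative assumptions, not the paper's exact measure or heuristic.
import numpy as np

def tpr(scores, labels, thresh):
    """True positive rate of thresholded scores on the given subset."""
    pred = (scores >= thresh).astype(int)
    return pred[labels == 1].mean()

def calibrate_thresholds(scores, labels, group, grid=np.linspace(0.1, 0.9, 17)):
    best, best_gap = None, np.inf
    for t0 in grid:
        for t1 in grid:
            gap = abs(tpr(scores[group == 0], labels[group == 0], t0)
                      - tpr(scores[group == 1], labels[group == 1], t1))
            if gap < best_gap:
                best, best_gap = (t0, t1), gap
    return best, best_gap

rng = np.random.default_rng(3)
n = 2000
group = rng.integers(0, 2, n)
labels = rng.integers(0, 2, n)
scores = np.clip(0.5 * labels + 0.15 * group + rng.normal(0, 0.25, n), 0, 1)
print(calibrate_thresholds(scores, labels, group))
```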
arXiv Detail & Related papers (2022-07-28T19:19:45Z)
- Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation [72.92329724600631]
We propose a pseudo-attribute-based algorithm, coined Spread Spurious Attribute, for improving the worst-group accuracy.
Our experiments on various benchmark datasets show that our algorithm consistently outperforms the baseline methods.
We also demonstrate that the proposed SSA can achieve comparable performances to methods using full (100%) spurious attribute supervision.
arXiv Detail & Related papers (2022-04-05T09:08:30Z)
- Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks [0.5801044612920815]
We propose Dikaios, a privacy auditing tool for fairness algorithms for model builders.
We show that our attribute inference attacks with adaptive prediction threshold significantly outperform prior attacks.
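A bare-bones attribute inference baseline, shown below for intuition, trains an auxiliary classifier to predict the protected attribute from the target model's output score plus non-sensitive features and tunes its decision threshold adaptively; this is a generic sketch on synthetic data, not the Dikaios attack itself.

```python
# A bare-bones attribute inference baseline on synthetic data: an auxiliary
# classifier predicts the protected attribute from the target model's output
# score plus non-sensitive features, with an adaptively chosen decision
# threshold. This is a generic sketch, not the Dikaios attack itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(4)
n = 4000
attr = rng.integers(0, 2, n)                        # protected attribute to infer
x = rng.normal(size=(n, 4)) + 0.5 * attr[:, None]   # non-sensitive features
y = (x[:, 0] + rng.normal(size=n) > 0).astype(int)

target = LogisticRegression().fit(x, y)             # the audited model
scores = target.predict_proba(x)[:, 1]

# Attack features: model score concatenated with the non-sensitive features.
feats = np.column_stack([scores, x])
f_tr, f_te, a_tr, a_te = train_test_split(feats, attr, test_size=0.5, random_state=0)
probs = LogisticRegression().fit(f_tr, a_tr).predict_proba(f_te)[:, 1]

# Adaptive threshold: pick the cut-off that maximizes balanced accuracy.
grid = np.linspace(0.1, 0.9, 33)
best_t = max(grid, key=lambda t: balanced_accuracy_score(a_te, (probs >= t).astype(int)))
print("attack balanced accuracy:", balanced_accuracy_score(a_te, (probs >= best_t).astype(int)))
```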
arXiv Detail & Related papers (2022-02-04T17:19:59Z)
- Fairness without Demographics through Adversarially Reweighted Learning [20.803276801890657]
We train an ML model to improve fairness when we do not even know the protected group memberships.
In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues.
Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets.
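The sketch below gives a loose scikit-learn approximation of the adversarial reweighting idea: alternate between fitting the learner with per-example weights and fitting an adversary that locates high-loss regions from non-protected features, then upweight those regions. ARL itself trains both components jointly as neural networks, so the models, weight rule, and data here are simplified assumptions.

```python
# A loose scikit-learn approximation of adversarially reweighted learning:
# alternate between (1) fitting the learner with per-example weights and
# (2) fitting an adversary that predicts where the learner's loss is high from
# non-protected features, then upweighting those regions. ARL itself trains
# both parts jointly as neural networks; the models, weight rule, and data
# here are simplified assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(5)
n = 3000
x = rng.normal(size=(n, 6))                                  # non-protected features only
y = (x[:, 0] + 0.5 * x[:, 1] ** 2 + rng.normal(0, 0.5, n) > 0.5).astype(int)

weights = np.ones(n)
for _ in range(5):
    learner = LogisticRegression().fit(x, y, sample_weight=weights)
    p = np.clip(learner.predict_proba(x)[:, 1], 1e-6, 1 - 1e-6)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))        # per-example log loss
    adv = np.clip(Ridge().fit(x, loss).predict(x), 0, None)  # locate high-loss regions
    weights = 1.0 + adv / (adv.mean() + 1e-8)                # normalized upweighting
print("final mean per-example loss:", loss.mean())
```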
arXiv Detail & Related papers (2020-06-23T16:06:52Z)