Fairness without Demographics through Adversarially Reweighted Learning
- URL: http://arxiv.org/abs/2006.13114v3
- Date: Tue, 3 Nov 2020 18:02:12 GMT
- Title: Fairness without Demographics through Adversarially Reweighted Learning
- Authors: Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost,
Nithum Thain, Xuezhi Wang, Ed H. Chi
- Abstract summary: We train an ML model to improve fairness when we do not even know the protected group memberships.
In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues.
Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets.
- Score: 20.803276801890657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much of the previous machine learning (ML) fairness literature assumes that
protected features such as race and sex are present in the dataset, and relies
upon them to mitigate fairness concerns. However, in practice factors like
privacy and regulation often preclude the collection of protected features, or
their use for training or inference, severely limiting the applicability of
traditional fairness research. Therefore we ask: How can we train an ML model
to improve fairness when we do not even know the protected group memberships?
In this work we address this problem by proposing Adversarially Reweighted
Learning (ARL). In particular, we hypothesize that non-protected features and
task labels are valuable for identifying fairness issues, and can be used to
co-train an adversarial reweighting approach for improving fairness. Our
results show that ARL improves Rawlsian Max-Min fairness, with notable AUC
improvements for worst-case protected groups in multiple datasets,
outperforming state-of-the-art alternatives.
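The abstract gives only the high-level idea, so the following is a minimal PyTorch sketch of an adversarially reweighted training loop: a learner minimizes an example-weighted loss while an adversary, which sees only non-protected features and task labels, assigns the weights so as to maximize that same loss. The network shapes, optimizer settings, and the weight-normalization scheme are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small feed-forward network used for both learner and adversary."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, x):
        return self.net(x)

def arl_step(learner, adversary, opt_learner, opt_adv, x, y):
    """One alternating min-max update on a batch of non-protected
    features x and binary labels y. The adversary never sees protected
    attributes: its input is (x, y) only."""
    bce = nn.BCEWithLogitsLoss(reduction="none")

    # The adversary assigns a weight to each example from (x, y). The
    # normalization below (shift by 1, scale by batch size) is an
    # illustrative choice, not necessarily the paper's exact scheme.
    adv_in = torch.cat([x, y.unsqueeze(1)], dim=1)
    raw = torch.sigmoid(adversary(adv_in)).squeeze(1)
    weights = 1.0 + x.size(0) * raw / (raw.sum() + 1e-8)

    # Learner step: minimize the weighted loss (weights held fixed).
    per_example = bce(learner(x).squeeze(1), y)
    learner_loss = (weights.detach() * per_example).mean()
    opt_learner.zero_grad()
    learner_loss.backward()
    opt_learner.step()

    # Adversary step: maximize the same weighted loss (losses held
    # fixed), pushing weight toward high-loss regions of (x, y).
    per_example = bce(learner(x).squeeze(1), y).detach()
    adv_loss = -(weights * per_example).mean()
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()
    return learner_loss.item()

# Toy usage on synthetic data: 2 non-protected features, binary label.
torch.manual_seed(0)
x = torch.randn(256, 2)
y = (x[:, 0] + 0.5 * torch.randn(256) > 0).float()
learner, adversary = MLP(2, 1), MLP(3, 1)
opt_learner = torch.optim.Adam(learner.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
for _ in range(200):
    arl_step(learner, adversary, opt_learner, opt_adv, x, y)
```

The point of the min-max structure is that computationally identifiable high-loss regions of the (features, label) space act as a proxy for the unobserved protected groups, which is what lets the approach target Rawlsian Max-Min fairness without demographic data.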
Related papers
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore the potential for achieving fairness without compromising utility when no prior demographic information is provided with the training set.
We propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses; a rough sketch of this variance idea appears after this list.
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Fairness-aware Federated Minimax Optimization with Convergence Guarantee [10.727328530242461]
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models become biased with respect to sensitive attributes such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
arXiv Detail & Related papers (2023-07-10T08:45:58Z)
- Fair Spatial Indexing: A paradigm for Group Spatial Fairness [6.640563753223598]
We propose techniques to mitigate location bias in machine learning.
We focus on spatial group fairness and propose a spatial indexing algorithm that accounts for fairness.
arXiv Detail & Related papers (2023-02-05T05:15:11Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- On the Privacy Risks of Algorithmic Fairness [9.429448411561541]
We study the privacy risks of group fairness through the lens of membership inference attacks.
We show that fairness comes at the cost of privacy, and this cost is not distributed equally.
arXiv Detail & Related papers (2020-11-07T09:15:31Z)
- Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z)
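As a rough illustration of the variance-minimization idea behind VFair, summarized in the list above, here is a naive variance-penalized loss in PyTorch. The additive penalty and the hyperparameter lam are assumptions made for illustration; the actual VFair method minimizes variance inside the optimal set of empirical losses rather than through a simple penalty term.

```python
import torch.nn.functional as F

def variance_penalized_loss(logits, targets, lam=1.0):
    """Mean loss plus a penalty on the variance of per-example losses.

    A naive rendering of 'minimize the variance of training losses':
    lam is an illustrative hyperparameter, and the real VFair method
    constrains variance minimization to the optimal set of empirical
    losses rather than adding it as a penalty like this.
    """
    per_example = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    )
    return per_example.mean() + lam * per_example.var()
```

Driving the spread of per-example losses toward zero pushes the model away from solutions that achieve low average loss by sacrificing a subset of examples, which is the same worst-case concern that motivates Rawlsian Max-Min fairness.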
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.