On the Fairness ROAD: Robust Optimization for Adversarial Debiasing
- URL: http://arxiv.org/abs/2310.18413v1
- Date: Fri, 27 Oct 2023 18:08:42 GMT
- Title: On the Fairness ROAD: Robust Optimization for Adversarial Debiasing
- Authors: Vincent Grari, Thibault Laugel, Tatsunori Hashimoto, Sylvain Lamprier,
Marcin Detyniecki
- Abstract summary: ROAD is designed to prioritize inputs that are likely to be locally unfair.
It achieves Pareto dominance with respect to local fairness and accuracy for a given global fairness level.
It also enhances fairness generalization under distribution shift.
- Score: 46.495095664915986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of algorithmic fairness, significant attention has been put on
group fairness criteria, such as Demographic Parity and Equalized Odds.
Nevertheless, these objectives, measured as global averages, have raised
concerns about persistent local disparities between sensitive groups. In this
work, we address the problem of local fairness, which ensures that the
predictor is unbiased not only in terms of expectations over the whole
population, but also within any subregion of the feature space, unknown at
training time. To enforce this objective, we introduce ROAD, a novel approach
that leverages the Distributionally Robust Optimization (DRO) framework within
a fair adversarial learning objective, where an adversary tries to infer the
sensitive attribute from the predictions. Using an instance-level re-weighting
strategy, ROAD is designed to prioritize inputs that are likely to be locally
unfair, i.e. where the adversary faces the least difficulty in reconstructing
the sensitive attribute. Numerical experiments demonstrate the effectiveness of
our method: it achieves Pareto dominance with respect to local fairness and
accuracy for a given global fairness level across three standard datasets, and
also enhances fairness generalization under distribution shift.
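To make the mechanism described above concrete, below is a minimal PyTorch sketch of fair adversarial training with DRO-style instance re-weighting, in the spirit of the abstract. It is not the authors' implementation: the network architectures, the fairness weight `lmbda`, the temperature `tau`, and the softmax (Boltzmann-like) re-weighting rule that up-weights examples the adversary reconstructs easily are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative architectures (assumed, not taken from the paper).
predictor = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
adversary = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_f = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(adversary.parameters(), lr=1e-3)

lmbda, tau = 1.0, 0.5  # fairness strength and re-weighting temperature (assumed values)

def train_step(x, y, s):
    """x: features, y: binary labels, s: binary sensitive attribute (float tensors)."""
    # 1) Adversary step: learn to reconstruct s from the predictor's output.
    with torch.no_grad():
        preds = torch.sigmoid(predictor(x))
    adv_loss = F.binary_cross_entropy_with_logits(adversary(preds), s)
    opt_g.zero_grad()
    adv_loss.backward()
    opt_g.step()

    # 2) DRO-style instance weights: examples on which the adversary succeeds
    #    (low per-example loss) are treated as "locally unfair" and up-weighted.
    with torch.no_grad():
        per_ex = F.binary_cross_entropy_with_logits(
            adversary(torch.sigmoid(predictor(x))), s, reduction="none")
        w = torch.softmax(-per_ex / tau, dim=0) * per_ex.numel()  # mean weight ~ 1

    # 3) Predictor step: minimize the task loss while increasing the weighted
    #    adversary loss, i.e. hiding s where it is currently most exposed.
    logits = predictor(x)
    task_loss = F.binary_cross_entropy_with_logits(logits, y)
    adv_term = (w * F.binary_cross_entropy_with_logits(
        adversary(torch.sigmoid(logits)), s, reduction="none")).mean()
    loss = task_loss - lmbda * adv_term
    opt_f.zero_grad()
    loss.backward()
    opt_f.step()
    return task_loss.item(), adv_term.item()

# Toy usage with synthetic data.
x = torch.randn(256, 20)
y = torch.randint(0, 2, (256, 1)).float()
s = torch.randint(0, 2, (256, 1)).float()
for _ in range(5):
    train_step(x, y, s)
```

Here `tau` controls how sharply the weights concentrate on regions where the sensitive attribute is easiest to recover; the actual DRO constraint on the weight distribution in the paper may differ from this simple softmax rule.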
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines in debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Achieving Fairness Across Local and Global Models in Federated Learning [9.902848777262918]
This study introduces EquiFL, a novel approach designed to enhance both local and global fairness in Federated Learning environments.
EquiFL incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness.
We demonstrate that EquiFL not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness.
arXiv Detail & Related papers (2024-06-24T19:42:16Z)
- Algorithmic Fairness in Performative Policy Learning: Escaping the Impossibility of Group Fairness [19.183108418687226]
We develop algorithmic fairness practices that leverage performativity to achieve stronger group fairness guarantees in social classification problems.
A crucial benefit of this approach is that it is possible to resolve the incompatibilities between conflicting group fairness definitions.
arXiv Detail & Related papers (2024-05-30T19:46:47Z)
- Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z)
- Mitigating Group Bias in Federated Learning for Heterogeneous Devices [1.181206257787103]
Federated Learning is emerging as a privacy-preserving model training approach in distributed edge applications.
Our work proposes a group-fair FL framework that minimizes group bias while preserving privacy and incurring no resource-utilization overhead.
arXiv Detail & Related papers (2023-09-13T16:53:48Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Unified Group Fairness on Federated Learning [22.143427873780404]
Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on private data from distributed clients.
Recent research focuses on achieving fairness among clients, but ignores fairness towards different groups formed by sensitive attribute(s) (e.g., gender and/or race).
We propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups, with a theoretical analysis of its convergence rate.
arXiv Detail & Related papers (2021-11-09T08:21:38Z)
- Fairness without the sensitive attribute via Causal Variational Autoencoder [17.675997789073907]
For privacy reasons and due to various regulations such as the GDPR (RGPD) in the EU, many sensitive personal attributes are frequently not collected.
By leveraging recent developments in approximate inference, we propose an approach to fill this gap.
Based on a causal graph, we rely on a new variational auto-encoding framework, named SRCVAE, to infer a proxy for the sensitive information.
arXiv Detail & Related papers (2021-09-10T17:12:52Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose a Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.