A Sequentially Fair Mechanism for Multiple Sensitive Attributes
- URL: http://arxiv.org/abs/2309.06627v2
- Date: Sun, 14 Jan 2024 20:27:00 GMT
- Title: A Sequentially Fair Mechanism for Multiple Sensitive Attributes
- Authors: François Hu and Philipp Ratz and Arthur Charpentier
- Abstract summary: In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score.
We propose a sequential framework that progressively achieves fairness across a set of sensitive features.
Our approach extends seamlessly to approximate fairness, providing a framework that accommodates the trade-off between risk and unfairness.
- Score: 0.46040036610482665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the standard use case of Algorithmic Fairness, the goal is to eliminate
the relationship between a sensitive variable and a corresponding score.
Throughout recent years, the scientific community has developed a host of
definitions and tools to solve this task, which work well in many practical
applications. However, the applicability and effectiveness of these tools and
definitions become less straightforward in the case of multiple sensitive
attributes. To tackle this issue, we propose a sequential framework that
progressively achieves fairness across a set of sensitive features. We
accomplish this by leveraging multi-marginal Wasserstein barycenters, which
extend the standard notion of Strong Demographic Parity to the case with
multiple sensitive characteristics. This method also provides a closed-form
solution for the optimal, sequentially fair predictor, permitting a clear
interpretation of inter-sensitive feature correlations. Our approach extends
seamlessly to approximate fairness, providing a framework that accommodates the
trade-off between risk and unfairness. This extension permits a targeted
prioritization of fairness improvements for a specific attribute within a set
of sensitive attributes, allowing for case-specific adaptation. A data-driven
estimation procedure for the derived solution is developed, and comprehensive
numerical experiments are conducted on both synthetic and real datasets. Our
empirical findings decisively underscore the practical efficacy of our
post-processing approach in fostering fair decision-making.
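To make the closed-form construction referred to above concrete, the sketch below implements a plain one-dimensional Wasserstein-barycenter repair and applies it sequentially, one sensitive attribute at a time. The function names (`barycenter_repair`, `sequentially_fair`) and the toy data are illustrative assumptions, not the authors' released code, and this exact-parity setting omits the paper's risk-unfairness relaxation.

```python
import numpy as np

def barycenter_repair(scores, group):
    """One-dimensional Wasserstein-barycenter repair: push every group's
    score distribution onto the barycenter of all group distributions,
    computed in closed form as a weighted average of group quantiles."""
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    repaired = np.empty_like(scores)
    labels, counts = np.unique(group, return_counts=True)
    weights = counts / counts.sum()
    for g in labels:
        mask = group == g
        # within-group empirical CDF value (rank) of each score, in (0, 1)
        ranks = (np.argsort(np.argsort(scores[mask])) + 1) / (mask.sum() + 1)
        # barycenter quantile = proportion-weighted mix of every group's quantiles
        repaired[mask] = sum(
            w * np.quantile(scores[group == h], ranks)
            for h, w in zip(labels, weights)
        )
    return repaired

def sequentially_fair(scores, sensitive):
    """Apply the repair one sensitive attribute at a time (columns of `sensitive`)."""
    out = np.asarray(scores, dtype=float)
    sensitive = np.asarray(sensitive)
    for j in range(sensitive.shape[1]):
        out = barycenter_repair(out, sensitive[:, j])
    return out

# Toy usage: raw model scores plus two binary sensitive attributes.
rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, size=(n, 2))
scores = rng.normal(loc=0.3 * sensitive[:, 0] + 0.2 * sensitive[:, 1], scale=1.0)
fair_scores = sequentially_fair(scores, sensitive)
```

After `sequentially_fair`, the empirical score distributions are (approximately) aligned across the groups of each processed attribute, which is the Strong Demographic Parity target the abstract describes; the paper's full method additionally handles the approximate-fairness trade-off.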
Related papers
- Balancing Fairness and Accuracy in Data-Restricted Binary Classification [14.439413517433891]
This paper proposes a framework that models the trade-off between accuracy and fairness under four practical scenarios.
Experiments on three datasets demonstrate the utility of the proposed framework as a tool for quantifying the trade-offs.
arXiv Detail & Related papers (2024-03-12T15:01:27Z) - Fair Supervised Learning with A Simple Random Sampler of Sensitive
Attributes [13.988497790151651]
This work proposes fairness penalties learned by neural networks with a simple random sampler of sensitive attributes for non-discriminatory supervised learning.
We build a computationally efficient group-level in-processing fairness-aware training framework.
Empirical evidence shows that our framework enjoys better utility and fairness measures on popular benchmark data sets than competing methods.
arXiv Detail & Related papers (2023-11-10T04:38:13Z) - dugMatting: Decomposed-Uncertainty-Guided Matting [83.71273621169404]
We propose a decomposed-uncertainty-guided matting algorithm, which explores the explicitly decomposed uncertainties to efficiently and effectively improve the results.
The proposed matting framework relieves the requirement for users to determine the interaction areas by using simple and efficient labeling.
arXiv Detail & Related papers (2023-06-02T11:19:50Z) - Group Fairness with Uncertainty in Sensitive Attributes [34.608332397776245]
A fair predictive model is crucial to mitigate biased decisions against minority groups in high-stakes applications.
We propose a bootstrap-based algorithm that achieves the target level of fairness despite the uncertainty in sensitive attributes.
Our algorithm is applicable to both discrete and continuous sensitive attributes and is effective in real-world classification and regression tasks.
arXiv Detail & Related papers (2023-02-16T04:33:00Z) - Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z) - Fairness without the sensitive attribute via Causal Variational
Autoencoder [17.675997789073907]
For privacy reasons and due to various regulations such as the GDPR (RGPD) in the EU, many personal sensitive attributes are frequently not collected.
By leveraging recent developments for approximate inference, we propose an approach to fill this gap.
Based on a causal graph, we rely on a new variational auto-encoding based framework named SRCVAE to infer a sensitive information proxy.
arXiv Detail & Related papers (2021-09-10T17:12:52Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)