Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks
- URL: http://arxiv.org/abs/2109.00545v1
- Date: Wed, 1 Sep 2021 17:29:11 GMT
- Title: Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks
- Authors: Xudong Shen, Yongkang Wong, Mohan Kankanhalli
- Abstract summary: We study whether fair representation can be used to guarantee fairness for unknown tasks and for multiple fairness notions simultaneously.
We prove that, although fair representation might not guarantee fairness for all prediction tasks, it does guarantee fairness for an important subset of tasks.
- Score: 17.231251035416648
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by scenarios where data is used for diverse prediction tasks, we
study whether fair representation can be used to guarantee fairness for unknown
tasks and for multiple fairness notions simultaneously. We consider seven group
fairness notions that cover the concepts of independence, separation, and
calibration. Against the backdrop of the fairness impossibility results, we
explore approximate fairness. We prove that, although fair representation might
not guarantee fairness for all prediction tasks, it does guarantee fairness for
an important subset of tasks -- the tasks for which the representation is
discriminative. Specifically, all seven group fairness notions are linearly controlled by the fairness and the discriminativeness of the representation. When an incompatibility exists between different fairness notions, a fair and discriminative representation hits the sweet spot that approximately satisfies all of them. Motivated by these theoretical findings, we propose to learn representations that are both fair and discriminative, using a pretext loss for self-supervision and Maximum Mean Discrepancy as a fair regularizer. Experiments on tabular, image, and face datasets show that, with the learned representation, downstream predictions that are unknown at representation-learning time indeed become fairer under all seven group fairness notions, and the fairness guarantees computed from our theoretical results all hold.
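The recipe the abstract proposes can be made concrete with a short sketch. Below is a minimal, hedged PyTorch example, not the authors' released implementation: a self-supervised pretext loss (here, reconstruction) keeps the representation discriminative, while a Maximum Mean Discrepancy (MMD) penalty between the two groups' representations acts as the fair regularizer. The RBF kernel, bandwidth, layer sizes, and the trade-off weight `lambda_fair` are illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch (not the authors' code) of "pretext loss + MMD fair regularizer":
# the pretext (reconstruction) term keeps the representation discriminative,
# the MMD term pulls the two groups' representation distributions together.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_mmd2(x, y, bandwidth=1.0):
    """Biased estimator of squared MMD between samples x and y under an RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class FairEncoder(nn.Module):
    def __init__(self, d_in, d_rep=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_rep))
        self.dec = nn.Sequential(nn.Linear(d_rep, 64), nn.ReLU(), nn.Linear(64, d_in))

    def forward(self, x):
        z = self.enc(x)           # representation handed to unknown downstream tasks
        return z, self.dec(z)     # reconstruction serves as the pretext target

def loss_fn(model, x, s, lambda_fair=1.0):
    """x: features; s: 0/1 sensitive attribute (assumes both groups appear in the batch)."""
    z, x_hat = model(x)
    pretext = F.mse_loss(x_hat, x)              # discriminativeness term
    fairness = rbf_mmd2(z[s == 0], z[s == 1])   # MMD fair regularizer
    return pretext + lambda_fair * fairness
```

A single weight trades off the two terms; per the abstract, both matter, since the downstream fairness guarantees scale with both the fairness and the discriminativeness of the representation.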
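For context on what "fairer under all seven group fairness notions" is measured against, here is a hedged sketch of two representative gaps: one from the independence family (demographic parity) and one from the separation family (an equalized-odds gap). The exact seven notions and their formal definitions are given in the paper; the function names and toy data below are this sketch's own.

```python
# Illustrative only: two common group-fairness gaps for binary downstream predictions.
import numpy as np

def demographic_parity_gap(y_hat, s):
    """Independence: |P(Y_hat=1 | S=0) - P(Y_hat=1 | S=1)|."""
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def equalized_odds_gap(y_hat, y, s):
    """Separation: worst-case gap between group-conditional FPR (y=0) and TPR (y=1)."""
    gaps = []
    for label in (0, 1):
        mask = (y == label)
        gaps.append(abs(y_hat[mask & (s == 0)].mean() - y_hat[mask & (s == 1)].mean()))
    return max(gaps)

# Toy usage with random predictions, labels, and a sensitive attribute.
rng = np.random.default_rng(0)
y_hat, y, s = (rng.integers(0, 2, 1000) for _ in range(3))
print(demographic_parity_gap(y_hat, s), equalized_odds_gap(y_hat, y, s))
```

Gaps of this kind, computed on downstream predictions, are what the paper's theory bounds linearly in terms of the representation's fairness and discriminativeness.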
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package FairDream to detect inequalities and then to correct for them.
Our experiments show that a property of FairDream is to fulfill fairness objectives that are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision-problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- Impossibility results for fair representations [12.483260526189447]
We argue that no representation can guarantee the fairness of classifiers for different tasks trained using it.
More refined notions of fairness, like Odds Equality, cannot be guaranteed by a representation that does not take into account the task-specific labeling rule.
arXiv Detail & Related papers (2021-07-07T21:12:55Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Machine learning fairness notions: Bridging the gap with real-world applications [4.157415305926584]
Fairness has emerged as an important requirement to guarantee that machine learning predictive systems do not discriminate against specific individuals or entire sub-populations.
This paper is a survey that illustrates the subtleties between fairness notions through a large number of examples and scenarios.
arXiv Detail & Related papers (2020-06-30T13:01:06Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)