Impossibility results for fair representations
- URL: http://arxiv.org/abs/2107.03483v1
- Date: Wed, 7 Jul 2021 21:12:55 GMT
- Title: Impossibility results for fair representations
- Authors: Tosca Lechner, Shai Ben-David, Sushant Agarwal and Nivasini Ananthakrishnan
- Abstract summary: We argue that no representation can guarantee the fairness of classifiers for different tasks trained using it.
More refined notions of fairness, like Odds Equality, cannot be guaranteed by a representation that does not take into account the task-specific labeling rule.
- Score: 12.483260526189447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing awareness of fairness in machine learning and the
realization of the central role that data representation has in data processing
tasks, there is an obvious interest in notions of fair data representations.
The goal of such representations is that a model trained on data under the
representation (e.g., a classifier) will be guaranteed to respect some fairness
constraints.
Such representations are useful when they can be fixed for training models on
various different tasks and also when they serve as data filtering between the
raw data (known to the representation designer) and potentially malicious
agents that use the data under the representation to learn predictive models
and make decisions.
A long list of recent research papers strives to provide tools for achieving
these goals.
However, we prove that this is basically a futile effort. Roughly stated, we
prove that no representation can guarantee the fairness of classifiers for
different tasks trained using it; even the basic goal of achieving
label-independent Demographic Parity fairness fails once the marginal data
distribution shifts. More refined notions of fairness, like Odds Equality,
cannot be guaranteed by a representation that does not take into account the
task-specific labeling rule with respect to which such fairness will be
evaluated (even if the marginal data distribution is known a priori).
Furthermore, except for trivial cases, no representation can guarantee Odds
Equality fairness for any two different tasks, while allowing accurate label
predictions for both.
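As an illustration of the Demographic Parity claim, here is a minimal worked example (an illustrative sketch added to this summary, not a construction taken from the paper), using the standard definition: a classifier $h$ satisfies Demographic Parity with respect to a sensitive attribute $A$ under a distribution $\mathcal{D}$ if
\[
\Pr_{\mathcal{D}}[h(X)=1 \mid A=0] \;=\; \Pr_{\mathcal{D}}[h(X)=1 \mid A=1].
\]
Take a two-point domain $\{x_1, x_2\}$, any representation under which $x_1$ and $x_2$ remain distinguishable, and a downstream classifier with $h(x_1)=1$ and $h(x_2)=0$. If both groups place probability $1/2$ on each point, both acceptance rates equal $1/2$ and Demographic Parity holds. If the marginal shifts so that group $A=0$ concentrates on $x_1$ and group $A=1$ concentrates on $x_2$, the acceptance rates become $1$ and $0$, and the same representation and classifier now violate Demographic Parity maximally. Only a representation that collapses all inputs (and therefore forces constant, uninformative predictions) is immune to every such shift; the example illustrates, but does not prove, the general statement established in the paper.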
While some of our conclusions are intuitive, we formulate (and prove) crisp
statements of such impossibilities, often contrasting impressions conveyed by
many recent works on fair representations.
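The Odds Equality claim can be made similarly concrete: whether a fixed representation and downstream classifier have equal true- and false-positive rates across groups depends on the task's labeling rule. The short script below is a toy check added to this summary (the four points, the uniform marginal, and the two labeling rules are assumptions chosen for the illustration, not the paper's construction):

```python
# Toy numeric check (illustrative only, not the paper's construction):
# the same fixed representation and downstream classifier can satisfy
# Odds Equality (equal TPR and FPR across groups) for one labeling rule
# and maximally violate it for another, because the criterion depends on
# the task-specific labels.

# Four equally likely individuals; `group` is the sensitive attribute.
points = ["x1", "x2", "x3", "x4"]
group = {"x1": 0, "x2": 0, "x3": 1, "x4": 1}

# Decisions of a classifier built on a fixed representation
# (here the representation keeps the inputs distinguishable).
accept = {"x1": 1, "x2": 0, "x3": 1, "x4": 0}

# Two labeling rules (tasks) over the same marginal distribution.
task_1 = {"x1": 1, "x2": 0, "x3": 1, "x4": 0}  # agrees with the classifier
task_2 = {"x1": 1, "x2": 0, "x3": 0, "x4": 1}  # flips group 1's labels

def rate(labels, y_value, a_value):
    """P[h(x)=1 | y(x)=y_value, A(x)=a_value] under the uniform marginal."""
    cohort = [x for x in points if labels[x] == y_value and group[x] == a_value]
    return sum(accept[x] for x in cohort) / len(cohort) if cohort else float("nan")

for name, labels in [("task 1", task_1), ("task 2", task_2)]:
    tpr_gap = abs(rate(labels, 1, 0) - rate(labels, 1, 1))
    fpr_gap = abs(rate(labels, 0, 0) - rate(labels, 0, 1))
    print(f"{name}: TPR gap = {tpr_gap:.2f}, FPR gap = {fpr_gap:.2f}")

# Expected output:
#   task 1: TPR gap = 0.00, FPR gap = 0.00  (Odds Equality holds)
#   task 2: TPR gap = 1.00, FPR gap = 1.00  (Odds Equality maximally violated)
```

The script only shows the dependence on the labeling rule; the paper's stronger result is that, outside trivial cases, no single representation can guarantee Odds Equality for two different tasks while also allowing accurate label prediction for both.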
Related papers
- Learning Interpretable Fair Representations [5.660855954377282]
We propose a framework for learning interpretable fair representations during the representation learning process.
In addition to being interpretable, our representations attain slightly higher accuracy and fairer outcomes in a downstream classification task.
arXiv Detail & Related papers (2024-06-24T15:01:05Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for the biases induced by each node's and its neighbors' sensitive attributes.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks [17.231251035416648]
We study whether fair representation can be used to guarantee fairness for unknown tasks and for multiple fairness notions simultaneously.
We prove that, although fair representation might not guarantee fairness for all prediction tasks, it does guarantee fairness for an important subset of tasks.
arXiv Detail & Related papers (2021-09-01T17:29:11Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces result in an improvement of fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Group Fairness by Probabilistic Modeling with Latent Fair Decisions [36.20281545470954]
This paper studies learning fair probability distributions from biased data by explicitly modeling a latent variable that represents a hidden, unbiased label.
We aim to achieve demographic parity by enforcing certain independencies in the learned model.
We also show that group fairness guarantees are meaningful only if the distribution used to provide those guarantees indeed captures the real-world data.
arXiv Detail & Related papers (2020-09-18T19:13:23Z)
- Learning Smooth and Fair Representations [24.305894478899948]
This paper explores the ability to preemptively remove the correlations between features and sensitive attributes by mapping features to a fair representation space.
Empirically, we find that smoothing the representation distribution provides generalization guarantees of fairness certificates.
We do not observe that smoothing the representation distribution degrades the accuracy of downstream tasks compared to state-of-the-art methods in fair representation learning.
arXiv Detail & Related papers (2020-06-15T21:51:50Z)