On Disentangled and Locally Fair Representations
- URL: http://arxiv.org/abs/2205.02673v1
- Date: Thu, 5 May 2022 14:26:50 GMT
- Title: On Disentangled and Locally Fair Representations
- Authors: Yaron Gurovich, Sagie Benaim, Lior Wolf
- Abstract summary: We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
- Score: 95.6635227371479
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of performing classification in a manner that is fair
for sensitive groups, such as race and gender. This problem is tackled through
the lens of disentangled and locally fair representations. We learn a locally
fair representation, such that, under the learned representation, the
neighborhood of each sample is balanced in terms of the sensitive attribute.
For instance, when a decision is made to hire an individual, we ensure that the
$K$ most similar hired individuals are racially balanced. Crucially, we ensure
that similar individuals are found based on attributes not correlated to their
race. To this end, we disentangle the embedding space into two representations:
the first is correlated with the sensitive attribute, while the second is not.
We apply our local fairness objective only to the second, uncorrelated,
representation. Through a set of experiments, we demonstrate the necessity of
both disentangled and local fairness for obtaining fair and accurate
representations. We evaluate our method on real-world settings such as
predicting income and re-incarceration rate and demonstrate the advantage of
our method.
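To make the local fairness objective concrete, here is a minimal sketch of a soft-neighborhood version of the K-nearest-neighbor balance penalty; it is not the authors' released implementation, and the tensor names, temperature, and squared-error form are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def local_fairness_loss(z_uncorr, sensitive, temperature=0.1):
    """Soft local-fairness penalty: the (softly weighted) neighborhood of each
    sample in the sensitive-attribute-uncorrelated embedding should contain
    the sensitive groups at roughly the overall base rate.

    z_uncorr  : (N, d) embeddings assumed disentangled from the sensitive attribute
    sensitive : (N,) binary sensitive attribute in {0, 1}
    """
    s = sensitive.float()
    dist = torch.cdist(z_uncorr, z_uncorr)                     # (N, N) pairwise distances
    dist = dist + 1e9 * torch.eye(len(s), device=dist.device)  # exclude self-neighbors
    weights = F.softmax(-dist / temperature, dim=1)            # soft nearest-neighbor weights
    local_rate = weights @ s                                    # group-1 rate in each neighborhood
    overall_rate = s.mean()                                     # balanced target
    return ((local_rate - overall_rate) ** 2).mean()
```

As described in the abstract, such a penalty would be applied only to the representation that is disentangled from (i.e., uncorrelated with) the sensitive attribute; the hard K-nearest-neighbor balance can then be reported at evaluation time.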
Related papers
- Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- A Novel Approach to Fairness in Automated Decision-Making using Affective Normalization [2.0178765779788495]
We propose a method for measuring the affective, socially biased component of a decision, thus enabling its removal.
That is, given a decision-making process, these affective measurements allow the affective bias to be removed from the decision, rendering it fair across a set of categories defined by the method itself.
arXiv Detail & Related papers (2022-05-02T11:48:53Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, a notion that accounts for biases induced by both a node's own sensitive attributes and those of its neighbors.
We generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
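As a hedged sketch of the counterfactual-perturbation idea summarized above (not the paper's actual framework): flip the binary sensitive feature and penalize any resulting change in node predictions. The GNN interface, the sensitive-feature column, and the squared-error penalty are assumptions, and flipping the column for all nodes at once is a simplification of perturbing each node and its neighbors separately.

```python
import torch

def graph_counterfactual_penalty(gnn, x, edge_index, sensitive_col, node_ids):
    """Encourage node predictions to be invariant to flipping the binary
    sensitive feature of the nodes and of their neighbors.

    gnn           : callable mapping (x, edge_index) -> per-node predictions
    x             : (N, F) node features containing a binary sensitive column
    edge_index    : (2, E) edge list
    sensitive_col : index of the sensitive feature in x
    node_ids      : nodes over which the penalty is averaged
    """
    pred = gnn(x, edge_index)

    # Counterfactual graph: flip the sensitive feature everywhere, which
    # perturbs both each node's own attribute and those of its neighbors.
    x_cf = x.clone()
    x_cf[:, sensitive_col] = 1.0 - x_cf[:, sensitive_col]
    pred_cf = gnn(x_cf, edge_index)

    # Predictions should not move under the counterfactual perturbation.
    return (pred[node_ids] - pred_cf[node_ids]).pow(2).mean()
```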
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Latent Space Smoothing for Individually Fair Representations [12.739528232133495]
We introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to capture the set of similar individuals in the generative latent space.
We employ randomized smoothing to provably map similar individuals close together, in turn ensuring that local robustness verification of the downstream application results in end-to-end fairness certification.
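As a rough illustration of this idea (not LASSI's certification procedure, which provides formal guarantees via randomized smoothing), one can Monte-Carlo probe how far the representations of "similar individuals", modeled as Gaussian perturbations of a latent code, spread apart; the decoder/encoder interfaces, sigma, and sample count below are placeholders.

```python
import torch

def latent_similarity_gap(encoder, decoder, z, sigma=0.5, n_samples=100):
    """Monte-Carlo probe: decode Gaussian perturbations of a latent code z
    (a proxy for 'similar individuals' in the generative latent space),
    encode them with the learned representation, and measure how far they
    spread from the representation of the unperturbed individual."""
    with torch.no_grad():
        noise = torch.randn(n_samples, z.shape[-1], device=z.device) * sigma
        similar = decoder(z.unsqueeze(0) + noise)   # hypothetical generator interface
        reps = encoder(similar)                     # learned (fair) representations
        center = encoder(decoder(z.unsqueeze(0)))
        gaps = (reps - center).norm(dim=1)
    return gaps.max().item(), gaps.mean().item()
```

A small maximum gap suggests a downstream classifier will treat these individuals consistently; the paper replaces such an empirical probe with a provable end-to-end certificate.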
arXiv Detail & Related papers (2021-11-26T18:22:42Z)
- Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks [17.231251035416648]
We study whether fair representation can be used to guarantee fairness for unknown tasks and for multiple fairness notions simultaneously.
We prove that, although fair representation might not guarantee fairness for all prediction tasks, it does guarantee fairness for an important subset of tasks.
arXiv Detail & Related papers (2021-09-01T17:29:11Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While group fairness is traditionally defined with respect to specified protected groups, we emphasize that in this setting there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)