Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness
- URL: http://arxiv.org/abs/2310.19691v1
- Date: Mon, 30 Oct 2023 16:07:57 GMT
- Title: Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness
- Authors: Jacy Reese Anthis and Victor Veitch
- Abstract summary: We motivate counterfactual fairness by showing that there is not necessarily a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
- Score: 15.83823345486604
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Counterfactual fairness requires that a person would have been classified in
the same way by an AI or other algorithmic system if they had a different
protected class, such as a different race or gender. This is an intuitive
standard, as reflected in the U.S. legal system, but its use is limited because
counterfactuals cannot be directly observed in real-world data. On the other
hand, group fairness metrics (e.g., demographic parity or equalized odds) are
less intuitive but more readily observed. In this paper, we use causal context
to bridge the gaps between counterfactual fairness, robust
prediction, and group fairness. First, we motivate counterfactual fairness by
showing that there is not necessarily a fundamental trade-off between fairness
and accuracy because, under plausible conditions, the counterfactually fair
predictor is in fact accuracy-optimal in an unbiased target distribution.
Second, we develop a correspondence between the causal graph of the
data-generating process and which, if any, group fairness metrics are
equivalent to counterfactual fairness. Third, we show that in three common
fairness contexts – measurement error, selection on label, and
selection on predictors – counterfactual fairness is equivalent
to demographic parity, equalized odds, and calibration, respectively.
Counterfactual fairness can sometimes be tested by measuring relatively simple
group fairness metrics.
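The three group fairness metrics named in the abstract can all be estimated directly from observed data. Below is a minimal sketch (not from the paper; the function names and array inputs are illustrative assumptions) of how each gap might be computed for a binary classifier, given NumPy arrays of labels, predictions, scores, and group membership.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    # Largest difference in positive-prediction rates across groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    # Largest gap in group-wise positive-prediction rates, conditioning
    # on the true label (covers both true- and false-positive rates).
    gaps = []
    for y in (0, 1):
        mask = y_true == y
        rates = [y_pred[mask & (group == g)].mean()
                 for g in np.unique(group) if (mask & (group == g)).any()]
        gaps.append(max(rates) - min(rates) if rates else 0.0)
    return max(gaps)

def calibration_gap(y_true, y_score, group, n_bins=10):
    # Largest per-bin difference in observed outcome rates across groups,
    # conditioning on the predicted score.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    gap = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (y_score >= lo) & (y_score < hi)
        rates = [y_true[in_bin & (group == g)].mean()
                 for g in np.unique(group) if (in_bin & (group == g)).any()]
        if len(rates) > 1:
            gap = max(gap, max(rates) - min(rates))
    return gap
```

Under the paper's three causal contexts, a near-zero value of the corresponding gap (demographic parity under measurement error, equalized odds under selection on label, calibration under selection on predictors) is the observable signature of counterfactual fairness.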
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package, FairDream, to detect inequalities and then correct for them.
Our experiments show that FairDream fulfills fairness objectives that are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- Counterfactual Fairness for Predictions using Generative Adversarial Networks [28.65556399421874]
We develop a novel deep neural network called Generative Counterfactual Fairness Network (GCFN) for making predictions under counterfactual fairness.
Our method is mathematically guaranteed to satisfy counterfactual fairness.
arXiv Detail & Related papers (2023-10-26T17:58:39Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Counterfactual Fairness Is Basically Demographic Parity [0.0]
Making fair decisions is crucial to ethically implementing machine learning algorithms in social settings.
We show that an algorithm which satisfies counterfactual fairness also satisfies demographic parity.
We formalize a concrete fairness goal: to preserve the order of individuals within protected groups.
arXiv Detail & Related papers (2022-08-07T23:38:59Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Accurate Fairness: Improving Individual Fairness without Trading Accuracy [4.0415037006237595]
We propose a new fairness criterion, accurate fairness, to align individual fairness with accuracy.
We prove that accurate fairness also implies typical group fairness criteria over a union of similar sub-populations.
To the best of our knowledge, this is the first time that a Siamese approach is adapted for bias mitigation.
arXiv Detail & Related papers (2022-05-18T03:24:16Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features (a generic sketch of the perturbation idea appears after this list).
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for biases induced by both a node's own sensitive attributes and those of its neighbors.
We generate counterfactuals corresponding to perturbations on each node's and their neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free notion of individual fairness and a cooperative contextual bandit algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
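As referenced in the Measuring Fairness of Text Classifiers entry above, prediction-sensitivity metrics probe how much a model's output moves when input features are perturbed. The following is a minimal, generic sketch of that idea; it is not the paper's exact definition of ACCUMULATED PREDICTION SENSITIVITY, and `predict_fn` and all other names are illustrative assumptions.

```python
import numpy as np

def prediction_sensitivity(predict_fn, X, eps=1e-3):
    # Finite-difference probe: perturb each feature by eps, measure the
    # absolute change in the model's output, accumulate over features,
    # and average over samples. A generic stand-in for accumulated
    # prediction-sensitivity metrics, not the paper's definition.
    base = predict_fn(X)
    per_feature = []
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps
        per_feature.append(np.abs(predict_fn(X_pert) - base) / eps)
    return np.sum(per_feature, axis=0).mean()
```

The summary's claimed links to statistical parity and individual fairness concern the theoretical properties of the accumulated metric itself; this sketch only illustrates the underlying perturbation mechanism.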