Fairness Through Counterfactual Utilities
- URL: http://arxiv.org/abs/2108.05315v1
- Date: Wed, 11 Aug 2021 16:51:27 GMT
- Title: Fairness Through Counterfactual Utilities
- Authors: Jack Blandin, Ian Kash
- Abstract summary: Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group fairness definitions such as Demographic Parity and Equal Opportunity
make assumptions about the underlying decision problem that restrict them to
classification problems. Prior work has translated these definitions to other
machine learning environments, such as unsupervised learning and reinforcement
learning, by implementing their closest mathematical equivalent. As a result,
there are numerous bespoke interpretations of these definitions. Instead, we
provide a generalized set of group fairness definitions that unambiguously
extend to all machine learning environments while still retaining their
original fairness notions. We derive two fairness principles that enable such a
generalized framework. First, our framework measures outcomes in terms of
utilities, rather than predictions, and does so for both the decision algorithm
and the individual. Second, our framework considers counterfactual outcomes,
rather than just observed outcomes, thus preventing loopholes where fairness
criteria are satisfied through self-fulfilling prophecies. We provide concrete
examples of how our counterfactual utility fairness framework resolves known
fairness issues in classification, clustering, and reinforcement learning
problems. We also show that many of the bespoke interpretations of Demographic
Parity and Equal Opportunity fit nicely as special cases of our framework.
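To make the two principles concrete, here is a minimal sketch, in notation of our own choosing rather than the paper's, of how Demographic Parity might be lifted from predictions to counterfactual utilities:

```latex
% Classical Demographic Parity for a binary classifier \hat{Y}
% and protected attribute A: equal positive-prediction rates.
\[ P(\hat{Y} = 1 \mid A = a) \;=\; P(\hat{Y} = 1 \mid A = a') \qquad \forall\, a, a' \]

% A utility-based restatement in the spirit of the abstract (notation
% ours, not the paper's): U(\pi, X) is the counterfactual utility that
% individual X would receive under decision policy \pi, observed or not.
\[ \mathbb{E}\!\left[\, U(\pi, X) \mid A = a \,\right]
   \;=\; \mathbb{E}\!\left[\, U(\pi, X) \mid A = a' \,\right] \qquad \forall\, a, a' \]
```

Because U ranges over counterfactual outcomes, a policy cannot satisfy the criterion simply by inducing the outcomes it later measures, which closes the self-fulfilling-prophecy loophole the abstract describes.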
Related papers
- The Flawed Foundations of Fair Machine Learning
We show that there is a trade-off between statistically accurate outcomes and group similar outcomes in any data setting where group disparities exist.
We introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group similar outcomes.
arXiv Detail & Related papers (2023-06-02T10:07:12Z)
- Fair Without Leveling Down: A New Intersectional Fairness Definition
We propose a new definition, $\alpha$-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups (loosely illustrated below).
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
arXiv Detail & Related papers (2023-05-21T16:15:12Z)
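The exact definition lives in the paper above; purely as a loose illustration of combining absolute and relative group performance, one might write something like the following (the blending rule, names, and numbers are our assumptions, not the paper's):

```python
import numpy as np

def alpha_intersectional_score(group_accs, alpha):
    """Illustrative blend of absolute and relative group performance.

    NOT the paper's definition of alpha-Intersectional Fairness; it only
    shows one way an absolute view (worst group's accuracy) and a relative
    view (worst-to-best ratio) can be combined by a weight alpha.
    """
    accs = np.array(list(group_accs.values()))
    absolute = accs.min()               # worst group's absolute performance
    relative = accs.min() / accs.max()  # worst-to-best performance ratio
    return alpha * absolute + (1.0 - alpha) * relative

# Toy usage: two intersectional groups with unequal accuracy.
print(alpha_intersectional_score({"A&X": 0.90, "B&Y": 0.72}, alpha=0.5))
```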
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness in Matching under Uncertainty
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations (a toy sketch follows below).
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
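The paper's actual program is more elaborate; as a hedged toy stand-in, the LP below picks a distribution over three candidate allocations that maximizes total expected utility subject to a per-group utility floor (all numbers and the floor constraint are our own illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Utilities of two groups under three candidate allocations (made up).
U = np.array([[1.0, 0.2, 0.6],
              [0.2, 0.9, 0.5]])
total = U.sum(axis=0)   # total utility delivered by each allocation
floor = 0.4             # minimum expected utility every group must get

res = linprog(
    c=-total,                                    # linprog minimizes, so negate
    A_ub=-U, b_ub=-floor * np.ones(U.shape[0]),  # E[u_g] >= floor per group
    A_eq=np.ones((1, U.shape[1])), b_eq=[1.0],   # p sums to one
    bounds=[(0.0, 1.0)] * U.shape[1],            # p is a probability vector
)
print(res.x)  # fair utility-maximizing distribution over allocations
```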
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives (a toy equalized-odds check is sketched below).
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
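For reference, equalized odds requires group-wise positive-prediction rates to match conditional on the true label; a minimal check of that notion (our own toy code, independent of the paper's method) is:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in P(y_pred = 1 | y_true = y) over y in {0, 1}."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(y_true == y) & (group == g)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)  # 0.0 means equalized odds holds exactly

# Toy usage: each group must appear with both true labels.
print(equalized_odds_gap([1, 1, 0, 0], [1, 0, 0, 1], ["a", "b", "a", "b"]))
```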
- Measuring Fairness of Text Classifiers via Prediction Sensitivity
Accumulated Prediction Sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features (a rough sketch appears below).
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
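The paper's accumulated metric involves a specific weighting we do not reproduce; the finite-difference sketch below only conveys the underlying idea of measuring how much a model's score moves when an input feature is perturbed (the model and numbers are hypothetical):

```python
import numpy as np

def prediction_sensitivity(predict, x, eps=1e-4):
    """Finite-difference sensitivity of a scalar model score to each feature.

    Accumulating these values over a dataset, with extra weight on protected
    features, is the rough idea behind the metric named above.
    """
    x = np.asarray(x, dtype=float)
    base = predict(x)
    sens = np.empty_like(x)
    for i in range(x.size):
        bumped = x.copy()
        bumped[i] += eps
        sens[i] = abs(predict(bumped) - base) / eps
    return sens

# Hypothetical logistic scorer; feature 1 stands in for a protected attribute.
model = lambda v: 1.0 / (1.0 + np.exp(-(0.3 * v[0] + 2.0 * v[1])))
print(prediction_sensitivity(model, [0.5, 1.0]))
```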
- Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks
We study whether fair representation can be used to guarantee fairness for unknown tasks and for multiple fairness notions simultaneously.
We prove that, although fair representation might not guarantee fairness for all prediction tasks, it does guarantee fairness for an important subset of tasks.
arXiv Detail & Related papers (2021-09-01T17:29:11Z)
- MultiFair: Multi-Group Fairness in Machine Learning
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Everything is Relative: Understanding Fairness with Optimal Transport
We present an optimal transport-based approach to fairness that offers an interpretable and quantifiable exploration of bias and its structure.
Our framework is able to recover well-known examples of algorithmic discrimination, detect unfairness when other metrics fail, and explore recourse opportunities (illustrated in the sketch below).
arXiv Detail & Related papers (2021-02-20T13:57:53Z)
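In one dimension, optimal transport between two groups' score distributions reduces to the Wasserstein-1 distance, which gives a quick, hedged illustration of the idea (synthetic scores only, not the paper's full framework):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Synthetic model scores for two demographic groups; a systematic shift
# between the distributions shows up as nonzero transport cost.
scores_a = rng.normal(0.60, 0.10, size=1000)
scores_b = rng.normal(0.45, 0.10, size=1000)

# Wasserstein-1 distance: minimal average "mass movement" needed to turn
# one score distribution into the other; 0 means identically distributed.
print(wasserstein_distance(scores_a, scores_b))  # ~0.15 for this shift
```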
- The Measure and Mismeasure of Fairness
We argue that the equitable design of algorithms requires grappling with their context-specific consequences.
We offer strategies to ensure algorithms are better aligned with policy goals.
arXiv Detail & Related papers (2018-07-31T18:38:04Z)