Statistical Equity: A Fairness Classification Objective
- URL: http://arxiv.org/abs/2005.07293v1
- Date: Thu, 14 May 2020 23:19:38 GMT
- Title: Statistical Equity: A Fairness Classification Objective
- Authors: Ninareh Mehrabi, Yuzhong Huang, Fred Morstatter
- Abstract summary: We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness, and motivate it with its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
- Score: 6.174903055136084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning systems have been shown to propagate the societal errors of
the past. In light of this, a wealth of research focuses on designing solutions
that are "fair." Even with this abundance of work, there is no singular
definition of fairness, mainly because fairness is subjective and context
dependent. We propose a new fairness definition, motivated by the principle of
equity, that considers existing biases in the data and attempts to make
equitable decisions that account for these previous historical biases. We
formalize our definition of fairness, and motivate it with its appropriate
contexts. Next, we operationalize it for equitable classification. We perform
multiple automatic and human evaluations to show the effectiveness of our
definition and demonstrate its utility for aspects of fairness, such as the
feedback loop.
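The abstract does not reproduce the formal statistical-equity objective, and no attempt is made to do so here. Purely as a hedged illustration of what operationalizing a fairness criterion as a classification objective can look like, the sketch below adds a statistical-parity-style penalty to an ordinary logistic loss; the penalty form, the weight lam, and all names are illustrative assumptions, not the authors' definition.

    # Hedged sketch: a fairness-penalized logistic objective (illustrative only;
    # NOT the paper's statistical-equity definition).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fairness_penalized_loss(w, X, y, group, lam=1.0):
        """Cross-entropy loss plus a demographic-parity-gap penalty.

        X: (n, d) features; y: (n,) binary labels;
        group: (n,) binary protected attribute; lam: penalty weight (assumed).
        """
        p = sigmoid(X @ w)
        eps = 1e-12
        ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
        # Penalize the gap in mean predicted positive rate between the two groups.
        gap = abs(p[group == 1].mean() - p[group == 0].mean())
        return ce + lam * gap

Minimizing such a loss with any gradient-based optimizer trades predictive accuracy against the parity gap through lam; the paper's equity-based objective is defined differently and is given in the full text.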
Related papers
- Subjective fairness in algorithmic decision-support [0.0]
The treatment of fairness in decision-making literature usually involves quantifying fairness using objective measures.
This work takes a critical stance to highlight the limitations of these approaches using sociological insights.
We redefine fairness as a subjective property moving from a top-down to a bottom-up approach.
arXiv Detail & Related papers (2024-06-28T14:37:39Z)
- The Unfairness of $\varepsilon$-Fairness [0.0]
We show that if the concept of $\varepsilon$-fairness is employed, it can lead to outcomes that are maximally unfair in the real-world context.
We illustrate our findings with two real-world examples: college admissions and credit risk assessment.
arXiv Detail & Related papers (2024-05-15T14:13:35Z)
- Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Toward A Logical Theory Of Fairness and Bias [12.47276164048813]
We argue for a formal reconstruction of fairness definitions.
We look into three notions: fairness through unawareness, demographic parity and counterfactual fairness.
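Of the three notions named above, demographic parity is the most direct to check from model outputs: it only asks that the positive-prediction rate be (approximately) equal across protected groups. A minimal sketch, assuming binary predictions and a binary protected attribute (all names here are illustrative):

    import numpy as np

    def demographic_parity_gap(y_pred, protected):
        """Absolute difference in positive-prediction rates between two groups.

        y_pred: (n,) array of 0/1 predictions; protected: (n,) array of 0/1 group labels.
        """
        rate_0 = y_pred[protected == 0].mean()
        rate_1 = y_pred[protected == 1].mean()
        return abs(rate_0 - rate_1)

    # A gap close to 0 indicates (approximate) demographic parity.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(y_pred, protected))  # 0.5 here: parity is clearly violated

Fairness through unawareness, by contrast, is a constraint on the inputs (the protected attribute is simply excluded), and counterfactual fairness requires a causal model, so neither reduces to a one-line check like this.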
arXiv Detail & Related papers (2023-06-08T09:18:28Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
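The precise accumulated prediction sensitivity metric is defined in the linked paper; the sketch below only illustrates the underlying idea of measuring how much predictions move under small input perturbations (the perturbation scale, the number of samples, and all names are assumptions):

    import numpy as np

    def mean_prediction_sensitivity(predict_proba, X, eps=1e-2, n_samples=20, seed=0):
        """Average absolute change in predicted probability under small Gaussian
        input perturbations; a rough proxy, not the paper's exact metric."""
        rng = np.random.default_rng(seed)
        base = predict_proba(X)                    # (n,) positive-class probabilities
        shifts = []
        for _ in range(n_samples):
            noise = eps * rng.standard_normal(X.shape)
            shifts.append(np.abs(predict_proba(X + noise) - base).mean())
        return float(np.mean(shifts))

Here predict_proba stands for any function mapping a feature matrix to per-example positive-class probabilities.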
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Towards the Right Kind of Fairness in AI [3.723553383515688]
"Fairness Compass" is a tool which makes identifying the most appropriate fairness metric for a given system a simple, straightforward procedure.
We argue that documenting the reasoning behind the respective decisions in the course of this process can help to build trust from the user.
arXiv Detail & Related papers (2021-02-16T21:12:30Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
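On the evaluation side, conditioning on the fairness variables amounts to requiring parity within each stratum of those variables rather than over the whole population. A minimal sketch of such a conditional parity check (DCFR itself is a regularizer applied during training; the names below are illustrative and assume both groups appear in every stratum):

    import numpy as np

    def conditional_parity_gaps(y_pred, protected, condition):
        """Demographic-parity gap computed separately within each value of the
        conditioning (fairness) variable."""
        gaps = {}
        for c in np.unique(condition):
            stratum = condition == c
            rate_0 = y_pred[stratum & (protected == 0)].mean()
            rate_1 = y_pred[stratum & (protected == 1)].mean()
            gaps[c] = abs(rate_0 - rate_1)
        return gaps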
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.