The Disparate Impact of Uncertainty: Affirmative Action vs. Affirmative Information
- URL: http://arxiv.org/abs/2102.10019v5
- Date: Thu, 8 Feb 2024 18:58:55 GMT
- Title: The Disparate Impact of Uncertainty: Affirmative Action vs. Affirmative Information
- Authors: Claire Lazar Reich
- Abstract summary: We show that groups with higher average outcomes are typically assigned higher false positive rates.
We explain why the intuitive remedy to omit demographic variables from datasets does not correct it.
Instead of data omission, this paper examines how data enrichment can broaden access to opportunity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Critical decisions like hiring, college admissions, and loan approvals are
guided by predictions made in the presence of uncertainty. While uncertainty
imparts errors across all demographic groups, this paper shows that the types
of errors vary systematically: Groups with higher average outcomes are
typically assigned higher false positive rates, while those with lower average
outcomes are assigned higher false negative rates. We characterize the
conditions that give rise to this disparate impact and explain why the
intuitive remedy to omit demographic variables from datasets does not correct
it. Instead of data omission, this paper examines how data enrichment can
broaden access to opportunity. The strategy, which we call "Affirmative
Information," could stand as an alternative to Affirmative Action.
Related papers
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Fighting Sampling Bias: A Framework for Training and Evaluating Credit Scoring Models [2.918530881730374]
This paper addresses the adverse effect of sampling bias on model training and evaluation.
We propose bias-aware self-learning and a reject inference framework for scorecard evaluation.
Our results suggest a profit improvement of about eight percent when using Bayesian evaluation to decide on acceptance rates.
arXiv Detail & Related papers (2024-07-17T20:59:54Z)
- De-Biasing Models of Biased Decisions: A Comparison of Methods Using Mortgage Application Data [0.0]
This paper adds counterfactual (simulated) ethnic bias to real data on mortgage application decisions.
It shows that this bias is replicated by a machine learning model (XGBoost) even when ethnicity is not used as a predictive variable.
arXiv Detail & Related papers (2024-05-01T23:46:44Z)
- Mitigating Label Bias in Machine Learning: Fairness through Confident Learning [22.031325797588476]
Discrimination can occur when the underlying unbiased labels are overwritten by an agent with potential bias.
In this paper, we demonstrate that it is possible to eliminate bias by filtering the fairest instances within the framework of confident learning.
arXiv Detail & Related papers (2023-12-14T08:55:38Z)
- Unbiased Decisions Reduce Regret: Adversarial Domain Adaptation for the Bank Loan Problem [21.43618923706602]
We focus on a class of problems that share a common feature: the true label is only observed when a data point is assigned a positive label by the principal.
We introduce adversarial optimism (AdOpt) to directly address bias in the training set using adversarial domain adaptation.
arXiv Detail & Related papers (2023-08-15T21:35:44Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author, such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting (a generic reweighting sketch follows this list).
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and imposter sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy [61.60099467888073]
We show how linking administrative data can enable auditing mobility data for bias.
We show that older and non-white voters are less likely to be captured by mobility data.
We show that allocating public health resources based on such mobility data could disproportionately harm high-risk elderly and minority groups.
arXiv Detail & Related papers (2020-11-14T02:04:14Z)
- Unfairness Discovery and Prevention For Few-Shot Regression [9.95899391250129]
We study fairness in supervised few-shot meta-learning models sensitive to discrimination (or bias) in historical data.
A machine learning model trained on biased data tends to make unfair predictions for users from minority groups.
arXiv Detail & Related papers (2020-09-23T22:34:06Z)
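As a side note on the "Balancing out Bias" entry above, the following is a generic instance-reweighting sketch. It only illustrates the broad idea of decoupling a protected attribute from the label via inverse cell-frequency weights; it is not that paper's exact procedure, and the function name and toy data are assumptions made for illustration.

```python
from collections import Counter

def balancing_weights(groups, labels):
    """Generic reweighting sketch (not the cited paper's method): weight each example
    so that every (group, label) cell contributes equally to the training objective."""
    cell_counts = Counter(zip(groups, labels))
    n, n_cells = len(labels), len(cell_counts)
    # Weight = n / (n_cells * count of this example's cell), i.e. inverse cell frequency.
    return [n / (n_cells * cell_counts[(g, y)]) for g, y in zip(groups, labels)]

# Toy usage: group "A" is over-represented among positive labels, so (A, 1) is down-weighted.
groups = ["A", "A", "A", "B", "B", "A"]
labels = [1, 1, 1, 0, 1, 0]
print(balancing_weights(groups, labels))
```

Weights of this kind can typically be supplied to a learner as per-example sample weights during training.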
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.