Biased Programmers? Or Biased Data? A Field Experiment in
Operationalizing AI Ethics
- URL: http://arxiv.org/abs/2012.02394v1
- Date: Fri, 4 Dec 2020 04:12:33 GMT
- Title: Biased Programmers? Or Biased Data? A Field Experiment in
Operationalizing AI Ethics
- Authors: Bo Cowgill, Fabrizio Dell'Acqua, Samuel Deng, Daniel Hsu, Nakul Verma
and Augustin Chaintreau
- Abstract summary: We evaluate 8.2 million algorithmic predictions of math performance from $\approx$400 AI engineers.
We find that biased predictions are mostly caused by biased training data.
One-third of the benefit of better training data comes through a novel economic mechanism.
- Score: 6.946103498518291
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Why do biased predictions arise? What interventions can prevent them? We
evaluate 8.2 million algorithmic predictions of math performance from
$\approx$400 AI engineers, each of whom developed an algorithm under a randomly
assigned experimental condition. Our treatment arms modified programmers'
incentives, training data, awareness, and/or technical knowledge of AI ethics.
We then assess out-of-sample predictions from their algorithms using randomized
audit manipulations of algorithm inputs and ground-truth math performance for
20K subjects. We find that biased predictions are mostly caused by biased
training data. However, one-third of the benefit of better training data comes
through a novel economic mechanism: Engineers exert greater effort and are more
responsive to incentives when given better training data. We also assess how
performance varies with programmers' demographic characteristics, and their
performance on a psychological test of implicit bias (IAT) concerning gender
and careers. We find no evidence that female, minority and low-IAT engineers
exhibit lower bias or discrimination in their code. However, we do find that
prediction errors are correlated within demographic groups, which creates
performance improvements through cross-demographic averaging. Finally, we
quantify the benefits and tradeoffs of practical managerial or policy
interventions such as technical advice, simple reminders, and improved
incentives for decreasing algorithmic bias.
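
The cross-demographic averaging result in the abstract has a simple statistical intuition: if engineers from the same demographic group make prediction errors that share a common component, averaging more predictions from that group cannot remove the shared part, whereas averaging across groups also averages out the group-level errors. The sketch below is only an illustration of that mechanism, not code from the paper; the group sizes, error-correlation level, and error model are assumptions.

```python
# Illustrative sketch (not from the paper): why averaging predictions across
# demographic groups beats averaging within a group when errors are
# correlated within groups. All numbers and the error model are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 20_000      # subjects whose math performance is predicted
n_per_group = 10         # engineers drawn from each demographic group
rho_shared = 0.6         # fraction of error variance shared within a group

truth = rng.normal(size=n_subjects)

def group_predictions(n_engineers):
    """Predictions whose errors share a common within-group component."""
    shared = rng.normal(size=n_subjects)                    # group-level error
    preds = []
    for _ in range(n_engineers):
        idio = rng.normal(size=n_subjects)                  # engineer-level error
        err = np.sqrt(rho_shared) * shared + np.sqrt(1 - rho_shared) * idio
        preds.append(truth + err)
    return np.array(preds)

group_a = group_predictions(n_per_group)
group_b = group_predictions(n_per_group)

mse = lambda p: np.mean((p - truth) ** 2)
print("single engineer:      ", round(mse(group_a[0]), 3))
print("within-group average: ", round(mse(group_a.mean(axis=0)), 3))
print("cross-group average:  ", round(mse(np.concatenate([group_a, group_b]).mean(axis=0)), 3))
```

Under these assumptions the shared within-group error puts a floor under the within-group ensemble (roughly `rho_shared`), while pooling the two groups also averages the shared components and roughly halves that floor.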
Related papers
- Performativity and Prospective Fairness [4.3512163406552]
We focus on the algorithmic effect on the causally downstream outcome variable.
We show how to predict whether such policies will exacerbate gender inequalities in the labor market.
arXiv Detail & Related papers (2023-10-12T14:18:13Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Behavioral Machine Learning? Computer Predictions of Corporate Earnings
also Overreact [9.566303741482468]
We study the predictions of corporate earnings from several algorithms, notably linear regressions and a popular algorithm called Gradient Boosted Regression Trees (GBRT).
On average, GBRT outperformed both linear regressions and human stock analysts, but it still overreacted to news and did not satisfy rational expectations as normally defined.
Human stock analysts who have been trained in machine learning methods overreact less than traditionally trained analysts.
arXiv Detail & Related papers (2023-03-25T03:06:43Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - ABCinML: Anticipatory Bias Correction in Machine Learning Applications [9.978142416219294]
We propose an anticipatory dynamic learning approach for correcting the algorithm to mitigate bias before it occurs.
Results from experiments over multiple real-world datasets suggest that this approach has promise for anticipatory bias correction.
arXiv Detail & Related papers (2022-06-14T16:26:10Z) - Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z) - Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Bias: Friend or Foe? User Acceptance of Gender Stereotypes in Automated
Career Recommendations [8.44485053836748]
We show that a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world.
Using career recommendation as a case study, we build a fair AI career recommender by employing gender debiasing machine learning techniques.
arXiv Detail & Related papers (2021-06-13T23:27:45Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class (see the sketch after this list).
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Decision-makers Processing of AI Algorithmic Advice: Automation Bias
versus Selective Adherence [0.0]
A key concern is that human overreliance on algorithms introduces new biases into the human-algorithm interaction.
A second concern regards decision-makers' inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes.
We assess these via two studies simulating the use of algorithmic advice in decisions pertaining to the employment of school teachers in the Netherlands.
Our findings of selective, biased adherence belie the promise of neutrality that has propelled algorithm use in the public sector.
arXiv Detail & Related papers (2021-03-03T13:10:50Z)
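
As a companion to the active-learning entry above, here is a minimal sketch of uncertainty-based query selection. It is a simplified stand-in for the BALD-style acquisition that paper studies; the synthetic dataset, the logistic-regression model, and the entropy criterion are assumptions made for illustration only.

```python
# Minimal sketch of uncertainty-based active learning (entropy sampling),
# a simplified stand-in for the BALD-style acquisition discussed in
# "Can Active Learning Preemptively Mitigate Fairness Issues?".
# The synthetic data and logistic-regression model are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small labeled seed set
pool = [i for i in range(len(X)) if i not in labeled]        # unlabeled pool

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # predictive uncertainty
    query = [pool[i] for i in np.argsort(entropy)[-20:]]     # most uncertain examples
    labeled += query
    pool = [i for i in pool if i not in query]
    print(f"round {round_}: labeled={len(labeled)}, accuracy on all data={model.score(X, y):.3f}")
```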
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.