Inside the Black Box: Detecting and Mitigating Algorithmic Bias across Racialized Groups in College Student-Success Prediction
- URL: http://arxiv.org/abs/2301.03784v2
- Date: Thu, 11 Jul 2024 17:19:31 GMT
- Title: Inside the Black Box: Detecting and Mitigating Algorithmic Bias across Racialized Groups in College Student-Success Prediction
- Authors: Denisa Gándara, Hadis Anahideh, Matthew P. Ison, Lorenzo Picchiarini
- Abstract summary: We examine how the accuracy of college student success predictions differs between racialized groups, signaling algorithmic bias.
We demonstrate how models incorporating commonly used features to predict college-student success are less accurate when predicting success for racially minoritized students.
Common approaches to mitigating algorithmic bias are generally ineffective at eliminating disparities in prediction outcomes and accuracy between racialized groups.
- Score: 1.124958340749622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Colleges and universities are increasingly turning to algorithms that predict college-student success to inform various decisions, including those related to admissions, budgeting, and student-success interventions. Because predictive algorithms rely on historical data, they capture societal injustices, including racism. In this study, we examine how the accuracy of college student success predictions differs between racialized groups, signaling algorithmic bias. We also evaluate the utility of leading bias-mitigating techniques in addressing this bias. Using nationally representative data from the Education Longitudinal Study of 2002 and various machine learning modeling approaches, we demonstrate how models incorporating commonly used features to predict college-student success are less accurate when predicting success for racially minoritized students. Common approaches to mitigating algorithmic bias are generally ineffective at eliminating disparities in prediction outcomes and accuracy between racialized groups.
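The bias-detection step the abstract describes can be sketched simply: train one classifier, then compare its test accuracy separately within each racialized group. In the snippet below, the column names ("race", "completed_college") and the logistic-regression model are illustrative assumptions, not the paper's actual schema or full modeling pipeline.
```python
# Sketch: per-group accuracy comparison for a student-success classifier.
# Column names and model choice are hypothetical, not the paper's setup.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def accuracy_by_group(df: pd.DataFrame, feature_cols: list[str]) -> pd.Series:
    """Train one model, then score test accuracy separately per group."""
    X, y, group = df[feature_cols], df["completed_college"], df["race"]
    X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
        X, y, group, test_size=0.3, stratify=group, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    preds = pd.Series(model.predict(X_te), index=X_te.index)
    # A sizable per-group gap here is the accuracy disparity the paper reports.
    return y_te.groupby(g_te).apply(lambda s: accuracy_score(s, preds[s.index]))
```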
Related papers
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to into the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
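A rough illustration of the edge-deletion idea, using a single linear structural equation as a stand-in for D-BIAS's causal network (the paper's actual simulation method is more sophisticated; the function below is a hypothetical sketch):
```python
# Sketch, not D-BIAS's algorithm: fit a linear structural equation for the
# outcome, zero the coefficient on the biased edge, regenerate the outcome.
import numpy as np

def simulate_debiased(X: np.ndarray, y: np.ndarray, biased_col: int) -> np.ndarray:
    """X: (n, d) parents of the outcome; biased_col: edge to delete."""
    X1 = np.column_stack([np.ones(len(X)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # estimate structural weights
    resid = y - X1 @ beta                          # keep the exogenous noise
    beta[1 + biased_col] = 0.0                     # "delete" the biased causal edge
    return X1 @ beta + resid                       # simulated debiased outcome
```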
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Assessing Gender Bias in Predictive Algorithms using eXplainable AI [1.9798034349981162]
Predictive algorithms have powerful potential to offer benefits in areas as varied as medicine and education.
However, they can inherit the biases and prejudices present in humans.
Their outcomes can then systematically repeat errors that create unfair results.
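One simple audit in this spirit, sketched below with synthetic data and scikit-learn's permutation importance standing in for the paper's specific XAI tooling: if a gender feature carries high importance, the model is reproducing the bias baked into its labels.
```python
# Toy audit sketch: does a model lean on a gender feature? (Synthetic data;
# permutation importance is one XAI tool, not necessarily the paper's.)
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({"gender": rng.integers(0, 2, n),
                  "test_score": rng.normal(size=n)})
# Labels deliberately leak gender, mimicking prejudice in historical data.
y = ((0.8 * X["gender"] + X["test_score"] + rng.normal(size=n)) > 0.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
# High importance for "gender" shows the model repeating the data's bias.
print(pd.Series(result.importances_mean, index=X.columns))
```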
arXiv Detail & Related papers (2022-03-19T07:47:45Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
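Ski rental is the canonical example in this line of work. The sketch below implements a standard prediction-aware strategy in the style of Purohit et al. (not necessarily this paper's construction); the parameter `lam` tunes the robustness-consistency trade-off the summary mentions.
```python
# Ski rental with a predicted number of ski days: rent for 1/day or buy once.
import math

def ski_rental_cost(true_days: int, predicted_days: int, buy_cost: int,
                    lam: float = 0.5) -> int:
    """lam in (0, 1] trades consistency (trust the prediction) against
    robustness (hedge in case the prediction is wrong)."""
    if predicted_days >= buy_cost:
        buy_day = math.ceil(lam * buy_cost)      # prediction says buy: buy early
    else:
        buy_day = math.ceil(buy_cost / lam)      # prediction says rent: buy late
    if true_days >= buy_day:
        return (buy_day - 1) + buy_cost          # rented buy_day-1 days, then bought
    return true_days                             # season ended before buying
```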
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
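A minimal sketch of instance reweighting: the weights below follow the classic Kamiran-Calders reweighing rule (this paper's own scheme may differ), upweighting (group, label) combinations that are underrepresented relative to independence.
```python
# Sketch: Kamiran-Calders-style reweighing, passed to any classifier that
# accepts sample weights.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweigh(group: pd.Series, y: pd.Series) -> pd.Series:
    """w(g, y) = P(g) * P(y) / P(g, y): >1 where a combination is rarer than
    independence would predict, <1 where it is overrepresented."""
    p_g = group.value_counts(normalize=True)
    p_y = y.value_counts(normalize=True)
    p_gy = pd.crosstab(group, y, normalize=True)
    return pd.Series([p_g[g] * p_y[lbl] / p_gy.loc[g, lbl]
                      for g, lbl in zip(group, y)], index=y.index)

# Usage: model = LogisticRegression().fit(X, y, sample_weight=reweigh(group, y))
```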
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Decision Tree-Based Predictive Models for Academic Achievement Using College Students' Support Networks [0.16311150636417257]
The data, called Ties data, included students' demographic and support network information.
For White students, different types of educational support were important in predicting academic achievement.
For non-White students, different types of emotional support were important in predicting academic achievement.
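The general setup can be sketched as fitting one tree per group and comparing feature importances; the column names below ("race", "achievement") are illustrative, not the Ties data's actual schema.
```python
# Sketch: per-group decision trees over support-network features.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def importances_by_group(df: pd.DataFrame, feature_cols: list[str]) -> pd.DataFrame:
    """Fit one shallow tree per racial group; each row shows which support
    features (educational vs. emotional) that group's tree relies on."""
    rows = {}
    for grp, sub in df.groupby("race"):
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(sub[feature_cols], sub["achievement"])
        rows[grp] = tree.feature_importances_
    return pd.DataFrame(rows, index=feature_cols).T
```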
arXiv Detail & Related papers (2021-08-31T16:09:56Z)
- Avoiding bias when inferring race using name-based approaches [0.8543368663496084]
We use information from the U.S. Census and mortgage applications to infer the race of U.S. affiliated authors in the Web of Science.
Our results demonstrate that the validity of name-based inference varies by race/ethnicity and that threshold approaches underestimate Black authors and overestimate White authors.
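A toy illustration of the thresholding problem (invented probabilities, not real Census figures): names whose race distribution is less concentrated fall below the threshold and are dropped, undercounting the groups that disproportionately hold such names.
```python
# Hard thresholding vs. probabilistic counting on P(race | surname).
import pandas as pd

# Toy table: name_B's distribution is less concentrated, as is typical for
# surnames common among Black Americans.
probs = pd.DataFrame({"White": [0.90, 0.45],
                      "Black": [0.05, 0.50],
                      "Other": [0.05, 0.05]},
                     index=["name_A", "name_B"])

threshold = 0.8
hard = probs.idxmax(axis=1).where(probs.max(axis=1) >= threshold)
soft = probs.sum(axis=0)  # probabilistic counts: expected authors per race

print(hard)  # name_A -> White; name_B -> NaN (dropped, undercounting Black authors)
print(soft)  # fractional counts retain the probability mass on Black
```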
arXiv Detail & Related papers (2021-04-14T08:36:22Z)
- Decision-makers Processing of AI Algorithmic Advice: Automation Bias versus Selective Adherence [0.0]
A key concern is that human overreliance on algorithms introduces new biases into the human-algorithm interaction.
A second concern regards decision-makers' inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes.
We assess these via two studies simulating the use of algorithmic advice in decisions pertaining to the employment of school teachers in the Netherlands.
Our findings of selective, biased adherence belie the promise of neutrality that has propelled algorithm use in the public sector.
arXiv Detail & Related papers (2021-03-03T13:10:50Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set.
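The Disparate Impact index is simple to compute from predictions and a binary protected attribute; the helper below uses generic column values and the conventional four-fifths rule of thumb.
```python
# Disparate Impact: ratio of positive-prediction rates across groups.
import pandas as pd

def disparate_impact(y_pred: pd.Series, protected: pd.Series,
                     privileged_value) -> float:
    """DI = P(y_pred=1 | unprivileged) / P(y_pred=1 | privileged).
    The four-fifths rule flags DI < 0.8 as potential bias."""
    privileged = protected == privileged_value
    return y_pred[~privileged].mean() / y_pred[privileged].mean()
```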
arXiv Detail & Related papers (2020-03-31T14:48:36Z)