Towards Equity and Algorithmic Fairness in Student Grade Prediction
- URL: http://arxiv.org/abs/2105.06604v1
- Date: Fri, 14 May 2021 01:12:01 GMT
- Title: Towards Equity and Algorithmic Fairness in Student Grade Prediction
- Authors: Weijie Jiang, Zachary A. Pardos
- Abstract summary: This work addresses equity of educational outcome and fairness of AI with respect to race.
We trial several strategies for both label and instance balancing to attempt to minimize differences in algorithm performance with respect to race.
We find that an adversarial learning approach, combined with grade label balancing, achieved by far the fairest results.
- Score: 2.9189409618561966
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Equity of educational outcome and fairness of AI with respect to race have
been topics of increasing importance in education. In this work, we address
both with empirical evaluations of grade prediction in higher education, an
important task to improve curriculum design, plan interventions for academic
support, and offer course guidance to students. With fairness as the aim, we
trial several strategies for both label and instance balancing to attempt to
minimize differences in algorithm performance with respect to race. We find
that an adversarial learning approach, combined with grade label balancing,
achieved by far the fairest results. With equity of educational outcome as the
aim, we trial strategies for boosting predictive performance on historically
underserved groups and find success in sampling those groups in inverse
proportion to their historic outcomes. With AI-infused technology supports
increasingly prevalent on campuses, our methodologies fill a need for
frameworks to consider performance trade-offs with respect to sensitive student
attributes and allow institutions to instrument their AI resources in ways that
are attentive to equity and fairness.
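The abstract names two concrete levers: an adversarial learning approach combined with grade label balancing (for fairness), and sampling historically underserved groups in inverse proportion to their historic outcomes (for equity). The sketch below is purely illustrative and is not the authors' implementation: it approximates the equity strategy with per-instance weights inversely proportional to each group's historical mean outcome (weighting rather than resampling, for brevity) and the fairness strategy with an adversarial head trained through a gradient-reversal layer. All names here (make_inverse_weights, GradeModel, n_grade_bins, lambd, and so on) are hypothetical.

```python
# Minimal, illustrative sketch (not the paper's code): grade prediction with
# (a) inverse-proportion instance weighting for historically underserved groups and
# (b) an adversarial debiasing head trained through a gradient-reversal layer.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function


def make_inverse_weights(group_ids, outcomes):
    """Weight each student inversely to their group's historical mean outcome,
    so groups with lower historical grades receive larger weights.
    Expects integer-coded group ids and numeric outcomes as tensors."""
    weights = torch.ones(len(group_ids))
    for g in set(group_ids.tolist()):
        mask = group_ids == g
        group_mean = outcomes[mask].float().mean().clamp(min=1e-3)
        weights[mask] = 1.0 / group_mean
    return weights / weights.mean()  # normalise around 1.0


class GradReverse(Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward
    pass so the encoder is pushed to remove information about the sensitive attribute."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class GradeModel(nn.Module):
    def __init__(self, n_features, n_grade_bins, n_groups, hidden=64, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.grade_head = nn.Linear(hidden, n_grade_bins)  # main task: grade category
        self.adv_head = nn.Linear(hidden, n_groups)        # adversary: sensitive attribute

    def forward(self, x):
        h = self.encoder(x)
        grade_logits = self.grade_head(h)
        adv_logits = self.adv_head(GradReverse.apply(h, self.lambd))
        return grade_logits, adv_logits


def training_step(model, optimizer, x, grades, groups, sample_weights):
    grade_logits, adv_logits = model(x)
    # Per-instance task loss, re-weighted in inverse proportion to historic outcomes.
    task_loss = (F.cross_entropy(grade_logits, grades, reduction="none")
                 * sample_weights).mean()
    # The adversary tries to recover the group label; the gradient-reversal layer
    # makes the encoder worse at exposing it -- a standard adversarial-fairness recipe.
    adv_loss = F.cross_entropy(adv_logits, groups)
    loss = task_loss + adv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, the trade-off between predictive performance and attribute leakage is controlled by the hypothetical lambd scale on the reversed gradients; the paper itself evaluates such trade-offs empirically rather than prescribing a single setting.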
Related papers
- FairAIED: Navigating Fairness, Bias, and Ethics in Educational AI Applications [2.612585751318055]
The integration of Artificial Intelligence into education has transformative potential, providing tailored learning experiences and creative instructional approaches.
However, inherent biases in AI algorithms can undermine this potential by unintentionally perpetuating prejudice against specific demographic groups.
This survey delves deeply into the developing topic of algorithmic fairness in educational contexts.
It identifies the common forms of biases, such as data-related, algorithmic, and user-interaction, that fundamentally undermine the accomplishment of fairness in AI teaching aids.
arXiv Detail & Related papers (2024-07-26T13:59:20Z)
- A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z)
- The Rise of Artificial Intelligence in Educational Measurement: Opportunities and Ethical Challenges [2.569083526579529]
AI in education raises ethical concerns regarding validity, reliability, transparency, fairness, and equity.
Various stakeholders, including educators, policymakers, and organizations, have developed guidelines to ensure ethical AI use in education.
In this paper, a diverse group of AIME members examines the ethical implications of AI-powered tools in educational measurement.
arXiv Detail & Related papers (2024-06-27T05:28:40Z)
- Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
The first NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- Implementing Learning Principles with a Personal AI Tutor: A Case Study [2.94944680995069]
This research demonstrates the ability of personal AI tutors to model human learning processes and effectively enhance academic performance.
By integrating AI tutors into their programs, educators can offer students personalized learning experiences grounded in the principles of learning sciences.
arXiv Detail & Related papers (2023-09-10T15:35:47Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Evaluation of group fairness measures in student performance prediction problems [12.502377311068757]
We evaluate different group fairness measures for student performance prediction problems on various educational datasets and fairness-aware learning models.
Our study shows that the choice of fairness measure is important, as is the choice of the grade threshold (a toy sketch of such threshold-based measures follows this list).
arXiv Detail & Related papers (2022-08-22T22:06:08Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Counterfactual Representation Learning with Balancing Weights [74.67296491574318]
Key to causal inference with observational data is achieving balance in predictive features associated with each treatment type.
Recent literature has explored representation learning to achieve this goal.
We develop an algorithm for flexible, scalable and accurate estimation of causal effects.
arXiv Detail & Related papers (2020-10-23T19:06:03Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
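The "Evaluation of group fairness measures in student performance prediction problems" entry above notes that both the fairness measure and the grade threshold matter. As a purely illustrative sketch (not that paper's code), the snippet below binarises predicted and true grades at an assumed threshold and reports two standard group measures; the function name, variable names, and the default threshold of 3.0 are assumptions for the example.

```python
# Illustrative sketch of two threshold-based group fairness measures for grade
# prediction: statistical parity difference and equal opportunity difference.

import numpy as np


def group_fairness_at_threshold(pred_grades, true_grades, groups, grade_threshold=3.0):
    """Binarise predictions/labels at a grade threshold (e.g. "B or better")
    and compare selection rates and true-positive rates across the first two groups."""
    pred_pos = np.asarray(pred_grades) >= grade_threshold
    true_pos = np.asarray(true_grades) >= grade_threshold
    groups = np.asarray(groups)

    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        selection_rate = pred_pos[mask].mean()  # P(predicted pass | group g)
        has_true_pos = (mask & true_pos).any()
        tpr = pred_pos[mask & true_pos].mean() if has_true_pos else np.nan
        rates[g] = (selection_rate, tpr)

    (g0, (sr0, tpr0)), (g1, (sr1, tpr1)) = list(rates.items())[:2]
    return {
        "statistical_parity_diff": sr0 - sr1,   # gap in predicted pass rates
        "equal_opportunity_diff": tpr0 - tpr1,  # gap in TPR among true passers
    }


# Example usage with toy arrays:
# metrics = group_fairness_at_threshold(pred, true, race_labels, grade_threshold=3.0)
```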
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.