Evaluation of group fairness measures in student performance prediction problems
- URL: http://arxiv.org/abs/2208.10625v1
- Date: Mon, 22 Aug 2022 22:06:08 GMT
- Title: Evaluation of group fairness measures in student performance prediction problems
- Authors: Tai Le Quy, Thi Huyen Nguyen, Gunnar Friege and Eirini Ntoutsi
- Abstract summary: We evaluate different group fairness measures for student performance prediction problems on various educational datasets and fairness-aware learning models. Our study shows that the choice of the fairness measure matters, as does the choice of the grade threshold.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting students' academic performance is one of the key tasks of
educational data mining (EDM). Traditionally, the high forecasting quality of
such models was deemed critical. More recently, the issues of fairness and
discrimination w.r.t. protected attributes, such as gender or race, have gained
attention. Although there are several fairness-aware learning approaches in
EDM, a comparative evaluation of these measures is still missing. In this
paper, we evaluate different group fairness measures for student performance
prediction problems on various educational datasets and fairness-aware learning
models. Our study shows that the choice of the fairness measure matters, as does
the choice of the grade threshold.
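The abstract's point is that group fairness measures are evaluated at a chosen grade threshold, and that this threshold itself affects the outcome. As a hedged illustration only (not the paper's code, and with all names and data invented for the example), the sketch below computes two standard group fairness measures, statistical parity difference and equal opportunity difference, for a toy pass/fail prediction derived from raw grades:

```python
import numpy as np

def statistical_parity_diff(y_pred, group):
    """P(pass-prediction | group 0) - P(pass-prediction | group 1)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """True-positive-rate gap between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy data: raw grades, a binary protected attribute, and a pass/fail
# label defined by a grade threshold (here 60 out of 100).
rng = np.random.default_rng(0)
grades = rng.uniform(0, 100, size=1000)
group = rng.integers(0, 2, size=1000)
threshold = 60.0
y_true = (grades >= threshold).astype(int)

# A deliberately biased toy predictor: group 1 is penalized by 10 points.
y_pred = ((grades - 10 * group) >= threshold).astype(int)

print("statistical parity diff:", statistical_parity_diff(y_pred, group))
print("equal opportunity diff: ", equal_opportunity_diff(y_true, y_pred, group))
```

Re-running with a different `threshold` changes both gaps, which is one way to see the abstract's claim that the grade threshold choice matters alongside the measure choice.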
Related papers
- F-Eval: Assessing Fundamental Abilities with Refined Evaluation Methods
We propose F-Eval, a bilingual evaluation benchmark to evaluate the fundamental abilities, including expression, commonsense and logic.
For reference-free subjective tasks, we devise new evaluation methods, serving as alternatives to scoring by API models.
arXiv Detail & Related papers (2024-01-26T13:55:32Z)
- Fair Few-shot Learning with Auxiliary Sets
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Is Your Model "MADD"? A Novel Metric to Evaluate Algorithmic Fairness for Predictive Student Models
We propose a novel metric, the Model Absolute Density Distance (MADD), to analyze models' discriminatory behaviors.
We evaluate our approach on the common task of predicting student success in online courses, using several common predictive classification models.
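The MADD metric compares density distributions of model outputs across groups; its exact definition is in the cited paper. As a loose, hypothetical sketch of the general density-distance idea only, one can compute an L1 distance between per-group histograms of predicted probabilities:

```python
import numpy as np

def density_distance(proba, group, bins=20):
    """Illustrative density-distance-style fairness score: the L1 distance
    between per-group distributions of predicted probabilities.
    Ranges from 0.0 (identical densities) to 2.0 (fully disjoint)."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    h0, _ = np.histogram(proba[group == 0], bins=edges)
    h1, _ = np.histogram(proba[group == 1], bins=edges)
    d0 = h0 / h0.sum()  # normalize each histogram to a probability distribution
    d1 = h1 / h1.sum()
    return float(np.abs(d0 - d1).sum())

# Toy check: identical score distributions for both groups give distance 0.
scores = np.tile(np.linspace(0.05, 0.95, 50), 2)
groups = np.repeat([0, 1], 50)
print(density_distance(scores, groups))  # prints 0.0
```

This is an assumption-laden reading, not the MADD formula itself; it is meant only to convey why such a score is prediction-distribution-based rather than error-rate-based.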
arXiv Detail & Related papers (2023-05-24T16:55:49Z)
- Individual Fairness under Uncertainty
Algorithmic fairness is an established area in machine learning (ML) algorithms.
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- A review of clustering models in educational data science towards fairness-aware learning
This chapter comprehensively surveys clustering models and their fairness in educational activities.
We especially focus on investigating the fair clustering models applied in educational activities.
These models are believed to be practical tools for analyzing students' data and ensuring fairness in EDS.
arXiv Detail & Related papers (2023-01-09T15:18:51Z)
- Systematic Evaluation of Predictive Fairness
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Towards Equity and Algorithmic Fairness in Student Grade Prediction
This work addresses equity of educational outcome and fairness of AI with respect to race.
We trial several strategies for both label and instance balancing to attempt to minimize differences in algorithm performance with respect to race.
We find that an adversarial learning approach, combined with grade label balancing, achieves by far the fairest results.
arXiv Detail & Related papers (2021-05-14T01:12:01Z)
- Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.