Mitigating Biases in Student Performance Prediction via Attention-Based
Personalized Federated Learning
- URL: http://arxiv.org/abs/2208.01182v1
- Date: Tue, 2 Aug 2022 00:22:20 GMT
- Title: Mitigating Biases in Student Performance Prediction via Attention-Based
Personalized Federated Learning
- Authors: Yun-Wei Chu, Seyyedali Hosseinalipour, Elizabeth Tenorio, Laura Cruz,
Kerrie Douglas, Andrew Lan, Christopher Brinton
- Abstract summary: Traditional learning-based approaches to student modeling generalize poorly to underrepresented student groups due to biases in data availability.
We propose a methodology for predicting student performance from their online learning activities that optimizes inference accuracy over different demographic groups such as race and gender.
- Score: 7.040747348755578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional learning-based approaches to student modeling generalize poorly
to underrepresented student groups due to biases in data availability. In this
paper, we propose a methodology for predicting student performance from their
online learning activities that optimizes inference accuracy over different
demographic groups such as race and gender. Building upon recent foundations in
federated learning, in our approach, personalized models for individual student
subgroups are derived from a global model aggregated across all student models
via meta-gradient updates that account for subgroup heterogeneity. To learn
better representations of student activity, we augment our approach with a
self-supervised behavioral pretraining methodology that leverages multiple
modalities of student behavior (e.g., visits to lecture videos and
participation on forums), and include a neural network attention mechanism in
the model aggregation stage. Through experiments on three real-world datasets
from online courses, we demonstrate that our approach obtains substantial
improvements over existing student modeling baselines in predicting student
learning outcomes for all subgroups. Visual analysis of the resulting student
embeddings confirms that our personalization methodology indeed identifies
different activity patterns within different subgroups, consistent with its
stronger inference ability compared with the baselines.
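The aggregation step described above can be illustrated with a minimal sketch in plain Python: per-subgroup parameter vectors are combined into a global model using softmax attention over each subgroup's similarity to the mean of all subgroups. The cosine-similarity scoring and all names here are illustrative assumptions, not the paper's actual method or API.

```python
import math

def attention_aggregate(subgroup_params, temperature=1.0):
    """Blend per-subgroup parameter vectors into one global vector.

    Attention weights come from a softmax over the cosine similarity
    between each subgroup's parameters and their mean ("query").
    Hypothetical sketch; the paper's attention mechanism may differ.
    """
    n, dim = len(subgroup_params), len(subgroup_params[0])
    query = [sum(p[j] for p in subgroup_params) / n for j in range(dim)]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb + 1e-8)

    scores = [cosine(p, query) for p in subgroup_params]
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted average of subgroup parameters -> global model
    global_params = [sum(w * p[j] for w, p in zip(weights, subgroup_params))
                     for j in range(dim)]
    return global_params, weights
```

Subgroups whose parameters sit closer to the consensus receive larger weights, so an outlying subgroup cannot dominate the aggregate; personalization would then fine-tune the global vector per subgroup.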
Related papers
- An Active Learning Framework for Inclusive Generation by Large Language Models [32.16984263644299]
Ensuring that Large Language Models (LLMs) generate text representative of diverse sub-populations is essential.
We propose a novel clustering-based active learning framework, enhanced with knowledge distillation.
We construct two new datasets in tandem with model training, showing a performance improvement of 2%-10% over baseline models.
arXiv Detail & Related papers (2024-10-17T15:09:35Z)
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization.
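The prediction-dropout regularizer can be sketched in a few lines: before averaging, each base model's predictions are dropped with some probability, so the ensembler cannot over-rely on any single model. This is a hypothetical illustration of the idea, not the paper's implementation; names and defaults are assumptions.

```python
import random

def dropout_ensemble(base_preds, drop_prob=0.3, rng=None):
    """Average ensemble prediction with random base-model dropout.

    base_preds: list of per-model prediction lists (same length each).
    During training, each model's row is dropped with probability
    drop_prob before averaging. Illustrative sketch only.
    """
    rng = rng or random.Random()
    kept = [p for p in base_preds if rng.random() >= drop_prob]
    if not kept:                       # always keep at least one model
        kept = [rng.choice(base_preds)]
    n = len(kept)
    return [sum(col) / n for col in zip(*kept)]
```

At inference time one would set `drop_prob=0.0` so all base models contribute.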
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
- RIGL: A Unified Reciprocal Approach for Tracing the Independent and Group Learning Processes [22.379764500005503]
We propose RIGL, a unified Reciprocal model to trace knowledge states at both the individual and group levels.
In this paper, we introduce a time frame-aware reciprocal embedding module to concurrently model both student and group response interactions.
We design a relation-guided temporal attentive network, comprised of dynamic graph modeling coupled with a temporal self-attention mechanism.
arXiv Detail & Related papers (2024-06-18T10:16:18Z)
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Multi-Layer Personalized Federated Learning for Mitigating Biases in Student Predictive Analytics [8.642174401125263]
We propose a Multi-Layer Personalized Federated Learning (MLPFL) methodology to optimize inference accuracy over different layers of student grouping criteria.
In our approach, personalized models for individual student subgroups are derived from a global model.
Experiments on three real-world online course datasets show significant improvements achieved by our approach over existing student modeling benchmarks.
arXiv Detail & Related papers (2022-12-05T17:27:28Z)
- Predicting student performance using sequence classification with time-based windows [1.5836913530330787]
We show that accurate predictive models can be built based on sequential patterns derived from students' behavioral data.
We present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models.
Our improved sequence classification technique predicts student performance with high accuracy, reaching 90 percent for course-specific models.
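The time-based windowing idea can be sketched as a simple feature extractor: timestamped clickstream events are bucketed into fixed-length windows, and the model consumes one event count per (window, event type). The event vocabulary, window sizes, and function names below are assumptions for illustration, not the paper's actual pipeline.

```python
from collections import Counter

EVENT_TYPES = ("video", "forum", "quiz")   # illustrative event vocabulary

def window_features(events, window_days=7, n_windows=4,
                    event_types=EVENT_TYPES):
    """Turn timestamped events into a fixed-length feature vector.

    events: list of (day, event_type) pairs. The course timeline is
    cut into n_windows windows of window_days days each; the output
    holds one event count per (window, event type). Events past the
    last window are clamped into it. Hypothetical sketch only.
    """
    counts = [Counter() for _ in range(n_windows)]
    for day, etype in events:
        w = min(int(day // window_days), n_windows - 1)  # clamp overflow
        counts[w][etype] += 1
    return [counts[w][t] for w in range(n_windows) for t in event_types]
```

The resulting fixed-length vector preserves when activity happened, not just how much, which is what a time-aware sequence classifier exploits.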
arXiv Detail & Related papers (2022-08-16T13:46:39Z)
- Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data [85.43008636875345]
We show that diverse representation in training data is key to improving subgroup performance and achieving population-level objectives.
Our analysis and experiments describe how dataset compositions influence performance and provide constructive results for using trends in existing data, alongside domain knowledge, to help guide intentional, objective-aware dataset design.
arXiv Detail & Related papers (2021-03-05T00:27:08Z)
- Collaborative Group Learning [42.31194030839819]
Collaborative learning has successfully applied knowledge transfer to guide a pool of small student networks towards robust local minima.
Previous approaches typically struggle with drastically aggravated student homogenization when the number of students rises.
We propose Collaborative Group Learning, an efficient framework that aims to diversify the feature representation and conduct an effective regularization.
arXiv Detail & Related papers (2020-09-16T14:34:39Z)
- Revealing the Hidden Patterns: A Comparative Study on Profiling Subpopulations of MOOC Students [61.58283466715385]
Massive Open Online Courses (MOOCs) exhibit a remarkable heterogeneity of students.
The advent of complex "big data" from MOOC platforms is a challenging yet rewarding opportunity to deeply understand how students are engaged in MOOCs.
We report on clustering analysis of student activities and comparative analysis on both behavioral patterns and demographical patterns between student subpopulations in the MOOC.
arXiv Detail & Related papers (2020-08-12T10:38:50Z)
- Three Approaches for Personalization with Applications to Federated Learning [68.19709953755238]
We present a systematic learning-theoretic study of personalization.
We provide learning-theoretic guarantees and efficient algorithms for which we also demonstrate the performance.
All of our algorithms are model-agnostic and work for any hypothesis class.
arXiv Detail & Related papers (2020-02-25T01:36:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.