Enhancing the Performance of Automated Grade Prediction in MOOC using
Graph Representation Learning
- URL: http://arxiv.org/abs/2310.12281v1
- Date: Wed, 18 Oct 2023 19:27:39 GMT
- Authors: Soheila Farokhi, Aswani Yaramala, Jiangtao Huang, Muhammad F. A. Khan,
Xiaojun Qi, Hamid Karimi
- Abstract summary: Massive Open Online Courses (MOOCs) have gained significant traction as a rapidly growing phenomenon in online learning.
Current automated assessment approaches overlook the structural links between different entities involved in the downstream tasks.
We construct a unique knowledge graph for a large MOOC dataset, which will be publicly available to the research community.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Massive Open Online Courses (MOOCs) have gained significant
traction as a rapidly growing phenomenon in online learning. Unlike traditional
classrooms, MOOCs offer a unique opportunity to cater to a diverse audience
from different backgrounds and geographical locations. Renowned universities
and MOOC-specific providers, such as Coursera, offer MOOC courses on various
subjects. Automated assessment tasks like grade and early dropout predictions
are necessary due to the high enrollment and limited direct interaction between
teachers and learners. However, current automated assessment approaches
overlook the structural links between different entities involved in the
downstream tasks, such as students and courses. We hypothesize that these
structural relationships, manifested through an interaction graph, contain
valuable information that can enhance the performance of the task at hand. To
validate this, we construct a unique knowledge graph for a large MOOC
dataset, which will be publicly available to the research community.
Furthermore, we utilize graph embedding techniques to extract latent structural
information encoded in the interactions between entities in the dataset. These
techniques do not require ground truth labels and can be utilized for various
tasks. Finally, by combining entity-specific features, behavioral features, and
extracted structural features, we enhance the performance of predictive machine
learning models in student assignment grade prediction. Our experiments
demonstrate that structural features can significantly improve the predictive
performance of downstream assessment tasks. The code and data are available at
\url{https://github.com/DSAatUSU/MOOPer_grade_prediction}
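The pipeline the abstract describes — build an interaction graph, extract unsupervised structural embeddings, and concatenate them with behavioral features before fitting a grade predictor — can be illustrated with a minimal sketch. This uses a spectral (Laplacian) embedding as a stand-in for the paper's graph-embedding step, and ridge regression as a stand-in for its predictive model; the toy graph, feature values, and grades below are invented for illustration and are not taken from the MOOPer dataset.

```python
import numpy as np

# Toy student-course interaction graph: 6 entities (3 students, 3 courses),
# with an edge meaning "student interacted with course". Illustrative only.
A = np.array([
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
], dtype=float)

# Unsupervised structural features: eigenvectors of the normalized graph
# Laplacian L = I - D^{-1/2} A D^{-1/2}. No labels are needed for this step.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
_, vecs = np.linalg.eigh(L)          # eigenvalues ascending
structural = vecs[:, 1:3]            # 2-dim embedding, skip trivial eigenvector

# Hand-crafted behavioral features for the 3 students (rows 0-2 of A),
# e.g. number of attempts and activity ratio. Hypothetical values.
behavioral = np.array([[5.0, 0.8], [2.0, 0.4], [7.0, 0.9]])
X = np.hstack([behavioral, structural[:3]])   # combined feature matrix

# Ridge regression as the downstream grade predictor.
y = np.array([85.0, 60.0, 92.0])
w = np.linalg.solve(X.T @ X + 0.1 * np.eye(X.shape[1]), X.T @ y)
preds = X @ w
```

Because the embedding step needs only the graph structure, the same structural features could feed other downstream tasks (e.g. dropout prediction) without recomputation.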
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z)
- Self-Regulated Data-Free Knowledge Amalgamation for Text Classification [9.169836450935724]
We develop a lightweight student network that can learn from multiple teacher models without accessing their original training data.
To accomplish this, we propose STRATANET, a modeling framework that produces text data tailored to each teacher.
We evaluate our method on three benchmark text classification datasets with varying labels or domains.
arXiv Detail & Related papers (2024-06-16T21:13:30Z)
- Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph [8.646512035461994]
In visual tasks, large teacher models capture essential features and deep information, enhancing performance.
We propose a distillation framework based on graph knowledge, including a multi-level feature alignment strategy.
We emphasize spectral embedding (SE) as a key technique in our distillation process, which merges the student's feature space with the relational knowledge and structural complexities similar to the teacher network.
arXiv Detail & Related papers (2024-05-14T12:37:05Z)
- Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment [52.704331909850026]
We formulate a new knowledge distillation framework to transfer the knowledge from self-supervised pre-trained models to any other student network.
Inspired by the spirit of instance discrimination in self-supervised learning, we model the instance-instance relations by a graph formulation in the feature embedding space.
Our distillation scheme can be flexibly applied to transfer the self-supervised knowledge to enhance representation learning on various student networks.
arXiv Detail & Related papers (2022-11-23T19:27:48Z)
- Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning -- simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Jointly Modeling Heterogeneous Student Behaviors and Interactions Among Multiple Prediction Tasks [35.15654921278549]
Prediction tasks about students have practical significance for both student and college.
In this paper, we focus on modeling heterogeneous behaviors and making multiple predictions together.
We design three motivating behavior prediction tasks based on a real-world dataset collected from a college.
arXiv Detail & Related papers (2021-03-25T02:01:58Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Analyzing Student Strategies In Blended Courses Using Clickstream Data [32.81171098036632]
We use pattern mining and models borrowed from Natural Language Processing to understand student interactions.
Fine-grained clickstream data is collected through Diderot, a non-commercial educational support system.
Our results suggest that the proposed hybrid NLP methods can provide valuable insights even in the low-data setting of blended courses.
arXiv Detail & Related papers (2020-05-31T03:01:00Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.