Enhancing Student Performance Prediction on Learnersourced Questions
with SGNN-LLM Synergy
- URL: http://arxiv.org/abs/2309.13500v3
- Date: Mon, 29 Jan 2024 01:11:48 GMT
- Title: Enhancing Student Performance Prediction on Learnersourced Questions
with SGNN-LLM Synergy
- Authors: Lin Ni, Sijie Wang, Zeyu Zhang, Xiaoxuan Li, Xianda Zheng, Paul Denny,
and Jiamou Liu
- Abstract summary: We introduce an innovative strategy that synergizes the potential of integrating Signed Graph Neural Networks (SGNNs) and Large Language Model (LLM) embeddings.
Our methodology employs a signed bipartite graph to comprehensively model student answers, complemented by a contrastive learning framework that enhances noise resilience.
- Score: 11.735587384038753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learnersourcing offers great potential for scalable education through student
content creation. However, predicting student performance on learnersourced
questions, which is essential for personalizing the learning experience, is
challenging due to the inherent noise in student-generated data. Moreover,
while conventional graph-based methods can capture the complex network of
student and question interactions, they often fall short under cold start
conditions where limited student engagement with questions yields sparse data.
To address both challenges, we introduce an innovative strategy that synergizes
the potential of integrating Signed Graph Neural Networks (SGNNs) and Large
Language Model (LLM) embeddings. Our methodology employs a signed bipartite
graph to comprehensively model student answers, complemented by a contrastive
learning framework that enhances noise resilience. Furthermore, the LLM
contributes foundational question embeddings, which prove especially
advantageous in cold start scenarios characterized by limited graph data.
Validation across five real-world datasets sourced from the
PeerWise platform underscores our approach's effectiveness. Our method
outperforms baselines, showcasing enhanced predictive accuracy and robustness.
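The abstract's core data model can be illustrated with a minimal sketch: student answers form a signed bipartite graph (edge weight +1 for a correct answer, -1 for an incorrect one), and questions carry embedding vectors that, in the paper's setting, would come from an LLM. All names here (`build_signed_bipartite_graph`, `embed_question`, the answer-record format) are illustrative assumptions, not the authors' code, and the embedding function is a deterministic toy stand-in for a real LLM encoder.

```python
# Hedged sketch of the signed-bipartite-graph data model described in the
# abstract. Names and data shapes are assumptions for illustration only.
from collections import defaultdict

def embed_question(text, dim=4):
    """Toy stand-in for an LLM embedding call: a deterministic vector.
    In the paper's setting this would be produced by an actual LLM."""
    h = sum(ord(c) for c in text)
    return [((h >> i) % 7) / 7.0 for i in range(dim)]

def build_signed_bipartite_graph(answers):
    """answers: iterable of (student_id, question_id, correct: bool).
    Returns a signed adjacency map: +1 = correct, -1 = incorrect."""
    adj = defaultdict(dict)
    for student, question, correct in answers:
        sign = 1 if correct else -1
        # Bipartite: students only connect to questions, and vice versa.
        adj[("s", student)][("q", question)] = sign
        adj[("q", question)][("s", student)] = sign
    return adj

answers = [("alice", "q1", True), ("alice", "q2", False), ("bob", "q1", False)]
graph = build_signed_bipartite_graph(answers)
# Question embeddings would seed the GNN's node features, easing cold start.
q_feats = {q: embed_question(q) for q in {a[1] for a in answers}}

print(graph[("s", "alice")][("q", "q2")])  # -1: signed edge for a wrong answer
```

In a full implementation, an SGNN would propagate over these signed edges while a contrastive objective (not sketched here) counteracts label noise; the LLM-derived question features let unseen or sparsely answered questions start from informative representations rather than random initialization.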
Related papers
- Enhancing Graph Neural Networks with Limited Labeled Data by Actively Distilling Knowledge from Large Language Models [30.867447814409623]
Graph neural networks (GNNs) have great ability in node classification, a fundamental task on graphs.
We propose a novel approach that integrates Large Language Models (LLMs) and GNNs.
Our model improves node classification accuracy with considerably limited labeled data, surpassing state-of-the-art baselines by significant margins.
arXiv Detail & Related papers (2024-07-19T02:34:10Z) - Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning [25.90420385230675]
The pursuit of personalized education has led to the integration of Large Language Models (LLMs) in developing intelligent tutoring systems.
Our research uncovers a fundamental challenge in this approach: the "Student Data Paradox".
This paradox emerges when LLMs, trained on student data to understand learner behavior, inadvertently compromise their own factual knowledge and reasoning abilities.
arXiv Detail & Related papers (2024-04-23T15:57:55Z) - Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction [104.29108668347727]
This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models.
The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies.
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.
arXiv Detail & Related papers (2023-07-03T16:01:45Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - Knowledge Tracing for Complex Problem Solving: Granular Rank-Based
Tensor Factorization [6.077274947471846]
We propose a novel student knowledge tracing approach, Granular RAnk based TEnsor factorization (GRATE).
GRATE selects student attempts that can be aggregated while predicting students' performance in problems and discovering the concepts presented in them.
Our experiments on three real-world datasets demonstrate the improved performance of GRATE, compared to the state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-06T06:22:46Z) - Interpretable Knowledge Tracing: Simple and Efficient Student Modeling
with Causal Relations [21.74631969428855]
Interpretable Knowledge Tracing (IKT) is a simple model that relies on three meaningful latent features.
IKT's prediction of future student performance is made using a Tree-Augmented Naive Bayes (TAN) classifier.
IKT has great potential for providing adaptive and personalized instructions with causal reasoning in real-world educational systems.
arXiv Detail & Related papers (2021-12-15T19:05:48Z) - Peer-inspired Student Performance Prediction in Interactive Online
Question Pools with Graph Neural Network [56.62345811216183]
We propose a novel approach using Graph Neural Networks (GNNs) to achieve better student performance prediction in interactive online question pools.
Specifically, we model the relationship between students and questions using student interactions to construct the student-interaction-question network.
We evaluate the effectiveness of our approach on a real-world dataset consisting of 104,113 mouse trajectories generated in the problem-solving process of over 4000 students on 1631 questions.
arXiv Detail & Related papers (2020-08-04T14:55:32Z) - Analyzing Student Strategies In Blended Courses Using Clickstream Data [32.81171098036632]
We use pattern mining and models borrowed from Natural Language Processing to understand student interactions.
Fine-grained clickstream data is collected through Diderot, a non-commercial educational support system.
Our results suggest that the proposed hybrid NLP methods can provide valuable insights even in the low-data setting of blended courses.
arXiv Detail & Related papers (2020-05-31T03:01:00Z) - Mining Implicit Entity Preference from User-Item Interaction Data for
Knowledge Graph Completion via Adversarial Learning [82.46332224556257]
We propose a novel adversarial learning approach by leveraging user interaction data for the Knowledge Graph Completion task.
Our generator is isolated from user interaction data, and serves to improve the performance of the discriminator.
To discover implicit entity preference of users, we design an elaborate collaborative learning algorithm based on graph neural networks.
arXiv Detail & Related papers (2020-03-28T05:47:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.