Knowledge Tracing for Complex Problem Solving: Granular Rank-Based
Tensor Factorization
- URL: http://arxiv.org/abs/2210.09013v1
- Date: Thu, 6 Oct 2022 06:22:46 GMT
- Title: Knowledge Tracing for Complex Problem Solving: Granular Rank-Based
Tensor Factorization
- Authors: Chunpai Wang, Shaghayegh Sahebi, Siqian Zhao, Peter Brusilovsky, Laura
O. Moraes
- Abstract summary: We propose a novel student knowledge tracing approach, Granular RAnk based TEnsor factorization (GRATE).
GRATE selects student attempts that can be aggregated while predicting students' performance in problems and discovering the concepts presented in them.
Our experiments on three real-world datasets demonstrate the improved performance of GRATE, compared to the state-of-the-art baselines.
- Score: 6.077274947471846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Tracing (KT), which aims to model student knowledge level and
predict their performance, is one of the most important applications of user
modeling. Modern KT approaches model and maintain an up-to-date state of
student knowledge over a set of course concepts according to students'
historical performance in attempting the problems. However, KT approaches were
designed to model knowledge by observing relatively small problem-solving steps
in Intelligent Tutoring Systems. While these approaches were applied
successfully to model student knowledge by observing student solutions for
simple problems, they do not perform well for modeling complex problem solving
in students. Most importantly, current models assume that all problem attempts
are equally valuable in quantifying current student knowledge. However, for
complex problems that involve many concepts at the same time, this assumption
is deficient. In this paper, we argue that not all attempts are equivalently
important in discovering students' knowledge state, and some attempts can be
summarized together to better represent student performance. We propose a novel
student knowledge tracing approach, Granular RAnk based TEnsor factorization
(GRATE), that dynamically selects student attempts that can be aggregated while
predicting students' performance in problems and discovering the concepts
presented in them. Our experiments on three real-world datasets demonstrate the
improved performance of GRATE, compared to the state-of-the-art baselines, in
the task of student performance prediction. Our further analysis shows that
attempt aggregation eliminates the unnecessary fluctuations from students'
discovered knowledge states and helps in discovering complex latent concepts in
the problems.
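To make the tensor-factorization idea behind this line of work concrete, the sketch below arranges student performance as a three-way tensor (students x problems x attempts) and fits a rank-R CP decomposition with alternating least squares. This is an illustrative sketch only, not the GRATE algorithm itself (GRATE additionally performs rank-based selection and aggregation of attempts); all sizes, the simulated data, and the ALS solver are assumptions chosen for a self-contained demo.

```python
import numpy as np

# Performance tensor Y[s, p, t] ~ sum_r S[s, r] * P[p, r] * T[t, r]
# (students x problems x attempts, rank-R CP decomposition).
rng = np.random.default_rng(0)
n_students, n_problems, n_attempts, rank = 6, 5, 4, 2

# Simulate a noiseless low-rank performance tensor from ground-truth factors.
Y = np.einsum("sr,pr,tr->spt",
              rng.random((n_students, rank)),
              rng.random((n_problems, rank)),
              rng.random((n_attempts, rank)))

def khatri_rao(A, B):
    # Column-wise Kronecker product: (I, R) and (J, R) -> (I*J, R).
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

def rmse(S, P, T):
    return np.sqrt(np.mean((np.einsum("sr,pr,tr->spt", S, P, T) - Y) ** 2))

# Random initialization, then alternating least squares: each step solves a
# linear least-squares problem for one factor with the other two held fixed.
S = rng.random((n_students, rank))
P = rng.random((n_problems, rank))
T = rng.random((n_attempts, rank))
initial_error = rmse(S, P, T)

for _ in range(50):
    S = np.linalg.lstsq(khatri_rao(P, T),
                        Y.reshape(n_students, -1).T, rcond=None)[0].T
    P = np.linalg.lstsq(khatri_rao(S, T),
                        Y.transpose(1, 0, 2).reshape(n_problems, -1).T,
                        rcond=None)[0].T
    T = np.linalg.lstsq(khatri_rao(S, P),
                        Y.transpose(2, 0, 1).reshape(n_attempts, -1).T,
                        rcond=None)[0].T

final_error = rmse(S, P, T)
print(f"RMSE: {initial_error:.3f} -> {final_error:.3f}")
```

Each ALS sweep cannot increase the reconstruction error, so the fit improves monotonically; the learned factor rows can then be read as latent student-knowledge and problem-concept profiles.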
Related papers
- LLM-based Cognitive Models of Students with Misconceptions [55.29525439159345]
This paper investigates whether Large Language Models (LLMs) can be instruction-tuned to meet this dual requirement.
We introduce MalAlgoPy, a novel Python library that generates datasets reflecting authentic student solution patterns.
Our insights enhance our understanding of AI-based student models and pave the way for effective adaptive learning systems.
arXiv Detail & Related papers (2024-10-16T06:51:09Z) - BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts [59.83547898874152]
We introduce BloomWise, a new prompting technique inspired by Bloom's taxonomy, to improve the performance of Large Language Models (LLMs).
The decision regarding the need to employ more sophisticated cognitive skills is based on self-evaluation performed by the LLM.
In extensive experiments across 4 popular math reasoning datasets, we have demonstrated the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-10-05T09:27:52Z) - SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
We propose a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z) - Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning [25.90420385230675]
The pursuit of personalized education has led to the integration of Large Language Models (LLMs) in developing intelligent tutoring systems.
Our research uncovers a fundamental challenge in this approach: the "Student Data Paradox".
This paradox emerges when LLMs, trained on student data to understand learner behavior, inadvertently compromise their own factual knowledge and reasoning abilities.
arXiv Detail & Related papers (2024-04-23T15:57:55Z) - Enhancing Student Performance Prediction on Learnersourced Questions
with SGNN-LLM Synergy [11.735587384038753]
We introduce an innovative strategy that combines Signed Graph Neural Networks (SGNNs) with Large Language Model (LLM) embeddings.
Our methodology employs a signed bipartite graph to comprehensively model student answers, complemented by a contrastive learning framework that enhances noise resilience.
arXiv Detail & Related papers (2023-09-23T23:37:55Z) - A Probabilistic Generative Model for Tracking Multi-Knowledge Concept
Mastery Probability [8.920928164556171]
We propose an inTerpretable pRobAbilistiC gEnerative moDel (TRACED) which can track students' numerous knowledge concepts mastery probabilities over time.
We conduct experiments with four real-world datasets in three knowledge-driven tasks.
The experimental results show that TRACED outperforms existing knowledge tracing methods in predicting students' future performance.
arXiv Detail & Related papers (2023-02-17T03:50:49Z) - GLUECons: A Generic Benchmark for Learning Under Constraints [102.78051169725455]
In this work, we create a benchmark that is a collection of nine tasks in the domains of natural language processing and computer vision.
We model external knowledge as constraints, specify the sources of the constraints for each task, and implement various models that use these constraints.
arXiv Detail & Related papers (2023-02-16T16:45:36Z) - Distantly-Supervised Named Entity Recognition with Adaptive Teacher
Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks are proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning comprised of two teacher-student networks.
Fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise.
arXiv Detail & Related papers (2022-12-13T12:14:09Z) - What Makes Good Contrastive Learning on Small-Scale Wearable-based
Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents CL-HAR, an open-source PyTorch library that can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z) - Interpretable Knowledge Tracing: Simple and Efficient Student Modeling
with Causal Relations [21.74631969428855]
Interpretable Knowledge Tracing (IKT) is a simple model that relies on three meaningful latent features.
IKT predicts future student performance using a Tree-Augmented Naive Bayes (TAN) classifier.
IKT has great potential for providing adaptive and personalized instructions with causal reasoning in real-world educational systems.
arXiv Detail & Related papers (2021-12-15T19:05:48Z) - Relaxed Clustered Hawkes Process for Procrastination Modeling in MOOCs [1.6822770693792826]
We propose a novel personalized Hawkes process model (RCHawkes-Gamma) that discovers meaningful student behavior clusters.
Our experiments on both synthetic and real-world education datasets show that RCHawkes-Gamma can effectively recover student clusters.
arXiv Detail & Related papers (2021-01-29T22:20:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.