Augmenting Interpretable Knowledge Tracing by Ability Attribute and
Attention Mechanism
- URL: http://arxiv.org/abs/2302.02146v1
- Date: Sat, 4 Feb 2023 11:19:55 GMT
- Title: Augmenting Interpretable Knowledge Tracing by Ability Attribute and
Attention Mechanism
- Authors: Yuqi Yue, Xiaoqing Sun, Weidong Ji, Zengxiang Yin, Chenghong Sun
- Abstract summary: Knowledge tracing aims to model students' past answer sequences to track the change in their knowledge acquisition during exercise activities.
Most existing approaches ignore the fact that students' abilities change over time and vary between individuals.
We propose a novel model based on ability attributes and an attention mechanism.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge tracing aims to model students' past answer sequences to track the
change in their knowledge acquisition during exercise activities and to predict
their future learning performance. Most existing approaches ignore the fact
that students' abilities change over time and vary between individuals, and
they lack interpretability in their predictions. To this end, in this paper,
we propose a novel model based on ability attributes and an attention
mechanism. We first segment the interaction sequences and capture students'
ability attributes, then dynamically assign students to groups with similar
abilities, and quantify the relevance of each exercise to a skill by
calculating the attention weights between exercises and skills to enhance the
interpretability of the model. We conducted extensive experiments on real
online education datasets. The results confirm that the proposed model is
better at predicting performance than five well-known representative knowledge
tracing models, and the model prediction results are explained through an
inference path.
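The exercise-skill attention weights described in the abstract could, in principle, be computed as scaled dot-product attention between exercise and skill embeddings. The sketch below is an illustrative assumption (the function name, embedding sizes, and toy data are invented here), not the paper's actual implementation:

```python
import numpy as np

def exercise_skill_attention(exercise_emb, skill_emb):
    """Compute softmax attention weights of one exercise over all skills.

    exercise_emb: (d,) embedding of a single exercise
    skill_emb:    (n_skills, d) embeddings of the skills
    Returns a (n_skills,) probability vector: higher weight = more relevant skill.
    """
    d = exercise_emb.shape[0]
    scores = skill_emb @ exercise_emb / np.sqrt(d)   # scaled dot-product scores
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    return weights

# Toy example: 3 skills with 4-dimensional embeddings.
rng = np.random.default_rng(0)
skills = rng.normal(size=(3, 4))
exercise = skills[1] + 0.1 * rng.normal(size=4)      # exercise near skill 1
w = exercise_skill_attention(exercise, skills)
print(w)
```

Because the weights form a probability distribution over skills, they can be read directly as a relevance ranking, which is the usual route to the interpretability claim.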
Related papers
- Mamba4KT: An Efficient and Effective Mamba-based Knowledge Tracing Model [8.432717706752937]
Knowledge tracing enhances student learning by leveraging past performance to predict future performance.
The growing amount of data in smart education scenarios poses a challenge in terms of time and space consumption for knowledge tracing models.
Mamba4KT is the first to explore enhanced efficiency and resource utilization in knowledge tracing.
arXiv Detail & Related papers (2024-05-26T12:26:03Z) - Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z) - Machine Learning Approach for Predicting Students Academic Performance
and Study Strategies based on their Motivation [0.0]
This research aims to develop machine learning models for predicting students' academic performance and study strategies.
Key learning attributes (intrinsic, extrinsic, autonomy, relatedness, competence, and self-esteem) essential to students' learning process were used in building the models.
arXiv Detail & Related papers (2022-10-15T04:09:05Z) - Task Formulation Matters When Learning Continually: A Case Study in
Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z) - Predicting student performance using sequence classification with
time-based windows [1.5836913530330787]
We show that accurate predictive models can be built based on sequential patterns derived from students' behavioral data.
We present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models.
The results of our improved sequence classification technique are capable of predicting student performance with high levels of accuracy, reaching 90 percent for course-specific models.
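A common way to capture temporal aspects of behavioral data is to bucket events into fixed time-based windows counted backwards from the prediction point. The sketch below is a hedged illustration of that idea; the function, window sizes, and toy activity log are assumptions here, not this paper's exact method:

```python
from collections import Counter

def windowed_features(events, window_len, n_windows, horizon):
    """Aggregate per-event-type counts into fixed time-based windows.

    events:  list of (timestamp, event_type), timestamps in the same unit
             as window_len (e.g. days)
    horizon: the prediction cutoff time; windows count backwards from it
    Returns one Counter per window; window 0 is the most recent.
    """
    windows = [Counter() for _ in range(n_windows)]
    for t, etype in events:
        age = horizon - t
        idx = int(age // window_len)
        if 0 <= idx < n_windows:
            windows[idx][etype] += 1
    return windows

# Toy log: (day, action) pairs, predicting at day 14 with two 7-day windows.
log = [(1, "video"), (3, "quiz"), (9, "video"), (12, "quiz"), (13, "video")]
feats = windowed_features(log, window_len=7, n_windows=2, horizon=14)
print(feats[0]["video"])  # videos watched in the most recent window
```

Each window's counts then become one block of the feature vector fed to the sequence classifier, so recent and older activity are represented separately.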
arXiv Detail & Related papers (2022-08-16T13:46:39Z) - Plex: Towards Reliability using Pretrained Large Model Extensions [69.13326436826227]
We develop ViT-Plex and T5-Plex, pretrained large model extensions for vision and language modalities, respectively.
Plex greatly improves the state-of-the-art across reliability tasks, and simplifies the traditional protocol.
We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples.
arXiv Detail & Related papers (2022-07-15T11:39:37Z) - Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias for modelling the character traits of agents and hence improve mindreading ability.
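A fast-weights-style multiplicative modulation can be sketched as scaling a shared weight matrix row-wise by the trait vector. The code below is an illustrative assumption (the shapes, names, and tanh output layer are invented here), not the paper's actual architecture:

```python
import numpy as np

def fast_weight_prediction(obs, trait, W_slow, b):
    """Prediction layer whose weights are multiplicatively modulated
    by a per-actor trait vector (a fast-weights-style scheme).

    obs:    (d_in,)  current observation features
    trait:  (d_out,) latent trait vector inferred from past trajectories
    W_slow: (d_out, d_in) shared slow weights
    b:      (d_out,) bias
    """
    W_fast = trait[:, None] * W_slow     # trait scales each output row
    return np.tanh(W_fast @ obs + b)

rng = np.random.default_rng(1)
obs = rng.normal(size=5)
trait = rng.normal(size=3)               # e.g. produced by a trait encoder
W = rng.normal(size=(3, 5))
b = np.zeros(3)
out = fast_weight_prediction(obs, trait, W, b)
print(out.shape)
```

The slow weights are shared across actors, while the trait vector specializes the prediction per actor without changing the network's parameter count.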
arXiv Detail & Related papers (2022-04-17T11:21:18Z) - Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Do we need to go Deep? Knowledge Tracing with Big Data [5.218882272051637]
We use EdNet, the largest student interaction dataset publicly available in the education domain, to understand how accurately both deep and traditional models predict future student performances.
In extensive experiments, we observe that logistic regression models with carefully engineered features outperform deep models.
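One common family of engineered features for such logistic-regression baselines counts a student's prior correct and incorrect attempts per skill, log-scaled. The sketch below illustrates that feature construction under assumed names and a toy answer sequence; it is not this paper's exact feature set:

```python
import numpy as np

def make_features(interactions, n_skills):
    """Build per-interaction features from a student's time-ordered history.

    interactions: list of (skill_id, correct) pairs in time order
    Each row holds log-scaled counts of prior successes and prior failures
    on the attempted skill; a plain logistic regression is then fit on X, y.
    """
    X, y = [], []
    correct = np.zeros(n_skills)
    wrong = np.zeros(n_skills)
    for skill, c in interactions:
        feat = np.zeros(2 * n_skills)
        feat[skill] = np.log1p(correct[skill])           # prior successes
        feat[n_skills + skill] = np.log1p(wrong[skill])  # prior failures
        X.append(feat)
        y.append(c)
        (correct if c else wrong)[skill] += 1            # update counts after
    return np.array(X), np.array(y)

# Toy sequence: (skill_id, correct) pairs for one student.
seq = [(0, 0), (0, 1), (0, 1), (1, 0), (1, 1), (0, 1)]
X, y = make_features(seq, n_skills=2)
print(X.shape)  # one feature row per interaction
```

Counts are updated only after the feature row is emitted, so each prediction uses strictly past information, mirroring the knowledge tracing setup.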
arXiv Detail & Related papers (2021-01-20T22:40:38Z) - Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.