Attentive Q-Matrix Learning for Knowledge Tracing
- URL: http://arxiv.org/abs/2304.08168v2
- Date: Wed, 17 May 2023 07:38:56 GMT
- Title: Attentive Q-Matrix Learning for Knowledge Tracing
- Authors: Zhongfeng Jia, Wei Su, Jiamin Liu, Wenli Yue
- Abstract summary: We propose Q-matrix-based Attentive Knowledge Tracing (QAKT) as an end-to-end model.
QAKT models problems hierarchically and learns the q-matrix efficiently from students' interaction sequences.
Results of further experiments suggest that the q-matrix learned by QAKT is highly model-agnostic and carries more information than the one labeled by human experts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of Intelligent Tutoring Systems (ITS) over the
past decade, tracing students' knowledge states has become increasingly
important for providing individualized learning guidance. This is the
main idea of Knowledge Tracing (KT), which models students' mastery of
knowledge concepts (KCs, the skills needed to solve a question) based on their
past interactions on learning platforms. Many KT models have been proposed
recently and have shown remarkable performance. However, the majority of these
models use concepts to index questions, meaning that predefined skill tags are
required in advance for each question to indicate the KCs needed to answer it
correctly. This makes such models difficult to apply on large-scale online
education platforms, where questions are often not well organized by skill tags.
In this paper, we propose Q-matrix-based Attentive Knowledge Tracing (QAKT), an
end-to-end model that applies the attentive method in settings where no
predefined skill tags are available, without sacrificing performance. With a
novel hybrid embedding method based on the q-matrix and the Rasch model, QAKT
models problems hierarchically and learns the q-matrix efficiently from
students' interaction sequences. Meanwhile, the architecture of QAKT handles
questions associated with multiple skills naturally and offers strong
interpretability. Experiments on a variety of open datasets empirically
validate that our model performs on par with or better than state-of-the-art KT
methods. Further experiments suggest that the q-matrix learned by QAKT is
highly model-agnostic and carries more information than the one labeled by
human experts, which could help with data mining tasks in existing ITSs.
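
The paper itself includes no code, so the following is only a minimal numpy sketch of the hybrid embedding idea: a learnable soft q-matrix whose rows mix skill embeddings, combined with a Rasch-style scalar difficulty. All names, dimensions, and the random initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_questions, n_skills, dim = 100, 10, 32

# Soft q-matrix: row q holds the (learnable) association strengths between
# question q and each latent skill. QAKT learns this from students'
# interaction sequences; here it is simply randomly initialized.
Q = rng.uniform(size=(n_questions, n_skills))

c = rng.normal(size=(n_skills, dim))   # skill (KC) embeddings
v = rng.normal(size=(n_skills, dim))   # per-skill variation vectors (Rasch-style)
mu = rng.normal(size=n_questions)      # scalar difficulty per question (Rasch-style)

def question_embedding(q: int) -> np.ndarray:
    """Hybrid embedding: mix the skills named by the q-matrix row, then add a
    difficulty-scaled deviation, i.e. x_q = Q[q] @ c + mu[q] * (Q[q] @ v)."""
    skill_mix = Q[q] @ c    # aggregate the skills this question involves
    variation = Q[q] @ v    # how the question deviates from those skills
    return skill_mix + mu[q] * variation

print(question_embedding(3).shape)  # (32,)
```

In actual training, Q, c, v, and mu would be learned jointly from students' response sequences, and the learned rows of Q could then be thresholded into an interpretable binary q-matrix.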
Related papers
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [75.78948575957081] (2024-11-22)
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials on the fly.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
- Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing [59.480951050911436] (2024-10-02)
We present KCQRL, a framework for automated knowledge concept annotation and question representation learning.
We demonstrate the effectiveness of KCQRL across 15 KT algorithms on two large real-world Math learning datasets.
- SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105] (2024-07-01)
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
We propose a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745] (2024-06-19)
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on math question knowledge tagging tasks.
By proposing a reinforcement learning-based demonstration retriever, we exploit the potential of LLMs of different sizes.
- A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146] (2024-03-12)
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
- APGKT: Exploiting Associative Path on Skills Graph for Knowledge Tracing [8.751819506454964] (2022-10-05)
We propose a KT model, called APGKT, that exploits skill modes.
We extract the subgraph topology of the skills involved in the question and combine the difficulty level of the skills to obtain the skill modes via encoding.
We obtain a student's higher-order cognitive states of skills, which are used to predict the student's future answering performance.
- Prerequisite-driven Q-matrix Refinement for Learner Knowledge Assessment: A Case Study in Online Learning Context [2.221779410386775] (2022-08-24)
We propose a prerequisite-driven Q-matrix refinement framework for learner knowledge assessment (PQRLKA) in online context.
We infer the prerequisites from learners' response data and use them to refine the expert-defined Q-matrix.
Based on the refined Q-matrix, we propose a Metapath2Vec enhanced convolutional representation method to obtain the comprehensive representations of the items.
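
The summary does not spell out PQRLKA's refinement rule. One plausible reading, sketched below purely as an illustration (the prerequisite pairs, the propagation rule, and all data are hypothetical), is that an item tagged with a skill also gets tagged with that skill's inferred prerequisites:

```python
import numpy as np

# Expert-defined binary Q-matrix: rows are items, columns are skills.
Q = np.array([
    [0, 1, 0],   # item 0 requires skill 1 only
    [0, 0, 1],   # item 1 requires skill 2 only
    [1, 0, 0],   # item 2 requires skill 0 only
], dtype=int)

# Hypothetical prerequisites inferred from response data: (a, b) means
# skill a is a prerequisite of skill b.
prerequisites = [(0, 1), (1, 2)]

def refine(Q, prerequisites):
    """If an item requires skill b and skill a is a prerequisite of b, mark
    the item as also requiring skill a; iterate so chains propagate."""
    Q = Q.copy()
    changed = True
    while changed:
        changed = False
        for a, b in prerequisites:
            mask = (Q[:, b] == 1) & (Q[:, a] == 0)
            if mask.any():
                Q[mask, a] = 1
                changed = True
    return Q

print(refine(Q, prerequisites))
# item 1 gains skill 1 via (1, 2), then skill 0 via (0, 1)
```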
- KILT: a Benchmark for Knowledge Intensive Language Tasks [102.33046195554886] (2020-09-04)
We present a benchmark for knowledge-intensive language tasks (KILT).
All tasks in KILT are grounded in the same snapshot of Wikipedia.
We find that a shared dense vector index coupled with a seq2seq model is a strong baseline.
- qDKT: Question-centric Deep Knowledge Tracing [29.431121650577396] (2020-05-25)
We introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time.
qDKT incorporates graph Laplacian regularization to smooth predictions under each skill.
Experiments on several real-world datasets show that qDKT achieves state-of-the-art performance on predicting learner outcomes.
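
For intuition, the Laplacian smoothness term can be written as a quadratic form: for a symmetric adjacency A, p^T L p with L = D - A equals (1/2) * sum_ij A_ij (p_i - p_j)^2, so predictions on connected (same-skill) questions are pulled together. A minimal numpy sketch with an invented toy graph (the adjacency, weight, and probabilities below are illustrative, not from the paper):

```python
import numpy as np

# Toy graph over 5 questions: A[i, j] = 1 if questions i and j share a skill.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A   # graph Laplacian: D - A

def laplacian_penalty(p, lam=0.1):
    """lam * p^T L p; equals lam/2 * sum_ij A_ij (p_i - p_j)^2, penalizing
    predictions that differ across questions sharing a skill."""
    return lam * float(p @ L @ p)

p = np.array([0.9, 0.8, 0.85, 0.2, 0.6])  # model's success probabilities
print(laplacian_penalty(p))
```

In training, this penalty would be added to the usual binary cross-entropy loss over observed responses.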
- KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization and Completion [99.47414073164656] (2020-04-28)
A comprehensive knowledge graph (KG) contains an instance-level entity graph and an ontology-level concept graph.
The two-view KG provides a testbed for models to "simulate" humans' abilities in knowledge abstraction, concretization, and completion.
We propose a unified KG benchmark by improving existing benchmarks in terms of dataset scale, task coverage, and difficulty.