Improving Knowledge Tracing via Pre-training Question Embeddings
- URL: http://arxiv.org/abs/2012.05031v1
- Date: Wed, 9 Dec 2020 13:21:23 GMT
- Title: Improving Knowledge Tracing via Pre-training Question Embeddings
- Authors: Yunfei Liu, Yang Yang, Xianyu Chen, Jian Shen, Haifeng Zhang, Yong Yu
- Abstract summary: Knowledge tracing (KT) defines the task of predicting whether students can correctly answer questions based on their historical responses.
In this paper, we demonstrate that large gains on KT can be realized by pre-training embeddings for each question on abundant side information.
To be specific, the side information includes question difficulty and three kinds of relations contained in a bipartite graph between questions and skills.
- Score: 25.611547414936553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge tracing (KT) defines the task of predicting whether students can
correctly answer questions based on their historical responses. Although much
research has been devoted to exploiting the question information, plentiful
advanced information among questions and skills hasn't been well extracted,
making it challenging for previous work to perform adequately. In this paper,
we demonstrate that large gains on KT can be realized by pre-training
embeddings for each question on abundant side information, followed by training
deep KT models on the obtained embeddings. To be specific, the side information
includes question difficulty and three kinds of relations contained in a
bipartite graph between questions and skills. To pre-train the question
embeddings, we propose to use product-based neural networks to recover the side
information. As a result, adopting the pre-trained embeddings in existing deep
KT models significantly outperforms state-of-the-art baselines on three common
KT datasets.
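To make the pre-training idea above concrete, here is a minimal, hedged sketch: question and skill embeddings are fit so that simple product-style interactions recover one of the bipartite relations (question-skill links) together with question difficulty. The class name, dimensions, and loss weighting are illustrative assumptions rather than the authors' released implementation, and the other two relation types are omitted for brevity.

```python
# Illustrative sketch of pre-training question embeddings on side information
# (question difficulty plus question-skill links from the bipartite graph).
# All names and hyperparameters here are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionEmbeddingPretrainer(nn.Module):
    def __init__(self, num_questions, num_skills, dim=64):
        super().__init__()
        self.q_emb = nn.Embedding(num_questions, dim)  # question embeddings to be pre-trained
        self.s_emb = nn.Embedding(num_skills, dim)     # skill embeddings
        # product-based head: element-wise products feed a small MLP predicting difficulty
        self.difficulty_head = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, q_idx, s_idx):
        q, s = self.q_emb(q_idx), self.s_emb(s_idx)
        qs_logit = (q * s).sum(-1)                       # inner product recovers the question-skill link
        difficulty = self.difficulty_head(q * s).squeeze(-1)
        return qs_logit, difficulty

# Toy usage: one positive question-skill pair with an observed difficulty label.
model = QuestionEmbeddingPretrainer(num_questions=100, num_skills=20)
q_idx, s_idx = torch.tensor([3]), torch.tensor([7])
link_label = torch.tensor([1.0])         # question 3 is tagged with skill 7
difficulty_label = torch.tensor([0.42])  # e.g. empirical error rate as a difficulty proxy
qs_logit, diff_pred = model(q_idx, s_idx)
loss = F.binary_cross_entropy_with_logits(qs_logit, link_label) + F.mse_loss(diff_pred, difficulty_label)
loss.backward()
# After pre-training, model.q_emb.weight would be handed to a downstream deep KT model.
```

In the paper's setting, the pre-trained question vectors replace one-hot or randomly initialized question inputs of existing deep KT models; the sketch only shows how product-style objectives can tie the embeddings to the side information.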
Related papers
- Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing [59.480951050911436]
We present KCQRL, a framework for automated knowledge concept annotation and question representation learning.
We demonstrate the effectiveness of KCQRL across 15 KT algorithms on two large real-world Math learning datasets.
arXiv Detail & Related papers (2024-10-02T16:37:19Z) - Enhancing Knowledge Tracing with Concept Map and Response Disentanglement [5.201585012263761]
We propose the Concept map-driven Response disentanglement method for enhancing Knowledge Tracing (CRKT) model.
CRKT benefits KT by directly leveraging answer choices--beyond merely identifying correct or incorrect answers--to distinguish responses with different incorrect choices.
We further introduce the novel use of unchosen responses by employing disentangled representations to get insights from options not selected by students.
arXiv Detail & Related papers (2024-08-23T11:25:56Z) - SINKT: A Structure-Aware Inductive Knowledge Tracing Model with Large Language Model [64.92472567841105]
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question.
We propose a Structure-aware Inductive Knowledge Tracing model with a large language model (dubbed SINKT).
SINKT predicts the student's response to the target question by interacting with the student's knowledge state and the question representation.
arXiv Detail & Related papers (2024-07-01T12:44:52Z) - A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z) - A Survey on Temporal Knowledge Graph Completion: Taxonomy, Progress, and
Prospects [73.44022660932087]
Temporal characteristics are prominently evident in a substantial volume of knowledge.
The continuous emergence of new knowledge, the weakness of algorithms for extracting structured information from unstructured data, and the lack of information in source datasets are cited as reasons why knowledge graphs remain incomplete.
The task of Temporal Knowledge Graph Completion (TKGC) has attracted increasing attention, aiming to predict missing items based on the available information.
arXiv Detail & Related papers (2023-08-04T16:49:54Z) - Improving Interpretability of Deep Sequential Knowledge Tracing Models
with Question-centric Cognitive Representations [22.055683237994696]
We present QIKT, a question-centric interpretable KT model to address the above challenges.
The proposed QIKT approach explicitly models students' knowledge state variations at a fine-grained level.
It outperforms a wide range of deep learning based KT models in terms of prediction accuracy with better model interpretability.
arXiv Detail & Related papers (2023-02-14T08:14:30Z) - A Survey of Knowledge Tracing: Models, Variants, and Applications [70.69281873057619]
Knowledge Tracing is one of the fundamental tasks for student behavioral data analysis.
We present three types of fundamental KT models with distinct technical routes.
We discuss potential directions for future research in this rapidly growing field.
arXiv Detail & Related papers (2021-05-06T13:05:55Z) - Knowledge-Routed Visual Question Reasoning: Challenges for Deep
Representation Embedding [140.5911760063681]
We propose a novel dataset named Knowledge-Routed Visual Question Reasoning for VQA model evaluation.
We generate the question-answer pair based on both the Visual Genome scene graph and an external knowledge base with controlled programs.
arXiv Detail & Related papers (2020-12-14T00:33:44Z) - GIKT: A Graph-based Interaction Model for Knowledge Tracing [36.07642261246016]
We propose a Graph-based Interaction model for Knowledge Tracing (GIKT) to tackle the above problems.
More specifically, GIKT utilizes graph convolutional network (GCN) to substantially incorporate question-skill correlations.
Experiments on three datasets demonstrate that GIKT achieves the new state-of-the-art performance, with at least 1% absolute AUC improvement.
arXiv Detail & Related papers (2020-09-13T12:50:32Z) - A Survey on Complex Question Answering over Knowledge Base: Recent
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z) - qDKT: Question-centric Deep Knowledge Tracing [29.431121650577396]
We introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time.
qDKT incorporates graph Laplacian regularization to smooth predictions under each skill.
Experiments on several real-world datasets show that qDKT achieves state-of-the-art performance on predicting learner outcomes.
arXiv Detail & Related papers (2020-05-25T23:43:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.