Prerequisite-driven Q-matrix Refinement for Learner Knowledge
Assessment: A Case Study in Online Learning Context
- URL: http://arxiv.org/abs/2208.12642v2
- Date: Wed, 31 Aug 2022 03:00:25 GMT
- Title: Prerequisite-driven Q-matrix Refinement for Learner Knowledge
Assessment: A Case Study in Online Learning Context
- Authors: Wenbin Gan and Yuan Sun
- Abstract summary: We propose a prerequisite-driven Q-matrix refinement framework for learner knowledge assessment (PQRLKA) in online context.
We infer the prerequisites from learners' response data and use them to refine the expert-defined Q-matrix.
Based on the refined Q-matrix, we propose a Metapath2Vec enhanced convolutional representation method to obtain the comprehensive representations of the items.
- Score: 2.221779410386775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ever-growing abundance of learning traces on online learning
platforms promises unique insights into learner knowledge assessment (LKA),
a fundamental personalized-tutoring technique that enables various further
adaptive tutoring services on these platforms. Precise assessment of learner
knowledge requires a fine-grained Q-matrix, which is generally designed by
experts to map items to skills in the domain. Owing to this subjective
process, misspecifications may arise and degrade the performance of LKA. Some
efforts have been made to refine small-scale Q-matrices; however, these
methods are difficult to scale to the large-scale online learning context
with its numerous items and skills. Moreover, existing LKA models employ
flexible deep learning models that excel at this task, but the adequacy of
LKA is still limited by the models' representation capability on the quite
sparse item-skill graph and the learners' exercise data. To overcome these
issues, in this paper we propose a prerequisite-driven Q-matrix refinement
framework for learner knowledge assessment (PQRLKA) in the online context. We
infer prerequisites from learners' response data and use them to refine the
expert-defined Q-matrix, which provides interpretability and the scalability
to apply the framework to the large-scale online learning context. Based on
the refined Q-matrix, we propose a Metapath2Vec-enhanced convolutional
representation method to obtain comprehensive, information-rich
representations of the items, and feed them to the PQRLKA model to finally
assess the learners' knowledge. Experiments conducted on three real-world
datasets demonstrate our model's capability to infer prerequisites for
Q-matrix refinement, as well as its superiority on the LKA task.
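To make the refinement idea concrete, here is a minimal sketch of how an inferred prerequisite relation could be used to augment an expert-defined binary Q-matrix. This is an illustrative assumption, not the paper's actual algorithm: the toy matrix, the prerequisite pairs, and the rule "an item requiring a skill implicitly also exercises that skill's prerequisites" are all hypothetical.

```python
# Toy expert-defined Q-matrix: rows = items, columns = skills.
# Q[i][k] = 1 means item i is labeled as requiring skill k.
Q = [
    [1, 0, 0],  # item 0 requires skill 0
    [0, 1, 0],  # item 1 requires skill 1
    [0, 0, 1],  # item 2 requires skill 2
]

# Hypothetical prerequisite pairs inferred from learner responses:
# (a, b) means skill a is a prerequisite of skill b.
prereqs = [(0, 1), (1, 2)]

def refine_q_matrix(Q, prereqs):
    """Augment each item's skill set with the prerequisites of its skills."""
    R = [row[:] for row in Q]
    for a, b in prereqs:
        for i, row in enumerate(Q):
            # An item requiring skill b implicitly also exercises skill a.
            if row[b]:
                R[i][a] = 1
    return R

R = refine_q_matrix(Q, prereqs)
print(R)  # → [[1, 0, 0], [1, 1, 0], [0, 1, 1]]
```

The refined matrix keeps every expert label and only adds entries, so the expert knowledge is preserved while the sparse item-skill graph becomes denser.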
Related papers
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show the strong performance of zero- and few-shot results over math questions knowledge tagging tasks.
By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the great potential of different-sized LLMs.
arXiv Detail & Related papers (2024-06-19T23:30:01Z)
- Limited Out-of-Context Knowledge Reasoning in Large Language Models [65.72847298578071]
Large Language Models (LLMs) have demonstrated strong capabilities as knowledge bases and significant in-context reasoning capabilities.
This paper focuses on a significant facet of out-of-context reasoning: Out-of-context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Predictive, scalable and interpretable knowledge tracing on structured domains [6.860460230412773]
PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics.
PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and learning histories.
arXiv Detail & Related papers (2024-03-19T22:19:29Z)
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for Knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus with LM equipped with an adapter as well as keeps its original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora with increasing difficulties of reasoning, and further trains the LM from easy to hard in a curriculum manner.
arXiv Detail & Related papers (2024-03-11T03:42:03Z)
- A Survey on Knowledge Distillation of Large Language Models [102.84645991075283]
Knowledge Distillation (KD) emerges as a pivotal methodology for transferring advanced capabilities to open-source models.
This paper presents a comprehensive survey of KD's role within the realm of Large Language Models (LLMs)
arXiv Detail & Related papers (2024-02-20T16:17:37Z)
- Attentive Q-Matrix Learning for Knowledge Tracing [4.863310073296471]
We propose Q-matrix-based Attentive Knowledge Tracing (QAKT) as an end-to-end style model.
QAKT is capable of modeling problems hierarchically and learning the q-matrix efficiently based on students' sequences.
Results of further experiments suggest that the q-matrix learned by QAKT is highly model-agnostic and more information-sufficient than the one labeled by human experts.
arXiv Detail & Related papers (2023-04-06T12:31:34Z)
- Implicit Offline Reinforcement Learning via Supervised Learning [83.8241505499762]
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset collected by policies of different expertise levels.
We show how implicit models can leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets.
arXiv Detail & Related papers (2022-10-21T21:59:42Z)
- LM-CORE: Language Models with Contextually Relevant External Knowledge [13.451001884972033]
We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements.
We present LM-CORE -- a general framework to achieve this -- that allows decoupling of the language model training from the external knowledge source.
Experimental results show that LM-CORE, having access to external knowledge, achieves significant and robust outperformance over state-of-the-art knowledge-enhanced language models on knowledge probing tasks.
arXiv Detail & Related papers (2022-08-12T18:59:37Z)
- Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs [50.75812033462294]
We bridge the gap between the practical success of Q-learning and pessimistic theoretical results.
We present novel methods Q-Rex and Q-RexDaRe.
We show that Q-Rex efficiently finds the optimal policy for linear MDPs.
arXiv Detail & Related papers (2021-10-16T01:47:41Z)
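The core idea behind reverse experience replay -- replaying an episode's transitions from last to first so reward information propagates backward through the trajectory quickly -- can be illustrated with a toy tabular sketch. This is an illustrative simplification, not the Q-Rex algorithm from the paper above; the chain MDP, learning rate, and discount factor are all assumptions.

```python
import random

# Toy deterministic chain MDP: states 0..4, actions {0: left, 1: right};
# reaching state 4 yields reward 1 and ends the episode.
N, GAMMA, ALPHA = 5, 0.9, 0.5
Q = [[0.0, 0.0] for _ in range(N)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

random.seed(0)
for _ in range(200):
    s, buffer, done = 0, [], False
    while not done:                      # collect one full episode
        a = random.randrange(2)
        s2, r, done = step(s, a)
        buffer.append((s, a, r, s2, done))
        s = s2
    for (s, a, r, s2, d) in reversed(buffer):   # replay in REVERSE order
        target = r if d else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])

# Greedy policy: move right from every non-terminal state.
print([max(range(2), key=lambda a: Q[s][a]) for s in range(N - 1)])
```

Because each episode is replayed backwards, the terminal reward reaches the earliest states of the trajectory within a single replay pass, rather than requiring one extra episode per step of propagation as in forward-order updates.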
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.