Prerequisite-driven Q-matrix Refinement for Learner Knowledge
Assessment: A Case Study in Online Learning Context
- URL: http://arxiv.org/abs/2208.12642v2
- Date: Wed, 31 Aug 2022 03:00:25 GMT
- Title: Prerequisite-driven Q-matrix Refinement for Learner Knowledge
Assessment: A Case Study in Online Learning Context
- Authors: Wenbin Gan and Yuan Sun
- Abstract summary: We propose a prerequisite-driven Q-matrix refinement framework for learner knowledge assessment (PQRLKA) in the online context.
We infer prerequisites from learners' response data and use them to refine the expert-defined Q-matrix.
Based on the refined Q-matrix, we propose a Metapath2Vec enhanced convolutional representation method to obtain the comprehensive representations of the items.
- Score: 2.221779410386775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ever-growing abundance of learning traces on online learning
platforms promises unique insights into learner knowledge assessment (LKA),
a fundamental personalized-tutoring technique that enables various further
adaptive tutoring services on these platforms. Precise assessment of learner
knowledge requires a fine-grained Q-matrix, which is generally designed by
experts to map the items to skills in the domain. Owing to this subjective
process, misspecifications may arise and degrade the performance of LKA. Some
efforts have been made to refine small-scale Q-matrices; however, these
methods are difficult to scale to large online learning contexts with
numerous items and massive numbers of skills. Moreover, existing LKA models
employ flexible deep learning models that excel at this task, but the
adequacy of LKA is still limited by the models' representation capability on
the quite sparse item-skill graph and the learners' exercise data. To
overcome these issues, in this paper we propose a prerequisite-driven
Q-matrix refinement framework for learner knowledge assessment (PQRLKA) in
the online context. We infer prerequisites from learners' response data and
use them to refine the expert-defined Q-matrix, which provides
interpretability and the scalability needed to apply the framework to
large-scale online learning contexts. Based on the refined Q-matrix, we
propose a Metapath2Vec-enhanced convolutional representation method to obtain
comprehensive, information-rich representations of the items, which are fed
to the PQRLKA model to finally assess the learners' knowledge. Experiments
conducted on three real-world datasets demonstrate the capability of our
model to infer prerequisites for Q-matrix refinement, as well as its
superiority on the LKA task.
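As a rough illustration of the kind of refinement the abstract describes (a sketch, not the authors' actual PQRLKA algorithm), the snippet below assumes a toy binary Q-matrix and a hypothetical prerequisite relation already inferred from response data; each item labeled with a skill is then augmented with that skill's transitive prerequisites:

```python
import numpy as np

# Toy expert-defined Q-matrix: rows = items, columns = skills.
# Q[i, k] = 1 means item i is labeled as requiring skill k.
Q_expert = np.array([
    [1, 0, 0],   # item 0 requires skill 0
    [0, 1, 0],   # item 1 requires skill 1
    [0, 0, 1],   # item 2 requires skill 2
])

# Hypothetical prerequisite relation (assumed inferred beforehand):
# prereq[a, b] = 1 means skill a is a prerequisite of skill b.
prereq = np.array([
    [0, 1, 0],   # skill 0 -> skill 1
    [0, 0, 1],   # skill 1 -> skill 2
    [0, 0, 0],
])

def refine_q_matrix(Q, prereq):
    """Augment Q so each item also carries the transitive
    prerequisites of its labeled skills."""
    n_skills = prereq.shape[0]
    # Transitive closure of the prerequisite relation (Warshall-style).
    closure = prereq.astype(bool).copy()
    for k in range(n_skills):
        closure |= np.outer(closure[:, k], closure[k, :])
    # If item i requires skill b, it also implicitly involves every
    # skill a with a -> b in the closure.
    Q_int = Q.astype(int)
    refined = (Q_int + Q_int @ closure.astype(int).T) > 0
    return refined.astype(int)

Q_refined = refine_q_matrix(Q_expert, prereq)
# Item 2 requires skill 2, so it picks up skills 0 and 1 as well:
# [[1 0 0]
#  [1 1 0]
#  [1 1 1]]
print(Q_refined)
```

In the paper the prerequisite relation is learned from learners' responses rather than given, and the refined Q-matrix then feeds a Metapath2Vec-enhanced representation module; this sketch only shows the matrix-augmentation step.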
Related papers
- WisdomBot: Tuning Large Language Models with Artificial Intelligence Knowledge [17.74988145184004]
Large language models (LLMs) have emerged as powerful tools in natural language processing (NLP)
This paper presents a novel LLM for education named WisdomBot, which combines the power of LLMs with educational theories.
We introduce two key enhancements during inference: local knowledge base retrieval augmentation and search engine retrieval augmentation.
arXiv Detail & Related papers (2025-01-22T13:36:46Z) - KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [73.34893326181046]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials in an instant manner.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
arXiv Detail & Related papers (2024-11-22T08:21:03Z) - Latent-Predictive Empowerment: Measuring Empowerment without a Simulator [56.53777237504011]
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
arXiv Detail & Related papers (2024-10-15T00:41:18Z) - Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show the strong performance of zero- and few-shot results over math questions knowledge tagging tasks.
By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the great potential of different-sized LLMs.
arXiv Detail & Related papers (2024-06-19T23:30:01Z) - Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which is to combine multiple knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z) - Predictive, scalable and interpretable knowledge tracing on structured domains [6.860460230412773]
PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics.
PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and learning histories.
arXiv Detail & Related papers (2024-03-19T22:19:29Z) - A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for Knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus with LM equipped with an adapter as well as keeps its original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora with increasing difficulties of reasoning, and further trains the LM from easy to hard in a curriculum manner.
arXiv Detail & Related papers (2024-03-11T03:42:03Z) - Attentive Q-Matrix Learning for Knowledge Tracing [4.863310073296471]
We propose Q-matrix-based Attentive Knowledge Tracing (QAKT) as an end-to-end style model.
QAKT is capable of modeling problems hierarchically and learning the q-matrix efficiently based on students' sequences.
Results of further experiments suggest that the q-matrix learned by QAKT is highly model-agnostic and more information-sufficient than the one labeled by human experts.
arXiv Detail & Related papers (2023-04-06T12:31:34Z) - Online Target Q-learning with Reverse Experience Replay: Efficiently
finding the Optimal Policy for Linear MDPs [50.75812033462294]
We bridge the gap between practical success of Q-learning and pessimistic theoretical results.
We present novel methods Q-Rex and Q-RexDaRe.
We show that Q-Rex efficiently finds the optimal policy for linear MDPs.
arXiv Detail & Related papers (2021-10-16T01:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.