Quiz-based Knowledge Tracing
- URL: http://arxiv.org/abs/2304.02413v2
- Date: Thu, 6 Apr 2023 12:15:41 GMT
- Title: Quiz-based Knowledge Tracing
- Authors: Shuanghong Shen, Enhong Chen, Bihan Xu, Qi Liu, Zhenya Huang, Linbo
Zhu, Yu Su
- Abstract summary: Knowledge tracing aims to assess individuals' evolving knowledge states according to their learning interactions.
QKT achieves state-of-the-art performance compared to existing methods.
- Score: 61.9152637457605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge tracing (KT) aims to assess individuals' evolving knowledge
states according to their learning interactions with different exercises in
online learning systems (OLS), which is critical in supporting decision-making
for subsequent intelligent services, such as personalized learning resource
recommendation. Researchers have studied KT extensively and developed many
effective methods. However, most of them assume that students' historical
interactions are uniformly distributed in a continuous sequence, ignoring the
fact that actual interaction sequences are organized based on a series of
quizzes with clear boundaries, where interactions within a quiz are
consecutively completed, but interactions across different quizzes are discrete
and may be spaced over days. In this paper, we present the Quiz-based Knowledge
Tracing (QKT) model to monitor students' knowledge states according to their
quiz-based learning interactions. Specifically, as students' interactions
within a quiz are continuous and have the same or similar knowledge concepts,
we design the adjacent gate followed by a global average pooling layer to
capture the intra-quiz short-term knowledge influence. Then, as various quizzes
tend to focus on different knowledge concepts, we measure inter-quiz knowledge
substitution with a gated recurrent unit and inter-quiz knowledge
complementarity with a self-attentive encoder equipped with a novel
recency-aware attention mechanism. Finally, we integrate the inter-quiz
long-term knowledge substitution and complementarity across different quizzes
to output students' evolving knowledge states. Extensive experimental results
on three public real-world datasets demonstrate that QKT achieves
state-of-the-art performance compared to existing methods. Further analyses
confirm that QKT is promising in designing more effective quizzes.
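The abstract outlines QKT's three stages: an adjacent gate plus global average pooling for intra-quiz influence, a GRU for inter-quiz knowledge substitution, and recency-aware self-attention for inter-quiz complementarity. A minimal numpy sketch of that pipeline follows; all dimensions, weight shapes, and the exact form of the recency bias are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # hidden size (illustrative)
n_quizzes = 3  # quizzes in the learning sequence
quiz_len = 4   # interactions per quiz

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Intra-quiz: an adjacent gate over consecutive interactions, then
# global average pooling to one vector per quiz.
Wg = rng.normal(size=(2 * d, d))
def quiz_representation(x):            # x: (quiz_len, d) interaction embeddings
    h = [x[0]]
    for t in range(1, len(x)):
        g = sigmoid(np.concatenate([x[t], h[-1]]) @ Wg)  # adjacent gate
        h.append(g * x[t] + (1 - g) * h[-1])             # short-term influence
    return np.mean(h, axis=0)          # global average pooling

# Inter-quiz substitution: a minimal GRU cell over quiz vectors.
Wz, Wr, Wh = (rng.normal(size=(2 * d, d)) for _ in range(3))
def gru_step(q, s):
    zr = np.concatenate([q, s])
    z, r = sigmoid(zr @ Wz), sigmoid(zr @ Wr)
    s_tilde = np.tanh(np.concatenate([q, r * s]) @ Wh)
    return (1 - z) * s + z * s_tilde

# Inter-quiz complementarity: self-attention with a distance penalty that
# down-weights quizzes further in the past -- a stand-in for the paper's
# recency-aware attention, whose exact form is not given in the abstract.
def recency_attention(Q, decay=0.5):   # Q: (n_quizzes, d)
    scores = Q @ Q.T / np.sqrt(d)
    dist = np.abs(np.arange(len(Q))[:, None] - np.arange(len(Q))[None, :])
    attn = softmax(scores - decay * dist, axis=-1)
    return attn @ Q

def qkt_forward(quizzes):              # quizzes: (n_quizzes, quiz_len, d)
    reps = np.stack([quiz_representation(q) for q in quizzes])
    s = np.zeros(d)                    # long-term substitution state
    for q in reps:
        s = gru_step(q, s)
    c = recency_attention(reps)[-1]    # complementarity at the latest quiz
    return np.concatenate([s, c])      # integrated knowledge state

state = qkt_forward(rng.normal(size=(n_quizzes, quiz_len, d)))
print(state.shape)  # (16,)
```

The key design point the abstract emphasizes is the split: short-term dynamics are resolved inside each quiz before any cross-quiz modeling, so the GRU and the attention both operate on quiz-level vectors rather than raw interactions.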
Related papers
- Private Knowledge Sharing in Distributed Learning: A Survey [50.51431815732716]
The rise of Artificial Intelligence has revolutionized numerous industries and transformed the way society operates.
It is crucial to utilize information in learning processes that are either distributed or owned by different entities.
Modern data-driven services have been developed to integrate distributed knowledge entities into their outcomes.
arXiv Detail & Related papers (2024-02-08T07:18:23Z)
- Enhancing Cognitive Diagnosis using Un-interacted Exercises: A Collaboration-aware Mixed Sampling Approach [22.696866034847343]
We present the Collaborative-aware Mixed Exercise Sampling (CMES) framework.
CMES framework can effectively exploit the information present in un-interacted exercises linked to un-interacted knowledge concepts.
We also propose a ranking-based pseudo feedback module to regulate students' responses on generated exercises.
arXiv Detail & Related papers (2023-12-15T07:44:10Z)
- NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition [63.90327120065928]
We propose Nested Collaborative Learning (NCL++), which tackles the long-tailed learning problem through collaborative learning.
To achieve collaborative learning in the long-tailed setting, balanced online distillation is proposed.
To improve fine-grained discrimination of confusing categories, we further propose Hard Category Mining.
arXiv Detail & Related papers (2023-06-29T06:10:40Z)
- Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation [31.294947552032088]
Data-free Knowledge Distillation (DFKD) has gained popularity recently, with the fundamental idea of carrying out knowledge transfer from a Teacher to a Student neural network in the absence of training data.
We propose a meta-learning inspired framework that treats Knowledge-Acquisition (learning from newly generated samples) and Knowledge-Retention (retaining knowledge on previously encountered samples) as meta-train and meta-test tasks, respectively.
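The acquisition/retention framing above can be illustrated with a toy meta-learning loop: an inner step fits a student to a fixed teacher on freshly generated samples (meta-train), and the outer step also penalizes loss on a memory of past samples (meta-test). The linear student, quadratic loss, and learning rates are all hypothetical simplifications, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy student: a scalar weight w trained to imitate a fixed "teacher"
# on 1-D inputs, in place of real teacher/student networks.
teacher = lambda x: 3.0 * x
def grad(w, x):                        # gradient of mean squared imitation loss
    return np.mean(2 * (w * x - teacher(x)) * x)

w = 0.0
memory = rng.normal(size=8)            # previously seen generated samples
lr_inner, lr_outer = 0.1, 0.05

for _ in range(200):
    new_batch = rng.normal(size=8)     # freshly generated samples
    # Meta-train: knowledge acquisition on the new batch.
    w_fast = w - lr_inner * grad(w, new_batch)
    # Meta-test: retention measured on memory samples; the outer update
    # moves w so that acquiring new knowledge does not hurt retention.
    w = w - lr_outer * (grad(w, new_batch) + grad(w_fast, memory))
    memory = np.concatenate([memory, new_batch])[-32:]  # sliding memory

print(round(w, 2))
```

The student converges to the teacher's weight while the retention term keeps each update consistent with earlier samples, which is the distribution-shift problem the paper targets.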
arXiv Detail & Related papers (2023-02-28T03:50:56Z)
- Transition-Aware Multi-Activity Knowledge Tracing [2.9778695679660188]
Knowledge tracing aims to model student knowledge state given the student's sequence of learning activities.
Current KT solutions are not fit for modeling student learning from non-assessed learning activities.
We propose Transition-Aware Multi-activity Knowledge Tracing (TAMKOT).
arXiv Detail & Related papers (2023-01-26T21:49:24Z)
- HiTSKT: A Hierarchical Transformer Model for Session-Aware Knowledge Tracing [35.02243127325724]
Knowledge tracing (KT) aims to leverage students' learning histories to estimate their mastery levels on a set of pre-defined skills, based on which the corresponding future performance can be accurately predicted.
In practice, a student's learning history comprises answers to sets of massed questions, each known as a session, rather than merely being a sequence of independent answers.
Most existing KT models treat student's learning records as a single continuing sequence, without capturing the sessional shift of students' knowledge state.
arXiv Detail & Related papers (2022-12-23T04:22:42Z)
- Selecting Related Knowledge via Efficient Channel Attention for Online Continual Learning [4.109784267309124]
We propose a new framework named Selecting Related Knowledge for Online Continual Learning (SRKOCL).
Our model also combines experience replay and knowledge distillation to circumvent catastrophic forgetting.
arXiv Detail & Related papers (2022-09-09T09:59:54Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from knowledge graph (KG)
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
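The decomposition described above can be sketched in a few lines: a client's effective weights are a masked copy of the global federated parameters plus sparse task-specific parameters, and only the sparse part would be exchanged between clients. The shapes, mask, and sparsity thresholds below are illustrative assumptions, not FedWeIT's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4)

# Global federated parameters, aggregated and shared via the server.
theta_global = rng.normal(size=shape)

def client_weights(theta_g, mask, task_params):
    # Effective per-client weights: selected global knowledge plus a
    # sparse task-adaptive correction (names are illustrative).
    return mask * theta_g + task_params

mask = (rng.random(shape) > 0.3).astype(float)                    # selection mask
task_params = rng.normal(size=shape) * (rng.random(shape) > 0.8)  # sparse params

w = client_weights(theta_global, mask, task_params)

# Only the sparse task-specific parameters need to travel between clients
# for selective transfer, which is where the communication savings come from.
nonzero_ratio = (task_params != 0).mean()
print(w.shape, nonzero_ratio)
```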
arXiv Detail & Related papers (2020-03-06T13:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.