Application of Deep Self-Attention in Knowledge Tracing
- URL: http://arxiv.org/abs/2105.07909v1
- Date: Mon, 17 May 2021 14:45:38 GMT
- Title: Application of Deep Self-Attention in Knowledge Tracing
- Authors: Junhao Zeng, Qingchun Zhang, Ning Xie, Bochun Yang
- Abstract summary: This paper proposes Deep Self-Attentive Knowledge Tracing (DSAKT), built on data from PTA, an online assessment system used by students at many universities in China.
Experiments on PTA data show that DSAKT outperforms other knowledge tracing models, with an average AUC improvement of 2.1%.
- Score: 2.5852720579998336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of intelligent tutoring systems has greatly influenced the way
students learn and practice, increasing their learning efficiency. An intelligent
tutoring system must model learners' mastery of knowledge before providing feedback
and advice to learners, so a class of algorithms called "knowledge tracing" is
essential. This paper proposes Deep Self-Attentive Knowledge Tracing (DSAKT), built
on data from PTA, an online assessment system used by students at many universities
in China, to help these students learn more efficiently. Experiments on PTA data
show that DSAKT outperforms other knowledge tracing models with an average AUC
improvement of 2.1%, and the model also performs well on the ASSIST dataset.
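To make the self-attentive approach concrete, the sketch below shows a minimal attention-based knowledge-tracing model in PyTorch: past (exercise, correctness) interactions are embedded as keys and values, the next exercise is the query, and a causal mask keeps each prediction from attending to future interactions. The layer sizes and names are illustrative assumptions, not the authors' exact DSAKT architecture.

```python
import torch
import torch.nn as nn

class SelfAttentiveKT(nn.Module):
    """Minimal self-attentive knowledge-tracing sketch (illustrative, not the exact DSAKT model)."""

    def __init__(self, num_exercises, d_model=64, n_heads=4, max_len=200):
        super().__init__()
        # Past interactions are (exercise, correctness) pairs -> 2 * num_exercises ids, plus padding.
        self.interaction_emb = nn.Embedding(2 * num_exercises + 1, d_model, padding_idx=0)
        self.exercise_emb = nn.Embedding(num_exercises + 1, d_model, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.out = nn.Linear(d_model, 1)

    def forward(self, past_interactions, next_exercises):
        # past_interactions, next_exercises: (batch, seq_len) integer id tensors.
        seq_len = past_interactions.size(1)
        pos = torch.arange(seq_len, device=past_interactions.device)
        keys = self.interaction_emb(past_interactions) + self.pos_emb(pos)
        queries = self.exercise_emb(next_exercises)
        # Causal mask: the prediction at step t attends only to interactions up to step t.
        causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                       device=past_interactions.device), diagonal=1)
        context, _ = self.attn(queries, keys, keys, attn_mask=causal)
        return torch.sigmoid(self.out(self.ffn(context))).squeeze(-1)  # P(next answer is correct)
```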
Related papers
- Bridging the Gap: Unpacking the Hidden Challenges in Knowledge Distillation for Online Ranking Systems [13.437632008276552]
Knowledge Distillation (KD) is a powerful approach for compressing a large model into a smaller, more efficient model.
We present a robust KD system developed and rigorously evaluated on multiple large-scale personalized video recommendation systems within Google.
arXiv Detail & Related papers (2024-08-26T23:01:48Z) - Enhancing Deep Knowledge Tracing via Diffusion Models for Personalized Adaptive Learning [1.2248793682283963]
This study aims to tackle data shortage issues in student learning records to enhance DKT performance for personalized adaptive learning (PAL).
It employs TabDDPM, a diffusion model, to generate synthetic educational records to augment training data for enhancing DKT.
The experimental results demonstrate that the AI-generated data by TabDDPM significantly improves DKT performance.
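As a rough illustration of the augmentation step in the entry above, the snippet below merges synthetic interaction records into a real student log before knowledge-tracing training. Here `generate_synthetic` is a hypothetical stand-in for a fitted tabular generative model (e.g. a diffusion model such as TabDDPM); its interface is assumed, not taken from the paper.

```python
import pandas as pd

def augment_training_log(real_log: pd.DataFrame, generate_synthetic, n_synthetic: int) -> pd.DataFrame:
    """Merge synthetic interaction records into a real student log before DKT training.

    `generate_synthetic` is a hypothetical stand-in for a fitted tabular generative
    model that returns a DataFrame with the same columns as the real log:
    student_id, exercise_id, correct.
    """
    synthetic = generate_synthetic(n_synthetic).assign(is_synthetic=True)
    real = real_log.assign(is_synthetic=False)
    # Train the knowledge-tracing model on the union; evaluate only on held-out real data.
    return pd.concat([real, synthetic], ignore_index=True)
```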
arXiv Detail & Related papers (2024-04-25T00:23:20Z) - Lessons Learned from Designing an Open-Source Automated Feedback System
for STEM Education [5.326069675013602]
We present RATsApp, an open-source automated feedback system (AFS) that incorporates research-based features such as formative feedback.
The system focuses on core STEM competencies such as mathematical competence, representational competence, and data literacy.
As an open-source platform, RATsApp encourages public contributions to its ongoing development, fostering a collaborative approach to improve educational tools.
arXiv Detail & Related papers (2024-01-19T07:13:07Z) - Knowledge Tracing Challenge: Optimal Activity Sequencing for Students [0.9814642627359286]
Knowledge tracing is a method used in education to assess and track the acquisition of knowledge by individual learners.
We will present the results of the implementation of two Knowledge Tracing algorithms on a newly released dataset as part of the AAAI2023 Global Knowledge Tracing Challenge.
arXiv Detail & Related papers (2023-11-13T16:28:34Z) - Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is divided into three inter-connected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
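A minimal sketch of such a chained design, assuming a hypothetical `call_llm` helper and a simple shared memory object; this is illustrative only, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Dynamically updated memory shared by the chained processes."""
    history: list = field(default_factory=list)
    learner_profile: list = field(default_factory=list)

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would query an actual model endpoint."""
    return f"[LLM response to: {prompt[:40]}...]"

def tutoring_turn(student_message: str, memory: Memory) -> str:
    # Interaction: answer the student's message in the context of past turns.
    reply = call_llm(f"History: {memory.history}\nStudent: {student_message}\nTutor:")
    # Reflection: record what this exchange reveals about the learner.
    insight = call_llm(f"Exchange: {student_message!r} -> {reply!r}\nWhat does this show about the learner?")
    memory.learner_profile.append(insight)
    # Reaction: plan the next pedagogical move from the updated profile.
    plan = call_llm(f"Learner profile: {memory.learner_profile}\nPropose the next exercise or hint.")
    memory.history.append({"student": student_message, "tutor": reply, "plan": plan})
    return reply
```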
arXiv Detail & Related papers (2023-09-15T02:42:03Z) - Adaptive Learning Path Navigation Based on Knowledge Tracing and
Reinforcement Learning [2.0263791972068628]
This paper introduces the Adaptive Learning Path Navigation (ALPN) system, a novel approach for enhancing E-learning platforms.
The ALPN system tailors the learning path to students' needs, significantly increasing learning effectiveness.
Experimental results demonstrate that the ALPN system outperforms previous research by 8.2% in maximizing learning outcomes.
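As a toy illustration of combining knowledge-tracing estimates with a decision policy (not the paper's actual reinforcement-learning agent), the function below picks the exercise whose estimated mastery is closest to a target difficulty, with occasional random exploration.

```python
import random

def choose_next_exercise(mastery: dict, target: float = 0.6, epsilon: float = 0.1) -> str:
    """Toy policy over knowledge-tracing mastery estimates: mostly pick the exercise
    closest to a target difficulty, occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(list(mastery))  # explore
    return min(mastery, key=lambda ex: abs(mastery[ex] - target))  # exploit

# Example: mastery probabilities produced by a knowledge-tracing model.
print(choose_next_exercise({"fractions": 0.9, "ratios": 0.55, "percentages": 0.3}))
```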
arXiv Detail & Related papers (2023-05-08T05:54:29Z) - Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - Better Teacher Better Student: Dynamic Prior Knowledge for Knowledge
Distillation [70.92135839545314]
We propose the dynamic prior knowledge (DPK), which integrates part of teacher's features as the prior knowledge before the feature distillation.
Our DPK makes the performance of the student model positively correlated with that of the teacher model, which means that we can further boost the accuracy of students by applying larger teachers.
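A hedged sketch of the general idea: a random subset of spatial positions in the student's feature map is overwritten with the teacher's features (the "prior"), and distillation is applied to the remaining positions. The exact mechanism in the DPK paper may differ.

```python
import torch
import torch.nn.functional as F

def dpk_style_feature_loss(student_feat, teacher_feat, prior_ratio=0.5):
    """Sketch of feature distillation with a teacher-feature prior (illustrative only)."""
    # student_feat, teacher_feat: (batch, channels, h, w), already projected to the same shape.
    keep = (torch.rand_like(student_feat[:, :1]) >= prior_ratio).float()  # 1 = student feature kept
    mixed = keep * student_feat + (1.0 - keep) * teacher_feat
    per_elem = F.mse_loss(mixed, teacher_feat, reduction="none")
    # Only positions where the student's own features were kept contribute to the loss.
    return (per_elem * keep).sum() / keep.sum().clamp(min=1.0) / student_feat.size(1)
```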
arXiv Detail & Related papers (2022-06-13T11:52:13Z) - A Closer Look at Knowledge Distillation with Features, Logits, and
Gradients [81.39206923719455]
Knowledge distillation (KD) is a substantial strategy for transferring learned knowledge from one neural network model to another.
This work provides a new perspective to motivate a set of knowledge distillation strategies by approximating the classical KL-divergence criteria with different knowledge sources.
Our analysis indicates that logits are generally a more efficient knowledge source and suggests that having sufficient feature dimensions is crucial for the model design.
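For reference, the classic temperature-scaled KL-divergence loss over logits (Hinton-style distillation) is one concrete instance of the logit-based knowledge source discussed above; the sketch below is the standard formulation, not the paper's specific variant.

```python
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Temperature-scaled KL divergence between teacher and student logit distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperature settings.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```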
arXiv Detail & Related papers (2022-03-18T21:26:55Z) - Efficient training of lightweight neural networks using Online
Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space.
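A minimal sketch of the classical k-NN density estimator, p(x) ≈ k / (n · V_d · r_k^d), where r_k is the distance to the k-th nearest neighbour and V_d is the unit-ball volume in d dimensions; this is the textbook estimator, not necessarily OSAKD's exact formulation.

```python
from math import gamma, pi
import numpy as np

def knn_density(features: np.ndarray, k: int = 10) -> np.ndarray:
    """Classical k-NN density estimate over a (n, d) feature matrix."""
    n, d = features.shape
    # Pairwise Euclidean distances (O(n^2); fine for a small sketch).
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    r_k = np.sort(dists, axis=1)[:, k]  # index 0 is the point itself (distance 0)
    unit_ball_volume = pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (n * unit_ball_volume * np.maximum(r_k, 1e-12) ** d)
```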
arXiv Detail & Related papers (2021-08-26T14:01:04Z) - Role-Wise Data Augmentation for Knowledge Distillation [48.115719640111394]
Knowledge Distillation (KD) is a common method for transferring the "knowledge" learned by one machine learning model into another.
We design data augmentation agents with distinct roles to facilitate knowledge distillation.
We find empirically that specially tailored data points enable the teacher's knowledge to be demonstrated more effectively to the student.
arXiv Detail & Related papers (2020-04-19T14:22:17Z)