Learning Data Teaching Strategies Via Knowledge Tracing
- URL: http://arxiv.org/abs/2111.07083v1
- Date: Sat, 13 Nov 2021 10:10:48 GMT
- Title: Learning Data Teaching Strategies Via Knowledge Tracing
- Authors: Ghodai Abdelrahman, Qing Wang
- Abstract summary: We propose a novel method, called Knowledge Augmented Data Teaching (KADT), to optimize a data teaching strategy for a student model.
The KADT method incorporates a knowledge tracing model to dynamically capture the knowledge progress of a student model in terms of latent learning concepts.
We have evaluated the performance of the KADT method on four different machine learning tasks including knowledge tracing, sentiment analysis, movie recommendation, and image classification.
- Score: 5.648636668261282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Teaching plays a fundamental role in human learning. Typically, a
human teaching strategy involves assessing a student's knowledge progress and
tailoring the teaching materials accordingly so as to enhance learning.
A human teacher would achieve this by tracing a student's knowledge over
important learning concepts in a task. However, such a teaching strategy is
not yet well exploited in machine learning, as current machine teaching
methods tend to assess progress directly on individual training samples
without attending to the underlying learning concepts of the task. In this
paper, we propose a novel method, called Knowledge Augmented Data Teaching
(KADT), which can optimize a data teaching strategy for a student model by
tracing its knowledge progress over multiple learning concepts in a learning
task. Specifically, the KADT method incorporates a knowledge tracing model to
dynamically capture the knowledge progress of a student model in terms of
latent learning concepts. Then we develop an attention pooling mechanism to
distill knowledge representations of a student model with respect to class
labels, which enables to develop a data teaching strategy on critical training
samples. We have evaluated the performance of the KADT method on four different
machine learning tasks including knowledge tracing, sentiment analysis, movie
recommendation, and image classification. Comparisons with state-of-the-art
methods empirically validate that KADT consistently outperforms them on all
tasks.
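The attention-pooling idea in the abstract, distilling a class-conditioned summary of the student's latent concept states, can be sketched in plain Python. Everything below (function names, toy dimensions, dot-product scoring) is an illustrative assumption, not the paper's learned architecture:

```python
import math

def attention_pool(knowledge_states, class_query):
    """Softmax-attention pooling of latent concept states for one class label.

    knowledge_states: list of d-dim concept-state vectors of the student model.
    class_query:      d-dim query vector associated with a class label.
    Returns a d-dim class-conditioned knowledge summary.
    """
    # Dot-product attention score for each latent concept state.
    scores = [sum(q * s for q, s in zip(class_query, state))
              for state in knowledge_states]
    m = max(scores)                                # numerical stability
    exps = [math.exp(x - m) for x in scores]
    z = sum(exps)
    weights = [e / z for e in exps]                # softmax over concepts
    # Weighted sum of concept states -> class-conditioned knowledge summary.
    dim = len(class_query)
    return [sum(w * state[i] for w, state in zip(weights, knowledge_states))
            for i in range(dim)]

# Three latent concept states (2-dim) and a query vector for one class label.
states = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]
summary = attention_pool(states, [1.0, 0.0])
print(summary)
```

The summary is a convex combination of the concept states, weighted toward concepts most aligned with the class query; in KADT such summaries feed the selection of critical training samples.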
Related papers
- Leveraging Pedagogical Theories to Understand Student Learning Process with Graph-based Reasonable Knowledge Tracing [11.082908318943248]
We introduce GRKT, a graph-based reasonable knowledge tracing method to address these issues.
We propose a fine-grained and psychological three-stage modeling process as knowledge retrieval, memory strengthening, and knowledge learning/forgetting.
arXiv Detail & Related papers (2024-06-07T10:14:30Z)
- Revealing Networks: Understanding Effective Teacher Practices in AI-Supported Classrooms using Transmodal Ordered Network Analysis [0.9187505256430948]
The present study uses transmodal ordered network analysis to understand effective teacher practices in relationship to traditional metrics of in-system learning in a mathematics classroom working with AI tutors.
Comparing teacher practices by student learning rates, we find that students with low learning rates exhibited more hint use after monitoring.
Students with low learning rates showed learning behavior similar to their high learning rate peers, achieving repeated correct attempts in the tutor.
arXiv Detail & Related papers (2023-12-17T21:50:02Z)
- Transition-Aware Multi-Activity Knowledge Tracing [2.9778695679660188]
Knowledge tracing aims to model a student's knowledge state given the student's sequence of learning activities.
Current KT solutions are not fit for modeling student learning from non-assessed learning activities.
We propose Transition-Aware Multi-activity Knowledge Tracing (TAMKOT).
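Modeling a knowledge state from a sequence of learning activities can be illustrated with classic Bayesian Knowledge Tracing, a minimal sketch of the task this entry addresses, not TAMKOT's model; the parameter values are illustrative:

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One step of Bayesian Knowledge Tracing.

    p_know: prior probability the student has mastered the concept.
    correct: whether the observed attempt was answered correctly.
    """
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den                         # Bayes rule on the answer
    return posterior + (1 - posterior) * p_learn  # chance to learn by practicing

p = 0.4                                    # initial mastery estimate
for answer in [True, True, False, True]:   # a student's sequence of attempts
    p = bkt_update(p, answer)
print(round(p, 3))                         # rises above the 0.4 prior
```

Each observed attempt updates the mastery estimate, which is exactly the kind of evolving knowledge state that neural KT models (and TAMKOT's multi-activity extension) learn from richer activity sequences.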
arXiv Detail & Related papers (2023-01-26T21:49:24Z)
- A Machine Learning system to monitor student progress in educational institutes [0.0]
We propose a data-driven approach that uses Machine Learning techniques to generate a classifier called a credit score.
Using the credit score as a progress indicator is well suited to a Learning Management System.
arXiv Detail & Related papers (2022-11-02T08:24:08Z)
- A Closer Look at Knowledge Distillation with Features, Logits, and Gradients [81.39206923719455]
Knowledge distillation (KD) is a substantial strategy for transferring learned knowledge from one neural network model to another.
This work provides a new perspective to motivate a set of knowledge distillation strategies by approximating the classical KL-divergence criteria with different knowledge sources.
Our analysis indicates that logits are generally a more efficient knowledge source and suggests that having sufficient feature dimensions is crucial for the model design.
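The logit-based distillation this entry favors can be sketched as the classic KL-divergence objective on temperature-softened logits; the function names and temperature value here are illustrative assumptions, not the paper's exact formulation:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions: the standard
    logit-matching distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

same = kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
off = kd_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
print(same, off)  # zero when logits match, positive otherwise
```

A higher temperature flattens both distributions, exposing the teacher's "dark knowledge" about relative class similarities rather than only its top prediction.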
arXiv Detail & Related papers (2022-03-18T21:26:55Z)
- Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient-optimization-based teacher-aware learner that can incorporate the teacher's cooperative intention into the likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z)
- Learning Student-Friendly Teacher Networks for Knowledge Distillation [50.11640959363315]
We propose a novel knowledge distillation approach to facilitate the transfer of dark knowledge from a teacher to a student.
Contrary to most of the existing methods that rely on effective training of student models given pretrained teachers, we aim to learn the teacher models that are friendly to students.
arXiv Detail & Related papers (2021-02-12T07:00:17Z)
- Collaborative Teacher-Student Learning via Multiple Knowledge Transfer [79.45526596053728]
We propose collaborative teacher-student learning via multiple knowledge transfer (CTSL-MKT).
It allows multiple students to learn from both individual instances and instance relations in a collaborative way.
The experiments and ablation studies on four image datasets demonstrate that the proposed CTSL-MKT significantly outperforms the state-of-the-art KD methods.
arXiv Detail & Related papers (2021-01-21T07:17:04Z)
- Introspective Learning by Distilling Knowledge from Online Self-explanation [36.91213895208838]
We propose an implementation of introspective learning by distilling knowledge from online self-explanations.
The models trained with the introspective learning procedure outperform the ones trained with the standard learning procedure.
arXiv Detail & Related papers (2020-09-19T02:05:32Z)
- A Competence-aware Curriculum for Visual Concepts Learning via Question Answering [95.35905804211698]
We propose a competence-aware curriculum for visual concept learning in a question-answering manner.
We design a neural-symbolic concept learner for learning the visual concepts and a multi-dimensional Item Response Theory (mIRT) model for guiding the learning process.
Experimental results on CLEVR show that with a competence-aware curriculum, the proposed method achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-07-03T05:08:09Z)
- Dual Policy Distillation [58.43610940026261]
Policy distillation, which transfers a teacher policy to a student policy, has achieved great success in challenging tasks of deep reinforcement learning.
In this work, we introduce dual policy distillation (DPD), a student-student framework in which two learners operate on the same environment to explore different perspectives of it.
The key challenge in developing this dual learning framework is to identify the beneficial knowledge from the peer learner for contemporary learning-based reinforcement learning algorithms.
arXiv Detail & Related papers (2020-06-07T06:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.