Domain Adaption for Knowledge Tracing
- URL: http://arxiv.org/abs/2001.04841v1
- Date: Tue, 14 Jan 2020 15:04:48 GMT
- Title: Domain Adaption for Knowledge Tracing
- Authors: Song Cheng, Qi Liu, Enhong Chen
- Abstract summary: We propose a novel adaptable framework, namely adaptable knowledge tracing (AKT), to address the DAKT problem.
For the first aspect, we incorporate educational characteristics (e.g., slip, guess, question texts) into deep knowledge tracing (DKT) to obtain a well-performing knowledge tracing model.
For the second aspect, we propose and adopt three domain adaptation processes. First, we pre-train an auto-encoder to select useful source instances for target model training.
- Score: 65.86619804954283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of online education systems, knowledge
tracing, which aims at predicting students' knowledge states, is becoming a
critical and fundamental task in personalized education. Traditionally,
existing methods are domain-specific. However, there are a large number of
domains (e.g., subjects, schools) in the real world, and data are lacking in
some of them, so how to utilize the knowledge and information from other
domains to help train a knowledge tracing model for target domains is
increasingly important. We refer to this problem as domain adaptation for
knowledge tracing (DAKT), which contains two aspects: (1) how to achieve great
knowledge tracing performance in each domain, and (2) how to transfer a
well-performing knowledge tracing model between domains. To this end, in this
paper we propose a novel adaptable framework, namely adaptable knowledge
tracing (AKT), to address the DAKT problem. Specifically, for the first
aspect, we incorporate educational characteristics (e.g., slip, guess,
question texts) into deep knowledge tracing (DKT) to obtain a well-performing
knowledge tracing model. For the second aspect, we propose and adopt three
domain adaptation processes. First, we pre-train an auto-encoder to select
useful source instances for target model training. Second, we minimize the
discrepancy between domain-specific knowledge state distributions under the
maximum mean discrepancy (MMD) measure to achieve domain adaptation. Third,
we adopt fine-tuning to deal with the problem that the output dimensions of
the source and target domains differ, making the model suitable for target
domains. Extensive experimental results on two private datasets and seven
public datasets clearly prove the effectiveness of AKT for great knowledge
tracing performance and its superior transferability.
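The second adaptation process minimizes the MMD between source- and target-domain knowledge state distributions. As a minimal sketch of what that measurement computes (the kernel choice and bandwidth here are illustrative assumptions, not the paper's configuration), the squared MMD with an RBF kernel can be estimated from two batches of knowledge state vectors:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of squared MMD between two samples:
    # E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)].
    kss = rbf_kernel(source, source, sigma).mean()
    ktt = rbf_kernel(target, target, sigma).mean()
    kst = rbf_kernel(source, target, sigma).mean()
    return kss + ktt - 2 * kst
```

During adaptation this quantity would be added to the training loss, so that the knowledge state representations of the two domains are pushed toward the same distribution.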
Related papers
- Adapting to Distribution Shift by Visual Domain Prompt Generation [34.19066857066073]
We adapt a model at test-time using a few unlabeled data to address distribution shifts.
We build a knowledge bank to learn the transferable knowledge from source domains.
The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
arXiv Detail & Related papers (2024-05-05T02:44:04Z) - Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences [55.185456382328674]
We investigate the applicability of transfer learning for enhancing a named entity recognition model.
Our model consists of two stages: 1) entity grouping in the source domain, which incorporates knowledge from annotated events to establish relations between entities, and 2) entity discrimination in the target domain, which relies on pseudo labeling and contrastive learning to enhance discrimination between the entities in the two domains.
arXiv Detail & Related papers (2024-01-19T03:49:28Z) - Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
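The 4Ds paper learns its Fourier-based adapter; as a hedged illustration of the underlying idea only (the fixed radial mask below is an assumption, not the learned separation), a feature map can be split into low- and high-frequency components in the Fourier domain:

```python
import numpy as np

def split_frequency(feat, radius=2):
    # Split a 2-D feature map into low- and high-frequency parts using
    # a centered FFT and a fixed circular mask. Low frequencies are
    # often treated as the more domain-invariant component.
    f = np.fft.fftshift(np.fft.fft2(feat))
    h, w = feat.shape
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2) <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * (~mask))).real
    return low, high
```

Because the FFT is linear, the two components sum back to the original feature map, so the separation loses no information.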
arXiv Detail & Related papers (2024-01-12T02:48:51Z) - Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model generated pseudo labels.
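The KUDA rectification module uses such prior knowledge to refine pseudo labels. A minimal sketch of one way a unary bound could constrain pseudo-labeling (this filtering scheme is an illustrative assumption, not the paper's module) is to keep only the most confident predictions per class, up to the class's assumed frequency bound:

```python
import numpy as np

def rectify_pseudo_labels(probs, upper_bound):
    # probs: (n, k) predicted class probabilities on target data.
    # upper_bound: (k,) assumed upper bound on each class's fraction.
    # Keep only the most confident pseudo-labels per class, up to its
    # bound; the rest are marked -1 (ignored during training).
    n, k = probs.shape
    labels = probs.argmax(1)
    conf = probs.max(1)
    out = np.full(n, -1)
    for c in range(k):
        idx = np.where(labels == c)[0]
        quota = int(np.floor(upper_bound[c] * n))
        keep = idx[np.argsort(-conf[idx])[:quota]]
        out[keep] = c
    return out
```

For example, if a class is predicted for 75% of target samples but its prior bound is 50%, the least confident third of those pseudo-labels would be dropped rather than used for training.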
arXiv Detail & Related papers (2022-07-18T18:41:36Z) - Unsupervised Sentiment Analysis by Transferring Multi-source Knowledge [22.880509132587807]
We propose a two-stage domain adaptation framework for sentiment analysis.
In the first stage, a multi-task methodology-based shared private architecture is employed to explicitly model the domain common features.
In the second stage, two elaborate mechanisms are embedded in the shared private architecture to transfer knowledge from multiple source domains.
arXiv Detail & Related papers (2021-05-09T03:02:19Z) - Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with
Reliable Transfer for Cardiac Segmentation [69.09432302497116]
We propose a cutting-edge semi-supervised domain adaptation framework, namely Dual-Teacher++.
We design novel dual teacher models, including an inter-domain teacher model to explore cross-modality priors from source domain (e.g., MR) and an intra-domain teacher model to investigate the knowledge beneath unlabeled target domain.
In this way, the student model can obtain reliable dual-domain knowledge and yield improved performance on target domain data.
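As a hedged sketch of how a student might consume dual-domain knowledge (the equal weighting below is an illustrative assumption, not Dual-Teacher++'s transfer scheme), the two teachers' soft predictions can be combined into one distillation target:

```python
import numpy as np

def dual_teacher_targets(inter_logits, intra_logits, weight=0.5):
    # Blend soft predictions from an inter-domain teacher (cross-modality
    # priors from the source domain) and an intra-domain teacher (knowledge
    # from unlabeled target data) into one target for the student.
    def softmax(z):
        e = np.exp(z - z.max(-1, keepdims=True))
        return e / e.sum(-1, keepdims=True)
    return weight * softmax(inter_logits) + (1 - weight) * softmax(intra_logits)
```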
arXiv Detail & Related papers (2021-01-07T05:17:38Z) - Learning causal representations for robust domain adaptation [31.261956776418618]
In many real-world applications, target domain data may not always be available.
In this paper, we study the cases where at the training phase the target domain data is unavailable.
We propose a novel Causal AutoEncoder (CAE), which integrates deep autoencoder and causal structure learning into a unified model.
arXiv Detail & Related papers (2020-11-12T11:24:03Z) - Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.