Bi-CLKT: Bi-Graph Contrastive Learning based Knowledge Tracing
- URL: http://arxiv.org/abs/2201.09020v1
- Date: Sat, 22 Jan 2022 11:07:21 GMT
- Title: Bi-CLKT: Bi-Graph Contrastive Learning based Knowledge Tracing
- Authors: Xiangyu Song, Jianxin Li, Qi Lei, Wei Zhao, Yunliang Chen, Ajmal Mian
- Abstract summary: The goal of Knowledge Tracing is to estimate how well students have mastered a concept based on their historical learning of related exercises.
With the recent rise of deep learning, Deep Knowledge Tracing has utilised Recurrent Neural Networks (RNNs) to accomplish this task with some success.
Other works have attempted to introduce Graph Neural Networks (GNNs) and redefine the task accordingly to achieve significant improvements.
We propose a Bi-Graph Contrastive Learning based Knowledge Tracing (Bi-CLKT) to address these limitations.
- Score: 39.92424205497689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of Knowledge Tracing (KT) is to estimate how well students have
mastered a concept based on their historical learning of related exercises. The
benefit of knowledge tracing is that students' learning plans can be better
organised and adjusted, and interventions can be made when necessary. With the
recent rise of deep learning, Deep Knowledge Tracing (DKT) has utilised
Recurrent Neural Networks (RNNs) to accomplish this task with some success.
Other works have attempted to introduce Graph Neural Networks (GNNs) and
redefine the task accordingly to achieve significant improvements. However,
these efforts suffer from at least one of the following drawbacks: 1) they pay
too much attention to details of the nodes rather than to high-level semantic
information; 2) they struggle to effectively establish spatial associations and
complex structures of the nodes; and 3) they represent either concepts or
exercises only, without integrating them. Inspired by recent advances in
self-supervised learning, we propose a Bi-Graph Contrastive Learning based
Knowledge Tracing (Bi-CLKT) to address these limitations. Specifically, we
design a two-layer contrastive learning scheme based on an
"exercise-to-exercise" (E2E) relational subgraph. It involves node-level
contrastive learning of subgraphs to obtain discriminative representations of
exercises, and graph-level contrastive learning to obtain discriminative
representations of concepts. Moreover, we designed a joint contrastive loss to
obtain better representations and hence better prediction performance. Also, we
explored two different variants, using RNN and memory-augmented neural networks
as the prediction layer for comparison to obtain better representations of
exercises and concepts respectively. Extensive experiments on four real-world
datasets show that the proposed Bi-CLKT and its variants outperform other
baseline models.
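The two-level scheme described in the abstract, a node-level contrastive loss for exercises plus a graph-level contrastive loss for concepts, combined by a joint objective, can be sketched as follows. This is an illustrative reconstruction using an InfoNCE-style loss, not the authors' code; the function names and the mixing weight `alpha` are assumptions.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two views.

    z1, z2: (n, d) embeddings; row i of z1 and row i of z2 are two
    views of the same object (a positive pair), and every other row
    serves as a negative.
    """
    # L2-normalise so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) similarity matrix
    # row-wise log-softmax; the diagonal holds the positive pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def joint_contrastive_loss(node_v1, node_v2, graph_v1, graph_v2, alpha=0.5):
    """Weighted sum of a node-level (exercise) and a graph-level
    (concept) contrastive loss, in the spirit of Bi-CLKT's joint loss."""
    l_node = info_nce(node_v1, node_v2)    # discriminative exercise embeddings
    l_graph = info_nce(graph_v1, graph_v2) # discriminative concept embeddings
    return alpha * l_node + (1 - alpha) * l_graph

rng = np.random.default_rng(0)
nodes = rng.normal(size=(8, 16))
graphs = rng.normal(size=(4, 16))
# two lightly perturbed views stand in for stochastic graph augmentations
loss = joint_contrastive_loss(nodes + 0.01 * rng.normal(size=nodes.shape),
                              nodes + 0.01 * rng.normal(size=nodes.shape),
                              graphs + 0.01 * rng.normal(size=graphs.shape),
                              graphs + 0.01 * rng.normal(size=graphs.shape))
print(float(loss))
```

In a real pipeline the two views would come from augmented E2E subgraphs passed through a GNN encoder rather than additive noise.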
Related papers
- Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method for Few-shot Node Tasks [68.60884768323739]
We introduce Contrastive Few-Shot Node Classification (COLA).
COLA uses graph augmentations to identify semantically similar nodes, which enables the construction of meta-tasks without the need for label information.
Through extensive experiments, we validate the essentiality of each component in our design and demonstrate that COLA achieves new state-of-the-art on all tasks.
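COLA's core move, using augmentations rather than labels to decide which nodes belong together, can be illustrated with a minimal sketch: create two feature-masked views of the node features and treat each node's nearest cross-view neighbour as its positive. This is a toy reconstruction of the idea, not COLA's implementation; the function names and the dropout rate are illustrative.

```python
import numpy as np

def augment(x, drop_rate, rng):
    """Feature-mask augmentation: randomly zero out feature columns."""
    mask = rng.random(x.shape[1]) >= drop_rate
    return x * mask

def select_positives(x, drop_rate=0.2, seed=0):
    """For each node, pick the most similar *other* node across two
    augmented views as its label-free positive: augmentations, not
    labels, decide which nodes form a meta-task together."""
    rng = np.random.default_rng(seed)
    v1, v2 = augment(x, drop_rate, rng), augment(x, drop_rate, rng)
    v1 = v1 / (np.linalg.norm(v1, axis=1, keepdims=True) + 1e-12)
    v2 = v2 / (np.linalg.norm(v2, axis=1, keepdims=True) + 1e-12)
    sim = v1 @ v2.T
    np.fill_diagonal(sim, -np.inf)  # exclude a node's own other view
    return sim.argmax(axis=1)       # index of each node's positive

rng = np.random.default_rng(1)
# two well-separated clusters of node features: rows 0-3 and rows 4-7
x = np.vstack([rng.normal(0, 0.1, (4, 32)) + 5,
               rng.normal(0, 0.1, (4, 32)) - 5])
pos = select_positives(x)
print(pos)
```

With clearly clustered features, the selected positives stay inside each node's own cluster even though no labels were used.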
arXiv Detail & Related papers (2023-09-19T07:24:10Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the finetuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks [35.35462459134551]
SpikeGCL is a novel framework to learn binarized 1-bit representations for graphs.
We provide theoretical guarantees that SpikeGCL is comparable with its full-precision counterparts.
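The 1-bit representations SpikeGCL targets can be illustrated with a minimal sketch: sign-binarise real-valued embeddings into compact {-1, +1} codes. This is only an illustration of the representation format, not SpikeGCL's spiking mechanism.

```python
import numpy as np

def binarize(z):
    """Sign-binarise real embeddings into {-1, +1} 1-bit codes.

    Stored as int8 here; in principle each entry needs a single bit,
    a 32x reduction over float32 embeddings.
    """
    return np.where(z >= 0, 1, -1).astype(np.int8)

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 64)).astype(np.float32)
b = binarize(z)
print(b.dtype, sorted(set(b.ravel().tolist())))
```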
arXiv Detail & Related papers (2023-05-30T16:03:11Z)
- Hierarchical Contrastive Learning Enhanced Heterogeneous Graph Neural Network [59.860534520941485]
Heterogeneous graph neural networks (HGNNs), an emerging technique, have shown superior capacity for dealing with heterogeneous information networks (HINs).
Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available.
In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo.
arXiv Detail & Related papers (2023-04-24T16:17:21Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- SMARTQUERY: An Active Learning Framework for Graph Neural Networks through Hybrid Uncertainty Reduction [25.77052028238513]
We propose a framework to learn a graph neural network with very few labeled nodes using a hybrid uncertainty reduction function.
We demonstrate the competitive performance of our method against state-of-the-art approaches with very few labeled data.
arXiv Detail & Related papers (2022-12-02T20:49:38Z)
- DGEKT: A Dual Graph Ensemble Learning Method for Knowledge Tracing [20.71423236895509]
We present a novel Dual Graph Ensemble learning method for Knowledge Tracing (DGEKT)
DGEKT establishes a dual graph structure of students' learning interactions to capture the heterogeneous exercise-concept associations.
Online knowledge distillation provides predictions on all exercises as extra supervision for better modeling ability.
arXiv Detail & Related papers (2022-11-23T11:37:35Z)
- Graph Neural Network with Curriculum Learning for Imbalanced Node Classification [21.085314408929058]
Graph Neural Networks (GNNs) are an emerging technique for graph-based learning tasks such as node classification.
In this work, we reveal the vulnerability of GNN to the imbalance of node labels.
We propose a novel graph neural network framework with curriculum learning (GNN-CL) consisting of two modules.
arXiv Detail & Related papers (2022-02-05T10:46:11Z)
- Exploiting Contextual Information with Deep Neural Networks [5.787117733071416]
We show that contextual information can be exploited in 2 fundamentally different ways: implicitly and explicitly.
arXiv Detail & Related papers (2020-06-21T03:40:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.