R-VGAE: Relational-variational Graph Autoencoder for Unsupervised
Prerequisite Chain Learning
- URL: http://arxiv.org/abs/2004.10610v1
- Date: Wed, 22 Apr 2020 14:48:03 GMT
- Title: R-VGAE: Relational-variational Graph Autoencoder for Unsupervised
Prerequisite Chain Learning
- Authors: Irene Li, Alexander Fabbri, Swapnil Hingmire and Dragomir Radev
- Abstract summary: We propose a model called the Relational-Variational Graph AutoEncoder (R-VGAE) to predict concept relations within a graph consisting of concept and resource nodes.
Results show that our unsupervised approach outperforms graph-based semi-supervised methods and other baseline methods by up to 9.77% and 10.47% in terms of prerequisite relation prediction accuracy and F1 score.
Our method is notably the first graph-based model that attempts to make use of deep learning representations for the task of unsupervised prerequisite learning.
- Score: 83.13634692459486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of concept prerequisite chain learning is to automatically determine
the existence of prerequisite relationships among concept pairs. In this paper,
we frame learning prerequisite relationships among concepts as an unsupervised
task with no access to labeled concept pairs during training. We propose a
model called the Relational-Variational Graph AutoEncoder (R-VGAE) to predict
concept relations within a graph consisting of concept and resource nodes.
Results show that our unsupervised approach outperforms graph-based
semi-supervised methods and other baseline methods by up to 9.77% and 10.47% in
terms of prerequisite relation prediction accuracy and F1 score. Our method is
notably the first graph-based model that attempts to make use of deep learning
representations for the task of unsupervised prerequisite learning. We also
expand an existing corpus which totals 1,717 English Natural Language
Processing (NLP)-related lecture slide files and manual concept pair
annotations over 322 topics.
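
To make the setup above concrete, here is a minimal sketch of a variational graph autoencoder with a relational, DistMult-style decoder over a graph of concept and resource nodes. The layer sizes, the two edge types, and the diagonal relation matrices are illustrative assumptions, not the authors' exact R-VGAE architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W, with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return a_hat @ self.lin(h)

class RVGAESketch(nn.Module):
    """Variational graph encoder with a per-relation (DistMult-style) decoder."""
    def __init__(self, in_dim, hid_dim, z_dim, num_rel):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hid_dim)
        self.gc_mu = GCNLayer(hid_dim, z_dim)
        self.gc_logvar = GCNLayer(hid_dim, z_dim)
        # One diagonal relation matrix per edge type, e.g. concept->concept
        # (prerequisite) vs. resource->concept (containment) -- an assumption.
        self.rel = nn.Parameter(torch.randn(num_rel, z_dim))

    def encode(self, a_hat, x):
        h = F.relu(self.gc1(a_hat, x))
        return self.gc_mu(a_hat, h), self.gc_logvar(a_hat, h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z, src, dst, rel_id):
        # Score a directed edge src -r-> dst as sigmoid(<z_src, diag(R_r), z_dst>).
        return torch.sigmoid((z[src] * self.rel[rel_id] * z[dst]).sum(-1))

# Toy usage: 6 nodes (concepts + resources) with 16-dim features.
n = 6
model = RVGAESketch(in_dim=16, hid_dim=32, z_dim=8, num_rel=2)
x = torch.randn(n, 16)
a_hat = torch.eye(n)  # stand-in for the normalized adjacency matrix
mu, logvar = model.encode(a_hat, x)
z = model.reparameterize(mu, logvar)
p_edge = model.decode(z, src=torch.tensor([0]), dst=torch.tensor([1]),
                      rel_id=torch.tensor([0]))
```

Training such a model would maximize the reconstruction likelihood of observed edges plus the usual KL term; at test time, the decoder scores candidate concept pairs for prerequisite links.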
Related papers
- A Condensed Transition Graph Framework for Zero-shot Link Prediction
with Large Language Models [22.089751438495956]
We introduce a Condensed Transition Graph Framework for Zero-Shot Link Prediction (CTLP).
CTLP encodes all the paths' information in linear time complexity to predict unseen relations between entities.
Our proposed CTLP method achieves state-of-the-art performance on three standard ZSLP datasets.
arXiv Detail & Related papers (2024-02-16T16:02:33Z) - Universal Link Predictor By In-Context Learning on Graphs [27.394215950768643]
We introduce the Universal Link Predictor (UniLP), a novel model that combines the generalizability of heuristic approaches with the pattern learning capabilities of parametric models.
UniLP is designed to autonomously identify connectivity patterns across diverse graphs, ready for immediate application to any unseen graph dataset without targeted training.
arXiv Detail & Related papers (2024-02-12T15:52:27Z) - On Discprecncies between Perturbation Evaluations of Graph Neural
Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - Concept Prerequisite Relation Prediction by Using Permutation-Equivariant Directed Graph Neural Networks [3.1688996975958306]
Concept prerequisite relation prediction (CPRP) is a fundamental task in applying AI to education.
We present a permutation-equivariant directed GNN model that introduces the Weisfeiler-Lehman test into directed GNN learning (a generic sketch of the test follows this entry).
Our model delivers better prediction performance than the state-of-the-art methods.
arXiv Detail & Related papers (2023-12-15T14:01:56Z) - SMiLE: Schema-augmented Multi-level Contrastive Learning for Knowledge
Graph Link Prediction [28.87290783250351]
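
For readers unfamiliar with the test mentioned in the entry above, here is a minimal sketch of classic 1-WL color refinement adapted to a directed graph. It shows only the generic iterative relabeling, not the paper's permutation-equivariant GNN formulation; the function name and the three-round default are arbitrary choices.

```python
from collections import defaultdict

def wl_refine(nodes, edges, rounds=3):
    """1-WL color refinement; edges is a list of directed (src, dst) pairs."""
    color = {v: 0 for v in nodes}
    in_nb, out_nb = defaultdict(list), defaultdict(list)
    for s, d in edges:
        out_nb[s].append(d)
        in_nb[d].append(s)
    for _ in range(rounds):
        # A node's new color combines its old color with the multisets of
        # in- and out-neighbor colors (direction matters in directed WL).
        sig = {v: (color[v],
                   tuple(sorted(color[u] for u in in_nb[v])),
                   tuple(sorted(color[u] for u in out_nb[v])))
               for v in nodes}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: relabel[sig[v]] for v in nodes}
    return color

# Toy usage: a 3-node chain 0 -> 1 -> 2 gets three distinct colors.
print(wl_refine([0, 1, 2], [(0, 1), (1, 2)]))
```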
- SMiLE: Schema-augmented Multi-level Contrastive Learning for Knowledge Graph Link Prediction [28.87290783250351]
Link prediction is the task of inferring missing links between entities in knowledge graphs.
We propose a novel Schema-augmented Multi-level contrastive LEarning framework (SMiLE) to conduct knowledge graph link prediction.
arXiv Detail & Related papers (2022-10-10T17:40:19Z) - Tensor Composition Net for Visual Relationship Prediction [115.14829858763399]
We present a novel Tensor Composition Network (TCN) to predict visual relationships in images.
The key idea of our TCN is to exploit the low rank property of the visual relationship tensor.
We show our TCN's image-level visual relationship prediction provides a simple and efficient mechanism for relation-based image retrieval.
arXiv Detail & Related papers (2020-12-10T06:27:20Z) - PPKE: Knowledge Representation Learning by Path-based Pre-training [43.41597219004598]
We propose PPKE, a Path-based Pre-training model for learning Knowledge Embeddings.
Our model achieves state-of-the-art results on several benchmark datasets for link prediction and relation prediction tasks.
arXiv Detail & Related papers (2020-12-07T10:29:30Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Graph Representation Learning via Graphical Mutual Information
Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder (a rough sketch follows this entry).
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.