Multi-View MOOC Quality Evaluation via Information-Aware Graph
Representation Learning
- URL: http://arxiv.org/abs/2301.01593v1
- Date: Sun, 1 Jan 2023 10:27:06 GMT
- Title: Multi-View MOOC Quality Evaluation via Information-Aware Graph
Representation Learning
- Authors: Lu Jiang and Yibin Wang and Jianan Wang and Pengyang Wang and Minghao
Yin
- Abstract summary: We develop an Information-aware Graph Representation Learning (IaGRL) framework for multi-view MOOC quality evaluation.
We first build a MOOC Heterogeneous Information Network (HIN) to represent the interactions and relationships among entities on MOOC platforms.
We then decompose the MOOC HIN into multiple single-relation graphs based on meta-paths to depict the multi-view semantics of courses.
- Score: 26.723385384507274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the problem of MOOC quality evaluation, which is
essential for improving course materials, promoting students' learning
efficiency, and benefiting user services. While achieving promising
performance, current works still struggle with the complicated interactions and
relationships among entities on MOOC platforms. To tackle these challenges, we
formulate the problem as a course representation learning task and
develop an Information-aware Graph Representation Learning (IaGRL) framework for
multi-view MOOC quality evaluation. Specifically, we first build a MOOC
Heterogeneous Information Network (HIN) to represent the interactions and relationships
among entities on MOOC platforms. We then decompose the MOOC HIN into
multiple single-relation graphs based on meta-paths to depict the multi-view
semantics of courses. Course representation learning can thus be
converted into a multi-view graph representation task. Unlike traditional
graph representation learning, the learned course representations are expected
to satisfy three types of validity: (1) agreement in expressiveness
between the raw course portfolio and the learned course
representations; (2) consistency between the representations in each view
and the unified representations; and (3) alignment between the course and MOOC
platform representations. We therefore propose to exploit mutual information
to preserve the validity of course representations. We conduct extensive
experiments on real-world MOOC datasets to demonstrate the effectiveness of
our proposed method.
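
The three validity conditions lend themselves to a contrastive, mutual-information-based objective. Below is a minimal PyTorch sketch of the general idea described in the abstract: meta-path-based view decomposition, per-view encoding, mean fusion, and InfoNCE-style terms for expressiveness and consistency, with a simplified cosine term standing in for course-platform alignment. All shapes, layer choices, the specific meta-paths, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions noted inline): decompose a heterogeneous network into
# meta-path-based single-relation graphs, encode each view, fuse into a unified
# course representation, and apply InfoNCE-style mutual information terms.
import torch
import torch.nn as nn
import torch.nn.functional as F


def metapath_adjacency(a: torch.Tensor) -> torch.Tensor:
    """Course-course graph induced by a 2-hop meta-path (e.g. course-user-course).

    `a` is a binary course-by-entity interaction matrix; composing it with its
    transpose links courses that share at least one intermediate entity.
    """
    adj = (a @ a.T > 0).float()
    adj.fill_diagonal_(1.0)                               # add self-loops
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj / deg                                      # row-normalized propagation


class ViewEncoder(nn.Module):
    """One GCN-style layer per meta-path view (an illustrative choice)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return F.relu(self.lin(adj @ x))


def infonce(anchor: torch.Tensor, positive: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """InfoNCE lower bound on mutual information between paired representations."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.T / tau                                # pairwise similarities
    labels = torch.arange(a.size(0))                      # matching rows are positives
    return F.cross_entropy(logits, labels)


# Toy data with assumed shapes: course features plus course-user / course-teacher edges.
n_courses, n_users, n_teachers, feat_dim, hid_dim = 50, 200, 20, 32, 16
x_course = torch.randn(n_courses, feat_dim)               # raw course portfolio features
a_cu = (torch.rand(n_courses, n_users) > 0.9).float()     # course-user interactions
a_ct = (torch.rand(n_courses, n_teachers) > 0.8).float()  # course-teacher relations

views = [metapath_adjacency(a_cu), metapath_adjacency(a_ct)]
encoders = nn.ModuleList([ViewEncoder(feat_dim, hid_dim) for _ in views])
raw_proj = nn.Linear(feat_dim, hid_dim)                   # projects raw features for comparison

view_reps = [enc(adj, x_course) for enc, adj in zip(encoders, views)]
unified = torch.stack(view_reps).mean(dim=0)              # simple mean fusion (assumed)

# (1) Expressiveness: agreement between the raw course portfolio and learned representations.
loss_expressive = infonce(raw_proj(x_course), unified)
# (2) Consistency: each view's representations against the unified representations.
loss_consistent = sum(infonce(v, unified) for v in view_reps)
# (3) Alignment: courses against a platform representation; here the platform is
# summarized by the mean of its courses, with a cosine term as a simplification.
platform_rep = unified.mean(dim=0, keepdim=True)
loss_aligned = 1.0 - F.cosine_similarity(unified, platform_rep.expand_as(unified), dim=-1).mean()

loss = loss_expressive + loss_consistent + loss_aligned
print(f"total loss: {loss.item():.3f}")
```

In the paper each validity condition is enforced through mutual information and combined with the downstream quality-evaluation objective; the cosine stand-in for alignment above is only for brevity.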
Related papers
- Visual Commonsense based Heterogeneous Graph Contrastive Learning [79.22206720896664]
We propose a heterogeneous graph contrastive learning method to better accomplish the visual reasoning task.
Our method is designed in a plug-and-play manner, so that it can be quickly and easily combined with a wide range of representative methods.
arXiv Detail & Related papers (2023-11-11T12:01:18Z) - Cross-view Graph Contrastive Representation Learning on Partially
Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - Deep Embedded Multi-View Clustering via Jointly Learning Latent
Representations and Graphs [13.052394521739192]
We propose Deep Embedded Multi-view Clustering via Jointly Learning Latent Representations and Graphs (DMVCJ).
By learning the latent graphs and feature representations jointly, the graph convolutional network (GCN) technique becomes available for our model.
Based on the adjacency relations of nodes shown in the latent graphs, we design a sample-weighting strategy to alleviate the noise issue.
arXiv Detail & Related papers (2022-05-08T07:40:21Z) - MOOCRep: A Unified Pre-trained Embedding of MOOC Entities [4.0963355240233446]
We propose to learn pre-trained representations of MOOC entities using abundant unlabeled data from the structure of MOOCs.
Our experiments reveal that MOOCRep's embeddings outperform state-of-the-art representation learning methods on two tasks important for the education community.
arXiv Detail & Related papers (2021-07-12T00:11:25Z) - Exploiting Emotional Dependencies with Graph Convolutional Networks for
Facial Expression Recognition [31.40575057347465]
This paper proposes a novel multi-task learning framework to recognize facial expressions in-the-wild.
A shared feature representation is learned for both discrete and continuous recognition in an MTL setting.
The results of our experiments show that our method outperforms the current state-of-the-art methods on discrete FER.
arXiv Detail & Related papers (2021-06-07T10:20:05Z) - Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph
Representation Learning [48.09362183184101]
We propose a novel self-supervised approach to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning.
Our method achieves new state-of-the-art results and surpasses some semi-supervised counterparts by large margins.
arXiv Detail & Related papers (2021-05-12T14:20:13Z) - Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message passing graph neural network that explicitly models relational-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z) - Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets).
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z) - Pre-training Graph Transformer with Multimodal Side Information for
Recommendation [82.4194024706817]
We propose a pre-training strategy to learn item representations by considering both item side information and their relationships.
We develop a novel sampling algorithm named MCNSampling to select contextual neighbors for each item.
The proposed Pre-trained Multimodal Graph Transformer (PMGT) learns item representations with two objectives: 1) graph structure reconstruction, and 2) masked node feature reconstruction.
arXiv Detail & Related papers (2020-10-23T10:30:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.