Shifu2: A Network Representation Learning Based Model for
Advisor-advisee Relationship Mining
- URL: http://arxiv.org/abs/2008.07097v1
- Date: Mon, 17 Aug 2020 05:40:06 GMT
- Title: Shifu2: A Network Representation Learning Based Model for
Advisor-advisee Relationship Mining
- Authors: Jiaying Liu, Feng Xia, Lei Wang, Bo Xu, Xiangjie Kong, Hanghang Tong,
and Irwin King
- Abstract summary: We propose a novel model based on Network Representation Learning (NRL), namely Shifu2.
Shifu2 takes the collaboration network as input and the identified advisor-advisee relationship as output.
We generate a large-scale academic genealogy dataset by taking advantage of Shifu2.
- Score: 82.75996880087747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advisor-advisee relationship represents direct knowledge heritage, but
such relationships may not be readily available from academic libraries and
search engines. This work aims to discover advisor-advisee relationships hidden
behind scientific collaboration networks. For this purpose, we propose a novel
model based on Network Representation Learning (NRL), namely Shifu2, which
takes the collaboration network as input and the identified advisor-advisee
relationship as output. In contrast to existing NRL models, Shifu2 considers
not only the network structure but also the semantic information of nodes and
edges. Shifu2 encodes nodes and edges into low-dimensional vectors
respectively, both of which are then utilized to identify advisor-advisee
relationships. Experimental results illustrate improved stability and
effectiveness of the proposed model over state-of-the-art methods. In addition,
we generate a large-scale academic genealogy dataset by taking advantage of
Shifu2.
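The abstract describes Shifu2 as encoding nodes and edges into low-dimensional vectors and then combining both to classify advisor-advisee pairs. The sketch below is a rough illustration of that pipeline, not the authors' implementation: the random embeddings, the concatenation step, and the logistic-regression classifier are all illustrative assumptions standing in for the NRL components the paper actually learns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for learned representations: in Shifu2 these would come
# from network representation learning on the collaboration network.
num_pairs, dim = 200, 16
node_u = rng.normal(size=(num_pairs, dim))   # embedding of potential advisor
node_v = rng.normal(size=(num_pairs, dim))   # embedding of potential advisee
edge_uv = rng.normal(size=(num_pairs, dim))  # embedding of the collaboration edge

# A minimal reading of "both of which are then utilized": concatenate the
# node and edge vectors and train a binary classifier on the pair.
X = np.concatenate([node_u, node_v, edge_uv], axis=1)
y = rng.integers(0, 2, size=num_pairs).astype(float)  # placeholder labels

# Plain logistic regression fitted by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
    w -= lr * (X.T @ (p - y)) / num_pairs
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy on toy data: {accuracy:.2f}")
```

On real data the labels would come from known advisor-advisee pairs, and the embeddings from the NRL step; only the combine-and-classify shape of the pipeline is what this sketch conveys.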
Related papers
- Kolmogorov-Arnold Network Autoencoders [0.0]
Kolmogorov-Arnold Networks (KANs) are promising alternatives to Multi-Layer Perceptrons (MLPs)
KANs align closely with the Kolmogorov-Arnold representation theorem, potentially enhancing both model accuracy and interpretability.
Our results demonstrate that KAN-based autoencoders achieve competitive performance in terms of reconstruction accuracy.
arXiv Detail & Related papers (2024-10-02T22:56:00Z)
- Modeling Balanced Explicit and Implicit Relations with Contrastive Learning for Knowledge Concept Recommendation in MOOCs [1.0377683220196874]
Existing methods rely on the explicit relations between users and knowledge concepts for recommendation.
There are numerous implicit relations generated within the users' learning activities on the MOOC platforms.
We propose a novel framework based on contrastive learning, which can represent and balance the explicit and implicit relations.
arXiv Detail & Related papers (2024-02-13T07:12:44Z)
- Privacy-Preserving Representation Learning for Text-Attributed Networks with Simplicial Complexes [24.82096971322501]
We study learning network representations with text attributes for simplicial complexes (RT4SC) via simplicial neural networks (SNNs).
We investigate two potential attacks on the representation outputs from SNNs.
We study a privacy-preserving deterministic differentially private alternating direction method of multipliers to learn secure representation outputs from SNNs.
arXiv Detail & Related papers (2023-02-09T00:32:06Z)
- A Node-collaboration-informed Graph Convolutional Network for Precise Representation to Undirected Weighted Graphs [10.867583522217473]
A graph convolutional network (GCN) is widely adopted to perform representation learning on an undirected weighted graph (UWG).
This study proposes to model the node collaborations via a symmetric latent factor analysis model, and then regards it as a node-collaboration module for supplementing the collaboration loss in a GCN.
Based on this idea, a Node-collaboration-informed Graph Convolutional Network (NGCN) is proposed with three-fold ideas.
arXiv Detail & Related papers (2022-11-30T02:20:19Z)
- On Representation Knowledge Distillation for Graph Neural Networks [15.82821940784549]
We study whether preserving the global topology of how the teacher embeds graph data can be a more effective distillation objective for GNNs.
We propose two new approaches which better preserve global topology: (1) Global Structure Preserving loss (GSP) and (2) Graph Contrastive Representation Distillation (G-CRD)
arXiv Detail & Related papers (2021-11-09T06:22:27Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN)
arXiv Detail & Related papers (2021-02-14T03:21:36Z)
- Self-supervised Graph Learning for Recommendation [69.98671289138694]
We explore self-supervised learning on user-item graph for recommendation.
An auxiliary self-supervised task reinforces node representation learning via self-discrimination.
Empirical studies on three benchmark datasets demonstrate the effectiveness of SGL.
arXiv Detail & Related papers (2020-10-21T06:35:26Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- DEPARA: Deep Attribution Graph for Deep Knowledge Transferability [91.06106524522237]
We propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs.
In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regards to the outputs of the PR-DNN.
The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs.
arXiv Detail & Related papers (2020-03-17T02:07:50Z)
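The DEPARA entry above measures knowledge transferability by the similarity of two models' attribution graphs, whose nodes are vectorized attribution maps. A minimal sketch of that idea follows; the random arrays standing in for attribution maps and the mean-cosine proxy for graph similarity are illustrative assumptions, not the paper's exact measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical vectorized attribution maps for the same probe inputs under
# two pre-trained networks; model B is a small perturbation of model A here,
# whereas DEPARA derives these from input attributions of each PR-DNN.
attr_a = rng.normal(size=(10, 64))                 # model A: 10 probe inputs
attr_b = attr_a + 0.1 * rng.normal(size=(10, 64))  # model B: a perturbed copy

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# One simple proxy for attribution-graph similarity: the average cosine
# similarity of the per-input attribution vectors (node-level similarity).
node_sim = np.mean([cosine(a, b) for a, b in zip(attr_a, attr_b)])
print(f"mean node similarity: {node_sim:.3f}")
```

Because model B's maps are mild perturbations of model A's, the node similarity comes out close to 1, which under this proxy would indicate high transferability between the two models.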
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.