Signed Graph Diffusion Network
- URL: http://arxiv.org/abs/2012.14191v1
- Date: Mon, 28 Dec 2020 11:08:30 GMT
- Title: Signed Graph Diffusion Network
- Authors: Jinhong Jung, Jaemin Yoo, U Kang
- Abstract summary: Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges?
We propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs.
- Score: 17.20546861491478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs have received considerable attention as models of trust relationships. Learning node representations is crucial for effectively analyzing graph data, and various techniques such as network embedding and graph convolutional networks (GCNs) have been proposed for learning signed graphs. However, traditional network embedding methods are not end-to-end for a specific task such as link sign prediction, and GCN-based methods suffer from performance degradation as their depth increases. In this paper, we propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs. We propose a random walk technique specially designed for signed graphs so that SGDNet effectively diffuses hidden node features. Through extensive experiments, we demonstrate that SGDNet outperforms state-of-the-art models in terms of link sign prediction accuracy.
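The abstract describes the signed random-walk diffusion only at a high level. As a hedged illustration (not the paper's exact operator), a balance-theory-style diffusion, in which crossing a negative edge flips the sign of the propagated feature and a restart term anchors nodes to their input features, might be sketched as:

```python
import numpy as np

def signed_diffusion(A, X, c=0.15, iters=10):
    """Balance-theory-style feature diffusion on a signed graph.

    A : signed adjacency matrix (entries +1, -1, or 0); a hypothetical
        stand-in for the paper's learned diffusion operator.
    X : initial node feature matrix.
    c : restart probability, analogous to personalized PageRank.
    """
    # Row-normalize by absolute degree so each row of |P| sums to 1;
    # negative edges keep their sign and thus flip propagated features.
    deg = np.abs(A).sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    P = A / deg
    H = X.copy()
    for _ in range(iters):
        # Diffuse along signed walks, restarting at the input features.
        H = (1 - c) * (P @ H) + c * X
    return H

# Tiny example: node 0 trusts node 1 and distrusts node 2.
A = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 0.0,  0.0],
              [-1.0, 0.0,  0.0]])
X = np.eye(3)
H = signed_diffusion(A, X)
```

In this sketch, node 0's diffused feature picks up a positive contribution from its trusted neighbor and a negative one from its distrusted neighbor; SGDNet's actual diffusion is learned end-to-end and differs in detail.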
Related papers
- DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks [11.809853547011704]
The paper discusses signed graphs, which model friendly or antagonistic relationships using edges marked with positive or negative signs.
The authors propose using data augmentation (DA) techniques to address these issues.
They introduce the Signed Graph Augmentation (SGA) framework, which includes a structure augmentation module to identify candidate edges.
arXiv Detail & Related papers (2024-09-29T09:13:23Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN)
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- wsGAT: Weighted and Signed Graph Attention Networks for Link Prediction [0.0]
Graph Neural Networks (GNNs) have been widely used to learn representations on graphs and tackle many real-world problems.
We propose wsGAT, an extension of the Graph Attention Network (GAT) layers, to handle graphs with signed and weighted links.
Our results show that models with wsGAT layers outperform the ones with GCNII and SGCN layers, and that there is no loss in performance when signed weights are predicted.
arXiv Detail & Related papers (2021-09-21T12:07:51Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Learning Graph Representations [0.0]
Graph Neural Networks (GNNs) are efficient ways to get insight into large dynamic graph datasets.
In this paper, we discuss graph convolutional neural networks, graph autoencoders, and social-temporal graph neural networks.
arXiv Detail & Related papers (2021-02-03T12:07:55Z) - Self-supervised Graph Representation Learning via Bootstrapping [35.56360622521721]
We propose a new self-supervised graph representation method: deep graph bootstrapping (DGB).
DGB consists of two neural networks, an online network and a target network, whose inputs are different augmented views of the initial graph.
As a result, the proposed DGB can learn graph representation without negative examples in an unsupervised manner.
arXiv Detail & Related papers (2020-11-10T14:47:29Z) - Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z) - Contrastive and Generative Graph Convolutional Networks for Graph-based
Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z) - Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph
Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.