wsGAT: Weighted and Signed Graph Attention Networks for Link Prediction
- URL: http://arxiv.org/abs/2109.11519v1
- Date: Tue, 21 Sep 2021 12:07:51 GMT
- Title: wsGAT: Weighted and Signed Graph Attention Networks for Link Prediction
- Authors: Marco Grassia, Giuseppe Mangioni
- Abstract summary: Graph Neural Networks (GNNs) have been widely used to learn representations on graphs and tackle many real-world problems.
We propose wsGAT, an extension of the Graph Attention Network (GAT) layers, to handle graphs with signed and weighted links.
Our results show that models with wsGAT layers outperform the ones with GCNII and SGCN layers, and that there is no loss in performance when signed weights are predicted.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have been widely used to learn representations
on graphs and tackle many real-world problems from a wide range of domains. In
this paper we propose wsGAT, an extension of the Graph Attention Network (GAT)
layers, meant to address the lack of GNNs that can handle graphs with signed
and weighted links, which are ubiquitous, for instance, in trust and
correlation networks. We first evaluate the performance of our proposal by
comparing against GCNII in the weighted link prediction task, and against SGCN
in the link sign prediction task. We then combine the two tasks and evaluate the
models on predicting both the signed weight of links and their
existence. Our results on real-world networks show that models with wsGAT
layers outperform the ones with GCNII and SGCN layers, and that there is no
loss in performance when signed weights are predicted.
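To make the mechanism concrete, here is a minimal PyTorch sketch of how a GAT-style attention layer might take signed, weighted links into account. It illustrates the general idea only: the class name, the way the raw edge weight enters the attention logit, and the sign-scaled aggregation are illustrative assumptions, not the exact wsGAT formulation from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignedWeightedGATLayer(nn.Module):
    """Illustrative GAT-style layer whose attention logits also see a signed,
    real-valued edge weight. A sketch of the general idea only; NOT the exact
    wsGAT formulation from the paper."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)      # shared node projection
        self.attn = nn.Linear(2 * out_dim + 1, 1, bias=False)   # scores [h_i || h_j || w_ij]

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor,
                edge_weight: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; edge_index: (2, E) source/target ids;
        # edge_weight: (E,) signed link weights.
        h = self.proj(x)
        src, dst = edge_index
        # Attention logit conditioned on both endpoint embeddings and the signed weight.
        logits = F.leaky_relu(
            self.attn(torch.cat([h[src], h[dst], edge_weight.unsqueeze(-1)], dim=-1))
        ).squeeze(-1)
        # Softmax over each node's incoming edges (a real implementation would
        # use a scatter-based softmax instead of this O(N*E) loop).
        alpha = torch.empty_like(logits)
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = F.softmax(logits[mask], dim=0)
        # Aggregate messages, scaled by attention and by the edge sign, so that
        # negative links (e.g., distrust) contribute repulsively.
        out = torch.zeros_like(h)
        out.index_add_(0, dst, (alpha * edge_weight.sign()).unsqueeze(-1) * h[src])
        return out

# Example usage on a toy signed, weighted graph:
layer = SignedWeightedGATLayer(in_dim=16, out_dim=8)
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_weight = torch.tensor([0.8, -0.5, 1.2, -2.0])
out = layer(x, edge_index, edge_weight)   # (5, 8) node embeddings
```

Conditioning the attention logit on the raw weight lets the layer learn weight-dependent attention, while scaling messages by the edge sign lets negative links push neighboring representations apart instead of being discarded.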
Related papers
- DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks [11.809853547011704]
The paper discusses signed graphs, which model friendly or antagonistic relationships using edges marked with positive or negative signs.
The authors propose using data augmentation (DA) techniques to address the challenges that signed graphs pose for GNN training.
They introduce the Signed Graph Augmentation (SGA) framework, which includes a structure augmentation module to identify candidate edges.
arXiv Detail & Related papers (2024-09-29T09:13:23Z)
- Graph Contrastive Learning with Generative Adversarial Network [35.564028359355596]
Graph generative adversarial networks (GANs) learn the distribution of views for Graph Contrastive Learning (GCL).
We present GACN, a novel Generative Adversarial Contrastive learning Network for graph representation learning.
We show that GACN is able to generate high-quality augmented views for GCL and is superior to twelve state-of-the-art baseline methods.
arXiv Detail & Related papers (2023-08-01T13:28:24Z)
- Learnable Graph Convolutional Attention Networks [7.465923786151107]
Graph Neural Networks (GNNs) compute the message exchange between nodes by either aggregating uniformly (convolving) the features of all the neighboring nodes, or by applying a non-uniform score (attending) to the features.
Recent works have shown the respective strengths and weaknesses of the resulting GNN architectures, GCNs and GATs.
We introduce the graph convolutional attention layer (CAT), which relies on convolutions to compute the attention scores.
Our results demonstrate that L-CAT is able to efficiently combine different GNN layers along the network, outperforming competing methods in a wide range of datasets.
arXiv Detail & Related papers (2022-11-21T21:08:58Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Graph Attention Networks with Positional Embeddings [7.552100672006174]
Graph Neural Networks (GNNs) are deep learning methods that provide current state-of-the-art performance in node classification tasks.
We propose a framework, termed Graph Attentional Networks with Positional Embeddings (GAT-POS), to enhance GATs with positional embeddings.
We show that GAT-POS reaches remarkable improvement compared to strong GNN baselines and recent structural embedding enhanced GNNs on non-homophilic graphs.
arXiv Detail & Related papers (2021-05-09T22:13:46Z)
- Benchmarking Graph Neural Networks on Link Prediction [80.2049358846658]
We benchmark several existing graph neural network (GNN) models on different datasets for link prediction.
Our experiments show these GNN architectures perform similarly on various benchmarks for link prediction tasks.
arXiv Detail & Related papers (2021-02-24T20:57:16Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Signed Graph Diffusion Network [17.20546861491478]
Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges?
We propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs.
arXiv Detail & Related papers (2020-12-28T11:08:30Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs.
arXiv Detail & Related papers (2020-06-13T23:00:22Z)
- Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.