SGA: A Graph Augmentation Method for Signed Graph Neural Networks
- URL: http://arxiv.org/abs/2310.09705v1
- Date: Sun, 15 Oct 2023 02:19:07 GMT
- Title: SGA: A Graph Augmentation Method for Signed Graph Neural Networks
- Authors: Zeyu Zhang, Shuyan Wan, Sijie Wang, Xianda Zheng, Xinrui Zhang, Kaiqi Zhao, Jiamou Liu, Dong Hao
- Abstract summary: Signed Graph Neural Networks (SGNNs) are vital for analyzing complex patterns in real-world signed graphs containing positive and negative links.
We introduce the novel Signed Graph Augmentation framework (SGA), comprising three main components.
Our method outperforms baselines by up to 22.2% in AUC for SGCN on Wiki-RfA, 33.3% in F1-binary, 48.8% in F1-micro, and 36.3% in F1-macro for GAT on Bitcoin-alpha in link sign prediction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Signed Graph Neural Networks (SGNNs) are vital for analyzing complex patterns
in real-world signed graphs containing positive and negative links. However,
three key challenges hinder current SGNN-based signed graph representation
learning: sparsity in signed graphs leaves latent structures undiscovered,
unbalanced triangles pose representation difficulties for SGNN models, and
real-world signed graph datasets often lack supplementary information like node
labels and features. These constraints limit the potential of SGNN-based
representation learning. We address these issues with data augmentation
techniques. Despite many graph data augmentation methods existing for unsigned
graphs, none are tailored for signed graphs. Our paper introduces the novel
Signed Graph Augmentation framework (SGA), comprising three main components.
First, we employ the SGNN model to encode the signed graph, extracting latent
structural information for candidate augmentation structures. Second, we
evaluate these candidate samples (edges) and select the most beneficial ones
for modifying the original training set. Third, we propose a novel augmentation
perspective that assigns varying training difficulty to training samples,
enabling the design of a new training strategy. Extensive experiments on six
real-world datasets (Bitcoin-alpha, Bitcoin-otc, Epinions, Slashdot, Wiki-elec,
and Wiki-RfA) demonstrate that SGA significantly improves performance across
multiple benchmarks. Our method outperforms baselines by up to 22.2% in AUC for
SGCN on Wiki-RfA, 33.3% in F1-binary, 48.8% in F1-micro, and 36.3% in F1-macro
for GAT on Bitcoin-alpha in link sign prediction.
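The three components described in the abstract can be illustrated with a minimal sketch. The toy encoder, function names, and scoring rules below are simplifications assumed for exposition only, not the authors' actual SGA implementation:

```python
# Illustrative sketch of the SGA pipeline: (1) encode the signed graph,
# (2) score candidate edges for augmentation, (3) assign training difficulty.
# All names and rules here are hypothetical simplifications.
import math
import random

def encode_nodes(num_nodes, edges, dim=4, epochs=20, lr=0.05, seed=0):
    """Toy signed-graph encoder (a stand-in for an SGNN): embeddings are
    pulled together on positive edges and pushed apart on negative edges."""
    rng = random.Random(seed)
    emb = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(num_nodes)]
    for _ in range(epochs):
        for u, v, sign in edges:
            for d in range(dim):
                diff = emb[u][d] - emb[v][d]
                step = lr * diff * (1 if sign > 0 else -1)
                emb[u][d] -= step  # positive: attract; negative: repel
                emb[v][d] += step
    return emb

def score_candidate(emb, u, v):
    """Step 2 (sketch): score a candidate edge by cosine similarity of
    embeddings; high similarity suggests a beneficial latent positive edge."""
    dot = sum(a * b for a, b in zip(emb[u], emb[v]))
    nu = math.sqrt(sum(a * a for a in emb[u]))
    nv = math.sqrt(sum(b * b for b in emb[v]))
    return dot / (nu * nv + 1e-12)

def training_difficulty(emb, u, v, sign):
    """Step 3 (sketch): edges whose sign disagrees with embedding similarity
    count as 'harder', enabling a curriculum-style training order."""
    return 1.0 - sign * score_candidate(emb, u, v)  # roughly in [0, 2]

# Tiny signed graph given as (u, v, sign) triples.
edges = [(0, 1, +1), (1, 2, +1), (0, 3, -1), (2, 3, -1)]
emb = encode_nodes(4, edges)

# Candidate augmentation edges: node pairs not yet in the training set,
# ranked by score (step 2).
existing = {(a, b) for a, b, _ in edges}
candidates = sorted(
    ((u, v) for u in range(4) for v in range(u + 1, 4) if (u, v) not in existing),
    key=lambda p: score_candidate(emb, *p), reverse=True)

# Curriculum: train on easy samples first (step 3).
schedule = sorted(edges, key=lambda e: training_difficulty(emb, *e))
```

In this toy run, the two missing pairs (0, 2) and (1, 3) become augmentation candidates, and the four training edges are reordered from easiest to hardest.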
Related papers
- DropEdge not Foolproof: Effective Augmentation Method for Signed Graph Neural Networks [11.809853547011704]
The paper discusses signed graphs, which model friendly or antagonistic relationships using edges marked with positive or negative signs.
The authors propose data augmentation (DA) techniques to address the challenges of learning from such graphs.
They introduce the Signed Graph Augmentation (SGA) framework, which includes a structure augmentation module to identify candidate edges.
arXiv Detail & Related papers (2024-09-29T09:13:23Z)
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Self-attention Dual Embedding for Graphs with Heterophily [6.803108335002346]
A number of real-world graphs are heterophilic, and this leads to much lower classification accuracy using standard GNNs.
We design a novel GNN which is effective for both heterophilic and homophilic graphs.
We evaluate our algorithm on real-world graphs containing thousands to millions of nodes and show that we achieve state-of-the-art results.
arXiv Detail & Related papers (2023-05-28T09:38:28Z)
- Measuring the Privacy Leakage via Graph Reconstruction Attacks on Simplicial Neural Networks (Student Abstract) [25.053461964775778]
We study whether graph representations can be inverted to recover the graph used to generate them via a graph reconstruction attack (GRA).
We propose a GRA that recovers a graph's adjacency matrix from the representations via a graph decoder.
We find that the SNN outputs exhibit the lowest privacy-preserving ability against the GRA.
arXiv Detail & Related papers (2023-02-08T23:40:24Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
arXiv Detail & Related papers (2021-12-01T02:25:47Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, like in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferrability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
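The contrastive objective behind augmentation-based frameworks such as GraphCL can be illustrated with a minimal sketch. The feature-dropping augmentation and the NT-Xent-style loss below are generic simplifications assumed for illustration, not the paper's exact formulation:

```python
# Minimal sketch of contrastive learning over augmented views: two augmented
# views of the same graph are positives; other graphs in the batch are
# negatives. All details here are simplified assumptions.
import math
import random

def drop_features(vec, p, rng):
    """Augmentation: randomly zero out features (one of several
    perturbations used by augmentation-based contrastive frameworks)."""
    return [0.0 if rng.random() < p else x for x in vec]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1e-12
    nb = math.sqrt(sum(y * y for y in b)) or 1e-12
    return dot / (na * nb)

def nt_xent(views_a, views_b, tau=0.5):
    """NT-Xent-style loss: maximize agreement between the two views of each
    graph relative to its agreement with every other graph in the batch."""
    n = len(views_a)
    loss = 0.0
    for i in range(n):
        num = math.exp(cosine(views_a[i], views_b[i]) / tau)
        den = sum(math.exp(cosine(views_a[i], views_b[j]) / tau)
                  for j in range(n))
        loss += -math.log(num / den)
    return loss / n

# Toy batch of graph-level representations and two augmented views each.
rng = random.Random(0)
graph_reprs = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
view_a = [drop_features(g, 0.2, rng) for g in graph_reprs]
view_b = [drop_features(g, 0.2, rng) for g in graph_reprs]
loss = nt_xent(view_a, view_b)
```

Minimizing this loss pulls the two views of each graph together in embedding space, which is the mechanism behind the generalizability and robustness claims above.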
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.