Bring Your Own View: Graph Neural Networks for Link Prediction with
Personalized Subgraph Selection
- URL: http://arxiv.org/abs/2212.12488v1
- Date: Fri, 23 Dec 2022 17:30:19 GMT
- Title: Bring Your Own View: Graph Neural Networks for Link Prediction with
Personalized Subgraph Selection
- Authors: Qiaoyu Tan, Xin Zhang, Ninghao Liu, Daochen Zha, Li Li, Rui Chen,
Soo-Hyun Choi, Xia Hu
- Abstract summary: We introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically and inductively identify personalized optimal subgraphs for different edges.
PS2 is instantiated as a bi-level optimization problem that can be solved efficiently and differentiably.
We suggest a new angle on GNNLP training: first identify the optimal subgraph for each edge, then train the inference model on the sampled subgraphs.
- Score: 57.34881616131377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved remarkable success in
link prediction (GNNLP) tasks. Existing efforts first predefine a subgraph
for the whole dataset and then apply GNNs to encode edge representations by
leveraging the neighborhood structure induced by this fixed subgraph. The
performance of GNNLP methods therefore relies heavily on the ad hoc
subgraph. Since node connectivity in real-world graphs is complex, a single
shared subgraph cannot serve all edges well; the choice of subgraph should
instead be personalized to each edge. However, performing personalized
subgraph selection is nontrivial, since the potential selection space grows
exponentially with the number of edges. Moreover, the inference edges are
not available during training in link prediction scenarios, so the
selection process must be inductive. To bridge this gap, we introduce the
Personalized Subgraph Selector (PS2), a plug-and-play framework that
automatically and inductively identifies personalized optimal subgraphs for
different edges when performing GNNLP. PS2 is instantiated as a bi-level
optimization problem that can be solved efficiently and differentiably.
Coupling GNNLP models with PS2, we suggest a new angle on GNNLP training:
first identify the optimal subgraph for each edge, then train the inference
model on the sampled subgraphs. Comprehensive experiments demonstrate the
effectiveness of our method across various GNNLP backbones (GCN, GraphSAGE,
NGCF, LightGCN, and SEAL) and diverse benchmarks (Planetoid, OGB, and
recommendation datasets). Our code is publicly available at
\url{https://github.com/qiaoyu-tan/PS2}.
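The training recipe above reduces to a compact loop. The following is a minimal, self-contained PyTorch sketch of the general idea, not the authors' implementation: an edge-wise selector scores a small set of candidate subgraph "views" per edge and is relaxed with Gumbel-softmax so the discrete choice stays differentiable; the link predictor is updated on training edges (inner level) while the selector is updated on held-out edges (outer level). All module names, shapes, and the toy data are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySelectorGNNLP(nn.Module):
    """Link predictor over K candidate 'views' of each edge's subgraph.

    The per-view linear encoders stand in for GNNs run on K candidate
    subgraphs; the selector yields edge-personalized mixture weights.
    """
    def __init__(self, dim: int, num_views: int):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_views)])
        self.selector = nn.Linear(2 * dim, num_views)

    def forward(self, x_u: torch.Tensor, x_v: torch.Tensor, tau: float = 1.0):
        # Edge-personalized view scores, relaxed with Gumbel-softmax so
        # the subgraph selection remains differentiable.
        weights = F.gumbel_softmax(self.selector(torch.cat([x_u, x_v], dim=-1)), tau=tau)
        # Encode the edge under every candidate view: (batch, K, dim).
        views = torch.stack([enc(x_u) * enc(x_v) for enc in self.encoders], dim=1)
        mixed = (weights.unsqueeze(-1) * views).sum(dim=1)
        return mixed.sum(dim=-1)  # one link logit per edge

torch.manual_seed(0)
dim, num_views, batch_size = 16, 2, 64
model = ToySelectorGNNLP(dim, num_views)
opt_inner = torch.optim.Adam(model.encoders.parameters(), lr=1e-2)  # predictor
opt_outer = torch.optim.Adam(model.selector.parameters(), lr=1e-2)  # selector

def sample_edges():
    # Toy endpoint features and binary link labels for a batch of edges.
    return (torch.randn(batch_size, dim), torch.randn(batch_size, dim),
            torch.randint(0, 2, (batch_size,)).float())

for step in range(100):
    # Inner level: fit the link predictor on training edges.
    x_u, x_v, y = sample_edges()
    loss = F.binary_cross_entropy_with_logits(model(x_u, x_v), y)
    opt_inner.zero_grad(); loss.backward(); opt_inner.step()
    # Outer level: update the subgraph selector on held-out edges.
    x_u, x_v, y = sample_edges()
    loss = F.binary_cross_entropy_with_logits(model(x_u, x_v), y)
    opt_outer.zero_grad(); loss.backward(); opt_outer.step()

In the full framework the candidate views would be actual subgraphs sampled around each edge and encoded by GNN backbones such as GCN or GraphSAGE; the linear encoders above merely stand in for them.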
Related papers
- A Flexible, Equivariant Framework for Subgraph GNNs via Graph Products and Graph Coarsening [18.688057947275112]
Subgraph Graph Neural Networks (Subgraph GNNs) enhance the expressivity of message-passing GNNs by representing graphs as sets of subgraphs.
Previous approaches suggested processing only subsets of subgraphs, selected either randomly or via learnable sampling.
This paper introduces a new Subgraph GNN framework that addresses the limitations of these sampling strategies.
Date: 2024-06-13T16:29:06Z
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world application of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the issue of node interdependence by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
Date: 2024-05-27T17:52:12Z
- NESS: Node Embeddings from Static SubGraphs [0.0]
We present a framework for learning Node Embeddings from Static Subgraphs (NESS) using a graph autoencoder (GAE) in a transductive setting.
NESS is based on two key ideas: i) partitioning the training graph into multiple static, sparse subgraphs with non-overlapping edges using a random edge split during data pre-processing (a minimal sketch of this split appears after this list).
We demonstrate that NESS gives a better node representation for link prediction tasks compared to current autoencoding methods that use either the whole graph or subgraphs.
Date: 2023-03-15T22:14:28Z
- DiP-GNN: Discriminative Pre-Training of Graph Neural Networks [49.19824331568713]
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
Our framework instead trains a generator to recover the masked edges and a discriminator to distinguish recovered edges from original ones; the graph seen by the discriminator better matches the original graph because the generator recovers a proportion of the masked edges.
Date: 2022-09-15T17:41:50Z
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
Date: 2022-03-03T09:53:53Z
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time by defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network.
Date: 2021-02-12T21:52:43Z
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
Date: 2020-06-27T20:12:33Z
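As referenced in the NESS entry above, the following is a minimal sketch of a random edge split that partitions a training graph's edge list into k static, edge-disjoint subgraphs; the function name and the toy edge list are illustrative assumptions, not the NESS implementation.

import torch

def random_edge_split(edge_index: torch.Tensor, k: int, seed: int = 0):
    """Partition a (2, E) edge list into k edge-disjoint subgraphs."""
    gen = torch.Generator().manual_seed(seed)
    perm = torch.randperm(edge_index.size(1), generator=gen)
    # Roughly equal, non-overlapping chunks of shuffled edge positions.
    return [edge_index[:, idx] for idx in perm.chunk(k)]

# Toy ring graph with 8 edges, split into 2 sparse subgraphs.
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5, 6, 7],
                           [1, 2, 3, 4, 5, 6, 7, 0]])
sub_a, sub_b = random_edge_split(edge_index, k=2)
assert sub_a.size(1) + sub_b.size(1) == edge_index.size(1)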