DiP-GNN: Discriminative Pre-Training of Graph Neural Networks
- URL: http://arxiv.org/abs/2209.07499v1
- Date: Thu, 15 Sep 2022 17:41:50 GMT
- Title: DiP-GNN: Discriminative Pre-Training of Graph Neural Networks
- Authors: Simiao Zuo, Haoming Jiang, Qingyu Yin, Xianfeng Tang, Bing Yin, Tuo
Zhao
- Abstract summary: Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges.
- Score: 49.19824331568713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural network (GNN) pre-training methods have been proposed to enhance
the power of GNNs. Specifically, a GNN is first pre-trained on a large-scale
unlabeled graph and then fine-tuned on a separate small labeled graph for
downstream applications, such as node classification. One popular pre-training
method is to mask out a proportion of the edges, and a GNN is trained to
recover them. However, such a generative method suffers from graph mismatch.
That is, the masked graph input to the GNN deviates from the original graph.
To alleviate this issue, we propose DiP-GNN (Discriminative Pre-training of
Graph Neural Networks). Specifically, we train a generator to recover
identities of the masked edges, and simultaneously, we train a discriminator to
distinguish the generated edges from the original graph's edges. In our
framework, the graph seen by the discriminator better matches the original
graph because the generator can recover a proportion of the masked edges.
Extensive experiments on large-scale homogeneous and heterogeneous graphs
demonstrate the effectiveness of the proposed framework.
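To make the generator-discriminator setup concrete, here is a minimal sketch of one possible pre-training step, assuming generic message-passing encoders and a dot-product edge scorer; the function names and loss weighting are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of discriminative edge pre-training (not the authors' code).
# Assumed: `encoder_g` and `encoder_d` are any message-passing GNN encoders mapping
# (node_features, edge_index) -> node embeddings; `x` is [N, d], `edge_index` is [2, E].
def dip_gnn_step(encoder_g, encoder_d, x, edge_index, mask_ratio=0.15):
    num_edges = edge_index.size(1)
    num_mask = int(mask_ratio * num_edges)
    perm = torch.randperm(num_edges)
    masked, kept = perm[:num_mask], perm[num_mask:]

    # Generator: sees the graph with the masked edges removed and predicts, for each
    # masked edge's source node, which node it was connected to.
    h_g = encoder_g(x, edge_index[:, kept])
    src = edge_index[0, masked]
    logits = h_g[src] @ h_g.t()                      # score every node as a candidate target
    gen_loss = F.cross_entropy(logits, edge_index[1, masked])
    gen_tgt = logits.argmax(dim=-1)                  # generator's recovered targets

    # Discriminator: sees the kept edges plus the generator's edges and classifies each
    # edge as original (kept, or correctly recovered) versus generated/replaced.
    gen_edges = torch.stack([src, gen_tgt])
    disc_edge_index = torch.cat([edge_index[:, kept], gen_edges], dim=1)
    labels = torch.cat([
        torch.ones(kept.numel()),
        (gen_tgt == edge_index[1, masked]).float(),
    ])
    h_d = encoder_d(x, disc_edge_index)
    edge_scores = (h_d[disc_edge_index[0]] * h_d[disc_edge_index[1]]).sum(dim=-1)
    disc_loss = F.binary_cross_entropy_with_logits(edge_scores, labels)

    return gen_loss + disc_loss                      # joint objective; the weighting is a free choice
```

Because the discriminator's input mixes kept edges with edges the generator recovered correctly, its graph stays closer to the original than a heavily masked one, which is the mismatch-reduction argument made in the abstract.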
Related papers
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - SoftEdge: Regularizing Graph Classification with Random Soft Edges [18.165965620873745]
Graph data augmentation plays a vital role in regularizing Graph Neural Networks (GNNs).
Simple edge and node manipulations can create graphs whose structures are identical, or indistinguishable to message-passing GNNs, yet carry conflicting labels.
We propose SoftEdge, which assigns random weights to a portion of the edges of a given graph to construct dynamic neighborhoods over the graph.
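As a rough illustration of this idea (the sampling scheme and parameter names below are assumptions, not the paper's exact procedure), soft weights can be drawn for a random portion of the edges and fed to any weighted message-passing GNN:

```python
import torch

def soft_edge(edge_index, p=0.2):
    """Assign random soft weights in (0, 1) to a random fraction p of edges; keep the rest at 1."""
    num_edges = edge_index.size(1)
    edge_weight = torch.ones(num_edges)
    chosen = torch.rand(num_edges) < p                    # pick a portion of the edges
    edge_weight[chosen] = torch.rand(int(chosen.sum()))   # random weights in (0, 1)
    return edge_index, edge_weight                        # pass to a weighted message-passing GNN
```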
arXiv Detail & Related papers (2022-04-21T20:12:36Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$^2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$^2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
arXiv Detail & Related papers (2021-12-01T02:25:47Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
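A minimal sketch of this increase-and-conquer idea follows, assuming the graphon is available as a callable W(u, v) and that a task-specific loss function exists; both are hypothetical stand-ins rather than the paper's exact algorithm.

```python
import torch

def sample_from_graphon(W, n):
    """Bernoulli-sample an n-node undirected graph from a graphon W: [0,1]^2 -> [0,1]."""
    u = torch.rand(n)                              # latent node positions
    probs = W(u.unsqueeze(1), u.unsqueeze(0))      # pairwise edge probabilities
    adj = torch.bernoulli(probs.triu(1))           # sample the upper triangle
    return adj + adj.t()                           # symmetrize (no self-loops)

def train_on_growing_graphs(gnn, W, task_loss, sizes=(100, 200, 400, 800), steps=50):
    opt = torch.optim.Adam(gnn.parameters())
    for n in sizes:                                # successively larger sampled graphs
        adj = sample_from_graphon(W, n)
        for _ in range(steps):
            loss = task_loss(gnn, adj)             # hypothetical task-specific loss
            opt.zero_grad()
            loss.backward()
            opt.step()
```

For a quick test of the sampler, something like W = lambda u, v: 0.3 * torch.exp(-(u - v).abs()) is a valid graphon, since its values lie in (0, 0.3].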
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z) - Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling [31.812988573924674]
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties.
We propose the Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology.
NDP is more efficient than state-of-the-art graph pooling operators while reaching competitive performance on a significant variety of graph classification tasks.
arXiv Detail & Related papers (2019-10-24T21:42:12Z)