Two-stage Training of Graph Neural Networks for Graph Classification
- URL: http://arxiv.org/abs/2011.05097v4
- Date: Fri, 8 Apr 2022 07:48:03 GMT
- Title: Two-stage Training of Graph Neural Networks for Graph Classification
- Authors: Manh Tuan Do, Noseong Park, Kijung Shin
- Abstract summary: Graph neural networks (GNNs) have received massive attention in the field of machine learning on graphs.
In this work, we propose a two-stage training framework based on triplet loss.
We demonstrate consistent improvements in accuracy and in the utilization of each GNN's capacity over the original training method.
- Score: 24.11791971161264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have received massive attention in the field of
machine learning on graphs. Inspired by the success of neural networks, a line
of research has been conducted to train GNNs to deal with various tasks, such
as node classification, graph classification, and link prediction. In this
work, our task of interest is graph classification. Several GNN models have
been proposed and have achieved high accuracy on this task. However, the question
remains whether the usual training methods fully realize the capacity of these models.
In this work, we propose a two-stage training framework based on triplet
loss. In the first stage, the GNN is trained to map each graph to a Euclidean-space
vector so that graphs of the same class are close while those of different
classes are mapped far apart. Once the graphs are well separated by label, a
classifier is trained to distinguish between the classes. This method is
generic in the sense that it is compatible with any GNN model. By adapting five
GNN models to our method, we demonstrate consistent improvements in accuracy,
and in the utilization of each GNN's capacity, over each model's original
training method, by up to 5.4 percentage points across 12 datasets.
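As a rough illustration of the two-stage framework, here is a minimal sketch in PyTorch. It is not the authors' implementation: `GNNEncoder`, `loader`, and `num_classes` are hypothetical stand-ins for any graph-level GNN, a mini-batch loader yielding (graphs, labels), and the dataset's number of classes.

```python
# Minimal sketch of the two-stage framework (illustration only).
import torch
import torch.nn as nn

def sample_triplets(z, y):
    """Sample (anchor, positive, negative) embeddings within a batch."""
    a_idx, p_idx, n_idx = [], [], []
    for i in range(len(y)):
        pos = [j for j in (y == y[i]).nonzero(as_tuple=True)[0].tolist() if j != i]
        neg = (y != y[i]).nonzero(as_tuple=True)[0].tolist()
        if pos and neg:
            a_idx.append(i)
            p_idx.append(pos[torch.randint(len(pos), (1,)).item()])
            n_idx.append(neg[torch.randint(len(neg), (1,)).item()])
    pick = lambda idx: z[torch.tensor(idx, dtype=torch.long)]
    return pick(a_idx), pick(p_idx), pick(n_idx)

encoder = GNNEncoder(out_dim=64)          # hypothetical graph-level GNN
triplet = nn.TripletMarginLoss(margin=1.0)
opt1 = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Stage 1: metric learning. Graphs of the same class are pulled together,
# graphs of different classes are pushed apart in the embedding space.
for graphs, labels in loader:             # hypothetical (graphs, labels) loader
    z = encoder(graphs)                   # (batch_size, 64) graph embeddings
    a, p, n = sample_triplets(z, labels)
    if a.numel() == 0:
        continue                          # no valid triplet in this batch
    loss = triplet(a, p, n)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: with the encoder fixed, train a classifier on the
# now well-separated embeddings.
clf = nn.Linear(64, num_classes)          # num_classes: dataset's class count
ce = nn.CrossEntropyLoss()
opt2 = torch.optim.Adam(clf.parameters(), lr=1e-3)
for graphs, labels in loader:
    with torch.no_grad():
        z = encoder(graphs)               # encoder is frozen in stage 2
    loss = ce(clf(z), labels)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

Freezing the encoder in the second stage is one simple choice for this sketch; the key point of the framework is that any GNN model can play the role of the encoder.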
Related papers
- You do not have to train Graph Neural Networks at all on text-attributed graphs [25.044734252779975]
We introduce TrainlessGNN, a linear GNN model capitalizing on the observation that text encodings from the same class often cluster together in a linear subspace.
Our experiments reveal that our trainless models can match or even surpass their conventionally trained counterparts.
arXiv Detail & Related papers (2024-04-17T02:52:11Z)
- Classifying Nodes in Graphs without GNNs [50.311528896010785]
We propose a fully GNN-free approach to node classification, requiring no GNNs at either training or test time.
Our method consists of three key components: smoothness constraints, pseudo-labeling iterations, and neighborhood-label histograms; a hedged sketch of the histogram feature follows this entry.
arXiv Detail & Related papers (2024-02-08T18:59:30Z)
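One of the components named in the entry above, the neighborhood-label histogram, admits a short sketch. The following is a hedged reading of the idea rather than the paper's code: each node is featurized by the distribution of known class labels among its neighbors.

```python
# Hedged sketch of a neighborhood-label histogram feature (one plausible
# reading of the component named above, not the paper's implementation).
import numpy as np

def neighbor_label_histograms(adj, labels, num_classes):
    """adj: list of neighbor lists; labels[i] is a class id, or -1 if unknown."""
    feats = np.zeros((len(adj), num_classes))
    for v, neighbors in enumerate(adj):
        for u in neighbors:
            if labels[u] >= 0:            # count only labeled neighbors
                feats[v, labels[u]] += 1
        total = feats[v].sum()
        if total > 0:
            feats[v] /= total             # normalize counts to a distribution
    return feats

# Example: a 4-node path graph with two known labels.
adj = [[1], [0, 2], [1, 3], [2]]
labels = [0, -1, 1, -1]
print(neighbor_label_histograms(adj, labels, num_classes=2))
```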
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a class of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite their effectiveness, it remains challenging for GNNs to scale efficiently to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon; a toy sampling sketch follows this entry.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
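The graphon idea above can be made concrete with a toy sampler. The sketch below is an assumption-laden illustration, not the paper's procedure: it draws a sequence of growing graphs from a fixed example graphon `W`, which could then serve as training graphs of increasing size.

```python
# Toy sketch: sampling a growing sequence of graphs from a graphon
# (illustration only; the example graphon W is an assumption). A graphon is
# a symmetric function W: [0,1]^2 -> [0,1]; an n-node graph is drawn by
# sampling latent positions u_i ~ Uniform(0,1) and connecting nodes i and j
# with probability W(u_i, u_j).
import numpy as np

def sample_from_graphon(W, n, rng):
    u = rng.uniform(size=n)                    # latent node positions
    probs = W(u[:, None], u[None, :])          # pairwise edge probabilities
    upper = np.triu(rng.uniform(size=(n, n)) < probs, 1)  # no self-loops
    return (upper | upper.T).astype(int)       # symmetric adjacency matrix

W = lambda x, y: 0.9 * np.exp(-3.0 * np.abs(x - y))  # example graphon
rng = np.random.default_rng(0)
for n in (50, 100, 200, 400):                  # growing sequence of graphs
    A = sample_from_graphon(W, n, rng)
    # ... train or fine-tune the GNN on the n-node graph A here ...
    print(n, "nodes,", A.sum() // 2, "edges")
```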
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Meta-Inductive Node Classification across Graphs [6.0471030308057285]
We propose a novel meta-inductive framework called MI-GNN to customize the inductive model to each graph.
MI-GNN does not directly learn an inductive model; it learns the general knowledge of how to train a model for semi-supervised node classification on new graphs.
Extensive experiments on five real-world graph collections demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-05-14T09:16:28Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models that lack pre-training, by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
However, GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.