Residual Network and Embedding Usage: New Tricks of Node Classification
with Graph Convolutional Networks
- URL: http://arxiv.org/abs/2105.08330v1
- Date: Tue, 18 May 2021 07:52:51 GMT
- Title: Residual Network and Embedding Usage: New Tricks of Node Classification
with Graph Convolutional Networks
- Authors: Huixuan Chi, Yuying Wang, Qinfen Hao, Hong Xia
- Abstract summary: We first summarize some existing effective tricks used in mini-batch training of GCNs.
Based on this, two novel tricks named GCN_res Framework and Embedding Usage are proposed.
Experiments on Open Graph Benchmark show that, by combining these techniques, the test accuracy of various GCNs increases by 1.21%~2.84%.
- Score: 0.38233569758620045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Networks (GCNs) and subsequent variants have been
proposed to solve tasks on graphs, especially node classification tasks. In the
literature, however, most tricks or techniques are either briefly mentioned as
implementation details or only visible in source code. In this paper, we first
summarize some existing effective tricks used in mini-batch training of GCNs.
Based on this, two novel tricks named GCN_res Framework and Embedding Usage are
proposed by leveraging residual networks and pre-trained embeddings to improve
the baselines' test accuracy on different datasets. Experiments on Open Graph
Benchmark (OGB) show that, by combining these techniques, the test accuracy of
various GCNs increases by 1.21%~2.84%. We open source our implementation at
https://github.com/ytchx1999/PyG-OGB-Tricks.
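The two tricks can be sketched in a few lines, assuming a simple dense formulation. The function names, the residual mixing coefficient `alpha`, and the feature-concatenation scheme below are illustrative assumptions, not the paper's exact implementation (see the linked repository for that):

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_res_layer(H, A_norm, W, alpha=0.2):
    """One GCN propagation step with a residual (skip) connection.

    Mixing the propagated signal with the layer input helps deeper
    stacks retain node-specific information; this mixing rule is a
    hypothetical stand-in for the paper's GCN_res Framework.
    """
    Z = np.maximum(A_norm @ H @ W, 0.0)  # ReLU(A_norm . H . W)
    return (1.0 - alpha) * Z + alpha * H

def with_pretrained_embedding(X, E):
    """Embedding Usage sketch: concatenate pre-trained node
    embeddings E with the raw features X before the first layer."""
    return np.concatenate([X, E], axis=1)
```

The residual mix keeps each layer's output close to its input, while the concatenated pre-trained embedding enriches the input features before any propagation happens.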
Related papers
- Graph Convolutional Network For Semi-supervised Node Classification With Subgraph Sketching [0.27624021966289597]
We propose the Graph-Learning-Dual Graph Convolutional Neural Network called GLDGCN.
We apply GLDGCN to the semi-supervised node classification task.
Compared with the baseline methods, we achieve higher classification accuracy on three citation networks.
arXiv Detail & Related papers (2024-04-19T09:08:12Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- SeedGNN: Graph Neural Networks for Supervised Seeded Graph Matching [23.60042970011082]
This paper proposes a new supervised approach to match unseen graphs with only a few seeds.
Our architecture incorporates several novel designs, inspired by theoretical studies of seeded graph matching.
We evaluate SeedGNN on synthetic and real-world graphs and demonstrate significant performance improvements over both non-learning and learning algorithms.
arXiv Detail & Related papers (2022-05-26T23:50:42Z)
- DOTIN: Dropping Task-Irrelevant Nodes for GNNs [119.17997089267124]
Recent graph learning approaches have introduced the pooling strategy to reduce the size of graphs for learning.
We design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs.
Our method speeds up GAT by about 50% on graph-level tasks including graph classification and graph edit distance.
arXiv Detail & Related papers (2022-04-28T12:00:39Z)
- Scalable Graph Neural Networks via Bidirectional Propagation [89.70835710988395]
Graph Neural Networks (GNNs) are an emerging approach for learning on non-Euclidean data.
This paper presents GBP, a scalable GNN that utilizes a localized bidirectional propagation process from both the feature vectors and the training/testing nodes.
An empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time.
arXiv Detail & Related papers (2020-10-29T08:55:33Z)
- Bi-GCN: Binary Graph Convolutional Network [57.733849700089955]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node features.
Our Bi-GCN can reduce the memory consumption by an average of 30x for both the network parameters and input data, and accelerate the inference speed by an average of 47x.
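A generic sketch of what binarizing a weight matrix with a scaling factor looks like. This is an XNOR-Net-style sign-and-scale scheme, used here only to illustrate the idea; Bi-GCN's exact binarization may differ, and the XNOR/popcount kernel that delivers the speedup is only noted in a comment:

```python
import numpy as np

def binarize(W):
    """Binarize a real-valued matrix to {-1, +1} plus one scale.

    The scale (mean absolute value) keeps the binary product on the
    same magnitude as the full-precision one; storing 1 bit per entry
    plus one float is where the ~30x memory saving comes from.
    """
    scale = np.abs(W).mean()
    W_bin = np.where(W >= 0.0, 1.0, -1.0)  # avoid sign(0) == 0
    return W_bin, scale

def binary_matmul(X_bin, W_bin, scale):
    """With both operands in {-1, +1}, a real implementation replaces
    this float matmul with XNOR + popcount; floats are used here
    only for clarity."""
    return scale * (X_bin @ W_bin)
```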
arXiv Detail & Related papers (2020-10-15T07:26:23Z)
- Simple and Deep Graph Convolutional Networks [63.76221532439285]
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data.
Despite their success, most current GCN models are shallow due to the over-smoothing problem.
We propose the GCNII, an extension of the vanilla GCN model with two simple yet effective techniques.
arXiv Detail & Related papers (2020-07-04T16:18:06Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolutional Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.