MG-GCN: Fast and Effective Learning with Mix-grained Aggregators for
Training Large Graph Convolutional Networks
- URL: http://arxiv.org/abs/2011.09900v1
- Date: Tue, 17 Nov 2020 14:51:57 GMT
- Title: MG-GCN: Fast and Effective Learning with Mix-grained Aggregators for
Training Large Graph Convolutional Networks
- Authors: Tao Huang, Yihan Zhang, Jiajing Wu, Junyuan Fang, Zibin Zheng
- Abstract summary: Graph convolutional networks (GCNs) generate the embeddings of nodes by aggregating the information of their neighbors layer by layer.
The high computational and memory cost of GCNs makes training on large graphs infeasible.
A new model, named Mix-grained GCN (MG-GCN), achieves state-of-the-art performance in terms of accuracy, training speed, convergence speed, and memory cost.
- Score: 20.07942308916373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional networks (GCNs) have recently been employed as a
significant tool in many graph-based applications. Inspired by convolutional
neural networks (CNNs), GCNs generate the embeddings of nodes by aggregating
the information of their neighbors layer by layer. However, the high
computational and memory cost of GCNs, caused by the recursive neighborhood
expansion across GCN layers, makes training on large graphs infeasible. To
tackle this issue, several sampling methods have been proposed to train GCNs
in a mini-batch Stochastic Gradient Descent (SGD) manner by sampling nodes
during information aggregation. Nevertheless, these sampling strategies
sometimes collect insufficient neighborhood information, which may hinder
learning performance in terms of accuracy and convergence. To resolve the
dilemma between accuracy and efficiency, we propose to use aggregators with
different granularities to gather neighborhood information in different
layers. A degree-based sampling strategy, which avoids exponential
complexity, is then constructed to sample a fixed number of nodes. Combining
these two mechanisms, the proposed model, named Mix-grained GCN (MG-GCN),
achieves state-of-the-art performance in terms of accuracy, training speed,
convergence speed, and memory cost in a comprehensive set of experiments on
four commonly used benchmark datasets and a new Ethereum dataset.
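The abstract names two mechanisms but gives no equations here, so the following is a minimal NumPy sketch under stated assumptions: a degree-based sampler that draws a fixed number of neighbors per node (sampling proportional to neighbor degree is an illustrative choice, not necessarily the paper's exact rule), plus two aggregators of different granularity that could be mixed across layers. All names (`degree_based_sample`, `fine_grained`, `coarse_grained`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def degree_based_sample(adj_list, degrees, nodes, k):
    """Draw a fixed number k of neighbors per node, here with probability
    proportional to neighbor degree (an illustrative stand-in for the
    paper's degree-based strategy); a fixed fan-out avoids the exponential
    cost of recursive full-neighborhood expansion."""
    out = {}
    for v in nodes:
        nbrs = np.asarray(adj_list[v])
        if nbrs.size <= k:
            out[v] = nbrs
        else:
            p = degrees[nbrs] / degrees[nbrs].sum()
            out[v] = rng.choice(nbrs, size=k, replace=False, p=p)
    return out

def fine_grained(X, nodes, sampled, W):
    """Fine-grained aggregator: per-node mean over its sampled neighbors."""
    agg = np.stack([X[sampled[v]].mean(axis=0) for v in nodes])
    return np.maximum(agg @ W, 0.0)  # ReLU

def coarse_grained(A_norm, X, W):
    """Coarse-grained aggregator: one pass over the full normalized
    adjacency, precomputable and sampling-free."""
    return np.maximum(A_norm @ X @ W, 0.0)
```

Which layer receives which granularity is exactly the design choice the "mix-grained" name refers to; the paper's actual assignment and normalization are not reproduced here.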
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
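As a rough illustration of the summary above: the approximation replaces full neighborhood aggregation with a learned sparse weighted sum of linearly transformed features. A minimal NumPy sketch, assuming a shared linear transform and a precomputed small support set per node (`support`, `coeff`, and the function name are hypothetical; learning them is the paper's actual contribution):

```python
import numpy as np

def sparse_approx_representation(X, W, support, coeff):
    """Approximate each node's representation as a weighted sum of the
    linearly transformed features of a small 'support' set of nodes,
    instead of aggregating over the full (multi-hop) neighborhood.
    support[v]: indices of the few nodes kept for v; coeff[v]: weights."""
    Z = X @ W                             # shared linear transform
    H = np.zeros_like(Z)
    for v in range(X.shape[0]):
        H[v] = coeff[v] @ Z[support[v]]   # sparse weighted sum
    return H
```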
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
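One plausible reading of the random-projection idea in the entry above, sketched in NumPy: aggregate once per relation, then project each result to a fixed width so the outputs stack into regular-shaped tensors. This is an assumption-laden sketch, not RpHGNN's exact operator:

```python
import numpy as np

rng = np.random.default_rng(0)

def precompute_with_random_projection(rel_adjs, X, d_out):
    """One-time message passing per relation, then a Gaussian random
    projection so the concatenated result keeps a fixed width regardless
    of how many relations the heterogeneous graph has."""
    parts = []
    for A in rel_adjs:                       # one adjacency per relation
        M = A @ X                            # one-time neighbor aggregation
        R = rng.normal(0.0, 1.0 / np.sqrt(d_out), size=(M.shape[1], d_out))
        parts.append(M @ R)                  # project to fixed dimension
    return np.concatenate(parts, axis=1)     # regular-shaped input for an MLP
```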
- Neighborhood Convolutional Network: A New Paradigm of Graph Neural Networks for Node Classification [12.062421384484812]
The decoupled Graph Convolutional Network (GCN) separates neighborhood aggregation and feature transformation in each convolutional layer.
In this paper, we propose a new paradigm of GCN, termed Neighborhood Convolutional Network (NCN).
In this way, the model inherits the merit of decoupled GCN for aggregating neighborhood information while developing much more powerful feature learning modules.
arXiv Detail & Related papers (2022-11-15T02:02:51Z)
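The decoupled design described above separates a parameter-free aggregation stage from the feature-learning stage. A minimal NumPy sketch of that pattern (the MLP stand-in is illustrative, not NCN's actual feature module):

```python
import numpy as np

def decoupled_forward(A_norm, X, Ws, hops=2):
    """Decoupled design: run parameter-free neighborhood aggregation first
    (it can be precomputed once), then apply an arbitrarily powerful
    feature module -- here a small MLP -- on the smoothed features."""
    H = X
    for _ in range(hops):          # aggregation, no learnable weights
        H = A_norm @ H
    for W in Ws[:-1]:              # feature learning, no graph structure
        H = np.maximum(H @ W, 0.0)
    return H @ Ws[-1]
```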
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble training scheme, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Flattened Graph Convolutional Networks For Recommendation [18.198536511983452]
This paper proposes the flattened GCN (FlatGCN) model, which achieves superior performance with remarkably lower complexity than existing models.
First, we propose a simplified but powerful GCN architecture which aggregates the neighborhood information using one flattened GCN layer.
Second, we propose an informative neighbor-infomax sampling method to select the most valuable neighbors by measuring the correlation among neighboring nodes.
Third, we propose a layer ensemble technique which improves the expressiveness of the learned representations by assembling the layer-wise neighborhood representations at the final layer.
arXiv Detail & Related papers (2022-09-25T12:53:50Z)
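A hedged NumPy sketch of the entry above: a correlation score stands in for the paper's neighbor-infomax measure, and a single flattened layer averages only the selected neighbors. Function names and the scoring choice are assumptions:

```python
import numpy as np

def informative_neighbors(X, adj_list, v, top):
    """Keep the 'top' neighbors of v with the highest feature correlation
    (a plausible stand-in for the paper's neighbor-infomax measure)."""
    nbrs = np.asarray(adj_list[v])          # assumes v has >= 1 neighbor
    x = X[v] - X[v].mean()
    scores = []
    for u in nbrs:
        y = X[u] - X[u].mean()
        denom = np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
        scores.append(float(x @ y) / denom)
    return nbrs[np.argsort(scores)[-top:]]

def flattened_aggregate(X, adj_list, top=8):
    """One flattened aggregation layer over the selected neighbors only."""
    return np.stack([X[informative_neighbors(X, adj_list, v, top)].mean(axis=0)
                     for v in range(X.shape[0])])
```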
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
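For intuition, the entry above builds on pruning gradually within a single training run. Below is a generic gradual magnitude-pruning sketch in NumPy; the cubic schedule and weight-only pruning are common stand-ins, while CGP itself also prunes graph structure, which is not reproduced here:

```python
import numpy as np

def sparsity_at(step, total_steps, final_sparsity):
    """Cubic schedule: sparsity grows gradually over training, so pruning
    happens within one run -- no LTH-style re-training."""
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def prune_by_magnitude(W, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * W.size)
    if k == 0:
        return W
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)
```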
- Non-Recursive Graph Convolutional Networks [33.459371861932574]
We propose a novel architecture named Non-Recursive Graph Convolutional Network (NRGCN) to improve both the training efficiency and the learning performance of GCNs.
NRGCN represents different hops of neighbors for each node based on inner-layer aggregation and layer-independent sampling.
In this way, each node can be directly represented by concatenating the information extracted independently from each hop of its neighbors.
arXiv Detail & Related papers (2021-05-09T08:12:18Z)
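The non-recursive idea above can be sketched directly: compute each hop's aggregation independently and concatenate, so no recursive layer-by-layer expansion is needed at training time. A minimal NumPy sketch under that reading (not NRGCN's exact inner-layer aggregation):

```python
import numpy as np

def nrgcn_style_features(A_norm, X, hops=3):
    """Represent each node by concatenating the information extracted
    independently from each hop of its neighbors; the result can feed
    any downstream classifier."""
    reps, H = [X], X
    for _ in range(hops):
        H = A_norm @ H                       # k-th hop smoothing
        reps.append(H)
    return np.concatenate(reps, axis=1)
```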
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that combines the sampling procedure and message passing of GNNs into a single learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
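Very loosely, the meta-policy above decides, per node, how much aggregation to apply. The toy NumPy sketch below uses a linear softmax policy over hop counts; Policy-GNN actually trains its policy with reinforcement learning, so everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_hops(x, W_pi, max_hops):
    """Toy 'meta-policy': score hop counts 1..max_hops from the node's
    own features and sample one (purely illustrative)."""
    logits = x @ W_pi                        # W_pi: (features, max_hops)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(max_hops, p=p) + 1

def per_node_aggregate(A_norm, X, W_pi, max_hops=3):
    """Embed each node with its own number of aggregation iterations."""
    powers, H = [X], X
    for _ in range(max_hops):
        H = A_norm @ H
        powers.append(H)
    return np.stack([powers[choose_hops(X[v], W_pi, max_hops)][v]
                     for v in range(X.shape[0])])
```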
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
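As a rough sketch of the entry above: binarize weights and activations with a sign function plus a scaling factor (an XNOR-Net-style scheme, assumed here rather than taken from BGN), so the dense products could in principle be replaced by bitwise XNOR/popcount kernels:

```python
import numpy as np

def binarize(W):
    """Sign binarization with a per-matrix scale: W ~ alpha * sign(W)."""
    alpha = np.abs(W).mean()
    return alpha * np.sign(W)

def binarized_gcn_layer(A_norm, H, W):
    """GCN layer computed with binary activations and binarized weights."""
    Hb = np.sign(H)                          # binary activations
    return np.maximum(A_norm @ Hb @ binarize(W), 0.0)
```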