Flattened Graph Convolutional Networks For Recommendation
- URL: http://arxiv.org/abs/2210.07769v1
- Date: Sun, 25 Sep 2022 12:53:50 GMT
- Title: Flattened Graph Convolutional Networks For Recommendation
- Authors: Yue Xu, Hao Chen, Zengde Deng, Yuanchen Bei, Feiran Huang
- Abstract summary: This paper proposes the flattened GCN (FlatGCN) model, which achieves superior performance with remarkably lower complexity than existing models.
First, we propose a simplified but powerful GCN architecture which aggregates the neighborhood information using one flattened GCN layer.
Second, we propose an informative neighbor-infomax sampling method to select the most valuable neighbors by measuring the correlation among neighboring nodes.
Third, we propose a layer ensemble technique which improves the expressiveness of the learned representations by assembling the layer-wise neighborhood representations at the final layer.
- Score: 18.198536511983452
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Convolutional Networks (GCNs) and their variants have achieved
significant performance on various recommendation tasks. However, many
existing GCN models tend to perform recursive aggregations over all related
nodes, which can incur a severe computational burden and hinder their
application to large-scale recommendation tasks. To this end, this paper
proposes the flattened GCN (FlatGCN) model, which achieves superior performance
with remarkably lower complexity than existing models. Our main
contribution is three-fold. First, we propose a simplified but powerful GCN
architecture which aggregates the neighborhood information using a single
flattened GCN layer instead of recursive aggregation. The aggregation step in FlatGCN is
parameter-free such that it can be pre-computed with parallel computation to
save memory and computational cost. Second, we propose an informative
neighbor-infomax sampling method to select the most valuable neighbors by
measuring the correlation among neighboring nodes based on a principled metric.
Third, we propose a layer ensemble technique which improves the expressiveness
of the learned representations by assembling the layer-wise neighborhood
representations at the final layer. Extensive experiments on three datasets
verify that our proposed model outperforms existing GCN models considerably and
yields up to a few orders of magnitude speedup in training efficiency.
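The three components above map naturally onto a short sketch. The following Python/NumPy fragment is an illustrative reconstruction under stated assumptions, not the authors' code: the function names are hypothetical, mean aggregation stands in for the paper's exact flattened aggregator, cosine similarity stands in for the "principled metric" behind neighbor-infomax sampling, and simple concatenation stands in for the layer-ensemble step.

```python
# Hedged sketch of the FlatGCN ideas from the abstract (assumptions noted above).
import numpy as np
import scipy.sparse as sp


def precompute_hop_aggregates(adj: sp.csr_matrix, emb: np.ndarray, n_hops: int = 2):
    """Parameter-free aggregation: one flattened pass per hop, computed offline.

    adj: (N, N) symmetrized user-item interaction graph.
    emb: (N, d) initial node embeddings.
    Returns [emb, hop-1 mean, hop-2 mean, ...] as per-hop neighborhood representations.
    """
    deg = np.asarray(adj.sum(axis=1)).ravel()
    deg[deg == 0.0] = 1.0                        # guard against isolated nodes
    row_norm = sp.diags(1.0 / deg) @ adj         # row-normalized adjacency = mean aggregator

    hops, h = [emb], emb
    for _ in range(n_hops):
        h = row_norm @ h                         # propagate one more hop, no trainable weights
        hops.append(h)
    return hops


def neighbor_score(center: np.ndarray, neighbor: np.ndarray) -> float:
    """Stand-in for the per-neighbor informativeness metric (cosine correlation here).

    In the actual pipeline, a score like this would prune the adjacency to the
    most valuable neighbors before the aggregates are precomputed.
    """
    den = np.linalg.norm(center) * np.linalg.norm(neighbor) + 1e-12
    return float(center @ neighbor) / den


def layer_ensemble(hops: list) -> np.ndarray:
    """Assemble the layer-wise neighborhood representations at the final layer."""
    return np.concatenate(hops, axis=1)          # (N, d * (n_hops + 1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_nodes, dim = 8, 4
    adj = sp.random(n_nodes, n_nodes, density=0.3, random_state=0, format="csr")
    adj = ((adj + adj.T) > 0).astype(np.float64)
    emb = rng.normal(size=(n_nodes, dim))

    final = layer_ensemble(precompute_hop_aggregates(adj, emb))
    scores = final[0] @ final[1:].T              # score node 0 against all other nodes
    print(scores.round(3))
```

Because the aggregation is parameter-free, the per-hop representations can be computed once (and in parallel over nodes) before training, leaving only the downstream predictor to be learned.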
Related papers
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Neighbor Enhanced Graph Convolutional Networks for Node Classification
and Recommendation [30.179717374489414]
We theoretically analyze the effect of neighbor quality on GCN models' performance.
We propose the Neighbor Enhanced Graph Convolutional Network (NEGCN) framework to boost the performance of existing GCN models.
arXiv Detail & Related papers (2022-03-30T06:54:28Z) - Non-Recursive Graph Convolutional Networks [33.459371861932574]
We propose a novel architecture named Non-Recursive Graph Convolutional Network (NRGCN) to improve both the training efficiency and the learning performance of GCNs.
NRGCN represents different hops of neighbors for each node based on inner-layer aggregation and layer-independent sampling.
In this way, each node can be directly represented by concatenating the information extracted independently from each hop of its neighbors.
arXiv Detail & Related papers (2021-05-09T08:12:18Z) - Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z) - MG-GCN: Fast and Effective Learning with Mix-grained Aggregators for
Training Large Graph Convolutional Networks [20.07942308916373]
Graph convolutional networks (GCNs) generate the embeddings of nodes by aggregating the information of their neighbors layer by layer.
The high computational and memory cost of GCNs makes training on large graphs infeasible.
A new model, named Mix-grained GCN (MG-GCN), achieves state-of-the-art performance in terms of accuracy, training speed, convergence speed, and memory cost.
arXiv Detail & Related papers (2020-11-17T14:51:57Z) - Mix Dimension in Poincar\'{e} Geometry for 3D Skeleton-based Action
Recognition [57.98278794950759]
Graph Convolutional Networks (GCNs) have already demonstrated their powerful ability to model irregular data.
We present a novel spatial-temporal GCN architecture defined via Poincaré geometry.
We evaluate our method on two of the current largest-scale 3D datasets.
arXiv Detail & Related papers (2020-07-30T18:23:18Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - Single-Layer Graph Convolutional Networks For Recommendation [17.3621098912528]
Graph Convolutional Networks (GCNs) have received significant attention and achieved state-of-the-art performance on recommendation tasks.
Existing GCN models tend to perform recursive aggregations over all related nodes, which incurs a severe computational burden.
We propose a single GCN layer that aggregates information from the neighbors filtered by DA similarity and then generates the node representations.
arXiv Detail & Related papers (2020-06-07T14:38:47Z) - A Robust Hierarchical Graph Convolutional Network Model for
Collaborative Filtering [0.0]
Graph Convolutional Network (GCN) has achieved great success and has been applied in various fields including recommender systems.
GCN still suffers from many issues, such as training difficulties, over-smoothing, and vulnerability to adversarial attacks.
arXiv Detail & Related papers (2020-04-30T12:50:39Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.