Graph4Rec: A Universal Toolkit with Graph Neural Networks for
Recommender Systems
- URL: http://arxiv.org/abs/2112.01035v1
- Date: Thu, 2 Dec 2021 07:56:13 GMT
- Title: Graph4Rec: A Universal Toolkit with Graph Neural Networks for
Recommender Systems
- Authors: Weibin Li, Mingkai He, Zhengjie Huang, Xianming Wang, Shikun Feng,
Weiyue Su, Yu Sun
- Abstract summary: Graph4Rec is a universal toolkit that unifies the paradigm for training GNN models.
We conduct a systematic and comprehensive experiment to compare the performance of different GNN models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, owing to their outstanding performance in graph
representation learning, graph neural network (GNN) techniques have gained
considerable interest in many real-world scenarios, such as recommender
systems and social networks. In recommender systems, the main challenge is to
learn effective user/item representations from their interactions. However,
many recent publications that apply GNNs to recommender systems cannot be
directly compared, due to differences in datasets and evaluation metrics.
Furthermore, many of them only provide a demo that runs experiments on small
datasets, which falls far short of what real-world recommender systems
require. To address this problem, we introduce Graph4Rec, a universal toolkit
that unifies GNN training into the following stages: graph input, random walk
generation, ego-graph generation, pair generation, and GNN selection. With
this training pipeline, one can easily set up a custom GNN model with a few
configurations. Besides, we develop a large-scale graph engine and a parameter
server to support distributed GNN training. We conduct a systematic and
comprehensive experiment comparing the performance of different GNN models
across several scenarios at different scales. Extensive experiments identify
the key components of GNNs. We also examine how sparse and dense parameters
affect GNN performance. Finally, we investigate methods including negative
sampling, ego-graph construction order, and warm-start strategies to find more
effective and efficient GNN practices for recommender systems. Our toolkit is
based on PGL (https://github.com/PaddlePaddle/PGL) and the code is
open-sourced at https://github.com/PaddlePaddle/PGL/tree/main/apps/Graph4Rec.
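The pipeline above is described only in prose, so here is a pure-Python toy
sketching how its five stages (graph input, random walk generation, ego-graph
generation, pair generation with negative sampling, and GNN selection) could
fit together in a configuration-driven way. Every config key, function, and
model name below is an illustrative assumption, not the actual Graph4Rec/PGL
API.

```python
# Toy, single-process sketch of a Graph4Rec-style pipeline (illustrative only;
# the real toolkit builds on PGL with a distributed graph engine).
import random
from typing import Dict, List, Tuple

config = {
    "walk_len": 5,        # length of each random walk
    "walks_per_node": 2,  # walks started from every node
    "window": 2,          # context window for positive pairs
    "neg_num": 3,         # uniformly sampled negatives per positive pair
    "ego_hops": 1,        # neighborhood depth for ego graphs
    "gnn": "lightgcn",    # hypothetical model name chosen via config
}

# Stage 1: graph input -- a toy adjacency list standing in for a user/item graph.
graph: Dict[int, List[int]] = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

# Stage 2: random walk generation.
def random_walk(start: int, length: int) -> List[int]:
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

# Stage 3: ego-graph generation (k-hop neighborhood around a center node).
def ego_graph(center: int, hops: int) -> List[int]:
    nodes, frontier = {center}, {center}
    for _ in range(hops):
        frontier = {nbr for n in frontier for nbr in graph[n]} - nodes
        nodes |= frontier
    return sorted(nodes)

# Stage 4: pair generation with naive uniform negative sampling.
def make_pairs(walks: List[List[int]]) -> List[Tuple[int, int, List[int]]]:
    pairs, candidates = [], list(graph)
    for walk in walks:
        for i, src in enumerate(walk):
            for dst in walk[max(0, i - config["window"]): i + config["window"] + 1]:
                if dst != src:
                    pairs.append((src, dst, random.sample(candidates, config["neg_num"])))
    return pairs

walks = [random_walk(n, config["walk_len"])
         for n in graph for _ in range(config["walks_per_node"])]
pairs = make_pairs(walks)
# Stage 5: GNN selection -- a real toolkit would dispatch on config["gnn"].
print(len(pairs), "pairs; ego(0) =", ego_graph(0, config["ego_hops"]), "; model:", config["gnn"])
```

In the real toolkit, the `config` dict above corresponds to the few
configuration entries a user edits, and the stages run on a distributed graph
engine with a parameter server rather than in a single process.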
Related papers
- Graph Ladling: Shockingly Simple Parallel GNN Training without
Intermediate Communication [100.51884192970499]
GNNs are a powerful family of neural networks for learning over graphs.
Scaling GNNs by deepening or widening suffers from prevalent issues of unhealthy gradients, over-smoothing, and information squashing.
We propose not to deepen or widen current GNNs, but instead present a data-centric perspective of model soups tailored for GNNs.
arXiv Detail & Related papers (2023-06-18T03:33:46Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- Attention-Based Recommendation On Graphs [9.558392439655012]
Graph Neural Networks (GNNs) have shown remarkable performance in different tasks.
In this study, we propose GARec as a model-based recommender system.
The presented method outperforms existing model-based approaches, non-graph neural networks, and graph neural networks on several MovieLens datasets.
arXiv Detail & Related papers (2022-01-04T21:02:02Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network; a toy sketch of this joint pruning idea appears after this list.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Boost then Convolve: Gradient Boosting Meets Graph Neural Networks [6.888700669980625]
We show that gradient boosted decision trees (GBDT) often outperform other machine learning methods when faced with heterogeneous data.
We propose a novel architecture that trains GBDT and GNN jointly to get the best of both worlds.
Our model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of the GNN; a sketch of this step also follows the list.
arXiv Detail & Related papers (2021-01-21T10:46:41Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Graph Random Neural Network for Semi-Supervised Learning on Graphs [36.218650686748546]
We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored.
Most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak-generalization when labeled nodes are scarce.
In this paper, we propose a simple yet effective framework, Graph Random Neural Networks (GRAND), to address these issues.
arXiv Detail & Related papers (2020-05-22T09:40:13Z)
- Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs [20.197085398581397]
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks.
We propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models.
SEG consistently improves the performance of well-known GNN models such as GCN, GAT and SGC across different datasets.
arXiv Detail & Related papers (2020-02-18T12:27:16Z)
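For the "Unified Lottery Ticket Hypothesis" entry above, a minimal sketch of
the joint-pruning idea it summarizes: the adjacency matrix and the weight
matrix are sparsified together, and the surviving pair plays the role of the
"graph lottery ticket". The magnitude-threshold masks below are a simplifying
assumption; the paper learns its masks during training.

```python
# Toy joint magnitude pruning in the spirit of UGS (illustrative assumption,
# not the paper's actual algorithm).
import numpy as np

rng = np.random.default_rng(0)
adj = rng.random((5, 5))           # stand-in weighted adjacency matrix
weights = rng.normal(size=(5, 8))  # stand-in GNN weight matrix

def magnitude_prune(mat: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly the `sparsity` fraction of smallest-magnitude entries."""
    k = int(mat.size * sparsity)
    if k == 0:
        return mat.copy()
    thresh = np.partition(np.abs(mat).ravel(), k - 1)[k - 1]
    return np.where(np.abs(mat) > thresh, mat, 0.0)

# A "graph lottery ticket" here is the pair (sparse graph, sparse weights).
sparse_adj = magnitude_prune(adj, sparsity=0.2)    # prune 20% of edges
sparse_w = magnitude_prune(weights, sparsity=0.5)  # prune 50% of weights
print((sparse_adj == 0).mean(), (sparse_w == 0).mean())
```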
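Similarly, for the "Boost then Convolve" entry, here is a toy of the step
where new trees fit the GNN's gradient updates. A frozen linear layer stands
in for the GNN and single decision trees stand in for boosting rounds, both
simplifying assumptions; the actual method also trains the GNN end-to-end.

```python
# Toy version of "new trees fit the gradient updates of the GNN".
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))  # raw heterogeneous node features
y = rng.normal(size=(100,))    # node-level regression targets
A = np.eye(100)                # toy "adjacency" (identity: no neighbor mixing)
W = rng.normal(size=(4, 1))    # frozen weights of the stand-in "GNN"

F = np.zeros_like(X)           # boosted approximation of the GNN's inputs
lr = 0.3
for _ in range(10):
    pred = (A @ F @ W).ravel()                     # stand-in GNN forward pass
    # Gradient of mean squared loss w.r.t. the input features F.
    grad_F = (2 / len(y)) * A.T @ np.outer(pred - y, W.ravel())
    # A new tree fits the negative gradient, i.e. the GNN's desired update.
    tree = DecisionTreeRegressor(max_depth=3).fit(X, -grad_F)
    F = F + lr * tree.predict(X)                   # boosting-style update
print("final MSE:", round(float(np.mean(((A @ F @ W).ravel() - y) ** 2)), 4))
```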