GRecX: An Efficient and Unified Benchmark for GNN-based Recommendation
- URL: http://arxiv.org/abs/2111.10342v1
- Date: Fri, 19 Nov 2021 17:45:46 GMT
- Title: GRecX: An Efficient and Unified Benchmark for GNN-based Recommendation
- Authors: Desheng Cai, Jun Hu, Shengsheng Qian, Quan Fang, Quan Zhao, Changsheng Xu
- Abstract summary: We present GRecX, an open-source framework for benchmarking GNN-based recommendation models.
GRecX consists of core libraries for building GNN-based recommendation benchmarks, as well as the implementations of popular GNN-based recommendation models.
We conduct experiments with GRecX, and the experimental results show that GRecX allows us to train and benchmark GNN-based recommendation baselines in an efficient and unified way.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present GRecX, an open-source TensorFlow framework for
benchmarking GNN-based recommendation models in an efficient and unified way.
GRecX consists of core libraries for building GNN-based recommendation
benchmarks, as well as the implementations of popular GNN-based recommendation
models. The core libraries provide essential components for building efficient
and unified benchmarks, including FastMetrics (efficient metrics computation
libraries), VectorSearch (efficient similarity search libraries for dense
vectors), BatchEval (efficient mini-batch evaluation libraries), and
DataManager (unified dataset management libraries). In particular, to provide a
unified benchmark for the fair comparison of different complex GNN-based
recommendation models, we design a new metric, GRMF-X, and integrate it into the
FastMetrics component. Built on the TensorFlow GNN library tf_geometric, GRecX
carefully implements a variety of popular GNN-based recommendation models. We
implement these baselines to reproduce the performance reported in the
literature, and our implementations are typically more efficient and easier to
use. In conclusion, GRecX enables users to train and benchmark GNN-based
recommendation baselines in an efficient and unified way, and our experiments
with GRecX confirm this. The source code of GRecX is available at
https://github.com/maenzhier/GRecX.
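The abstract names components for dense-vector similarity search (VectorSearch) and efficient metric computation (FastMetrics) without showing their APIs. As a minimal, hypothetical sketch of what such a recommendation-evaluation pipeline does (the function names and the NumPy-only implementation are assumptions for illustration, not GRecX code):

```python
import numpy as np

# Hypothetical sketch: score items for a user by inner product, retrieve the
# top-k, and evaluate with Recall@k. GRecX's actual VectorSearch/FastMetrics
# components are not shown in the abstract; this only illustrates the idea.

def top_k_items(user_emb, item_emb, k):
    """Score every item by inner product and return the top-k item indices."""
    scores = item_emb @ user_emb            # shape: (num_items,)
    # argpartition is an O(n) partial selection; then sort just the top-k.
    top = np.argpartition(-scores, k)[:k]
    return top[np.argsort(-scores[top])]

def recall_at_k(ranked, relevant, k):
    """Fraction of the user's relevant items that appear in the top-k list."""
    hits = len(set(ranked[:k]) & set(relevant))
    return hits / len(relevant)

rng = np.random.default_rng(0)
users = rng.normal(size=(4, 16))     # 4 toy user embeddings
items = rng.normal(size=(100, 16))   # 100 toy item embeddings

ranked = top_k_items(users[0], items, k=10)
print("top-10 items:", ranked.tolist())
```

In a real benchmark the evaluation would run in mini-batches over all users (the role the abstract assigns to BatchEval), but the per-user logic is the same.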
Related papers
- How Expressive are Graph Neural Networks in Recommendation?
Graph Neural Networks (GNNs) have demonstrated superior performance on various graph learning tasks, including recommendation.
Recent research has explored the expressiveness of GNNs in general, demonstrating that message passing GNNs are at most as powerful as the Weisfeiler-Lehman test.
We propose the topological closeness metric to evaluate GNNs' ability to capture the structural distance between nodes.
arXiv Detail & Related papers (2023-08-22T02:17:34Z)
- GRAN is superior to GraphRNN: node orderings, kernel- and graph embeddings-based metrics for graph generators
We study kernel-based metrics on distributions of graph invariants and manifold-based and kernel-based metrics in graph embedding space.
We compare GraphRNN and GRAN, two well-known generative models for graphs, and unveil the influence of node orderings.
arXiv Detail & Related papers (2023-07-13T12:07:39Z)
- Sheaf4Rec: Sheaf Neural Networks for Graph-based Recommender Systems
We propose a cutting-edge model inspired by category theory: Sheaf4Rec.
Unlike single vector representations, Sheaf Neural Networks and their corresponding Laplacians represent each node (and edge) using a vector space.
Our proposed model exhibits a noteworthy relative improvement of up to 8.53% on F1-Score@10 and an impressive increase of up to 11.29% on NDCG@10.
arXiv Detail & Related papers (2023-04-07T07:03:54Z)
- gSuite: A Flexible and Framework Independent Benchmark Suite for Graph Neural Network Inference on GPUs
We develop a benchmark suite that is framework independent, supports versatile computational models, and can be used with architectural simulators without additional effort.
gSuite enables performing detailed performance characterization studies on GNN Inference using both contemporary GPU profilers and architectural GPU simulators.
We use several evaluation metrics to rigorously measure the performance of GNN computation.
arXiv Detail & Related papers (2022-10-20T21:18:51Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search
We propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS.
Specifically, we construct a unified, expressive yet compact search space, covering 26,206 unique graph neural network (GNN) architectures.
Based on our proposed benchmark, the performance of GNN architectures can be directly obtained by a look-up table without any further computation.
arXiv Detail & Related papers (2022-06-18T10:17:15Z)
- Graph4Rec: A Universal Toolkit with Graph Neural Networks for Recommender Systems
Graph4Rec is a universal toolkit that unifies the training paradigm for GNN models.
We conduct a systematic and comprehensive experiment to compare the performance of different GNN models.
arXiv Detail & Related papers (2021-12-02T07:56:13Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Non-Local Graph Neural Networks
We propose a simple yet effective non-local aggregation framework with an efficient attention-guided sorting for GNNs.
We perform thorough experiments to analyze disassortative graph datasets and evaluate our non-local GNNs.
arXiv Detail & Related papers (2020-05-29T14:50:27Z)
- Benchmarking Graph Neural Networks
Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
For any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress.
The GitHub repository has reached 1,800 stars and 339 forks, which demonstrates the utility of the proposed open-source framework.
arXiv Detail & Related papers (2020-03-02T15:58:46Z)
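Several of the papers above report ranking metrics such as NDCG@10 and F1-Score@10. As a rough illustration of what such numbers measure (this is the standard binary-relevance NDCG@k, not any paper's actual evaluation code):

```python
import numpy as np

# Binary-relevance NDCG@k: each recommended item gains 1 if relevant, 0 if
# not, discounted by log2 of its rank position; normalized by the ideal DCG.

def ndcg_at_k(ranked, relevant, k):
    """NDCG@k for a ranked item list against a set of relevant items."""
    gains = [1.0 if item in relevant else 0.0 for item in ranked[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    # Ideal DCG: all relevant items placed at the top of the ranking.
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

print(ndcg_at_k([3, 1, 7], relevant={1, 7}, k=10))
```

A perfect ranking scores 1.0; relevant items pushed lower in the list are discounted logarithmically, which is why NDCG rewards getting the order right, not just the hit count.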
This list is automatically generated from the titles and abstracts of the papers in this site.