PyTorch Geometric High Order: A Unified Library for High Order Graph
Neural Network
- URL: http://arxiv.org/abs/2311.16670v1
- Date: Tue, 28 Nov 2023 10:34:48 GMT
- Title: PyTorch Geometric High Order: A Unified Library for High Order Graph
Neural Network
- Authors: Xiyuan Wang, Muhan Zhang
- Abstract summary: PyTorch Geometric High Order (PyGHO) is a library for High Order Graph Neural Networks (HOGNNs) that extends PyTorch Geometric (PyG).
We present an in-depth look at PyGHO and compare HOGNNs implemented with PyGHO against their official implementations on real-world tasks.
- Score: 32.537428858455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce PyTorch Geometric High Order (PyGHO), a library for High Order
Graph Neural Networks (HOGNNs) that extends PyTorch Geometric (PyG). Unlike
ordinary Message Passing Neural Networks (MPNNs) that exchange messages between
nodes, HOGNNs, encompassing subgraph GNNs and k-WL GNNs, encode node tuples, a
method previously lacking a standardized framework and often requiring complex
coding. PyGHO's main objective is to provide a unified and user-friendly
interface for various HOGNNs. It accomplishes this through streamlined data
structures for node tuples, comprehensive data processing utilities, and a
flexible suite of operators for high-order GNN methodologies. In this work, we
present an in-depth look at PyGHO and compare HOGNNs implemented with PyGHO
against their official implementations on real-world tasks. PyGHO achieves up to
$50\%$ acceleration and reduces the code needed for implementation by an order
of magnitude. Our library is available at
\url{https://github.com/GraphPKU/PygHO}.
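To make the node-tuple idea concrete, here is a minimal, self-contained sketch in plain PyTorch (not PyGHO's actual API; all names are illustrative): a dense 2-tuple representation in the style of subgraph GNNs and 2-WL GNNs, where a tensor of shape [n, n, d] stores one embedding per node pair and a layer aggregates along both tuple dimensions.

```python
import torch
import torch.nn as nn

class DenseTupleLayer(nn.Module):
    """Illustrative dense 2-tuple GNN layer (2-WL / subgraph-GNN style).

    X has shape [n, n, d]: one embedding per ordered node pair (i, j).
    The update aggregates along both tuple dimensions, the pattern that
    PyGHO's high-order operators generalize. Names here are made up; see
    the PyGHO repository for the library's real data structures and API.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(3 * dim, dim)

    def forward(self, X: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Messages into tuple (i, j) from tuples (k, j) with k a neighbor of i.
        m1 = torch.einsum("ik,kjd->ijd", adj, X)
        # Messages into tuple (i, j) from tuples (i, k) with k a neighbor of j.
        m2 = torch.einsum("jk,ikd->ijd", adj, X)
        return torch.relu(self.lin(torch.cat([X, m1, m2], dim=-1)))

n, d = 5, 16
adj = (torch.rand(n, n) < 0.4).float()   # toy dense adjacency
X = torch.randn(n, n, d)                 # one embedding per node pair
out = DenseTupleLayer(d)(X, adj)         # -> [n, n, d]
```

PyGHO's streamlined node-tuple data structures and operator suite exist precisely so that layers of this shape do not have to be hand-rolled for each paper.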
Related papers
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Higher-order Sparse Convolutions in Graph Neural Networks [17.647346486710514]
We introduce a new higher-order sparse convolution based on the Sobolev norm of graph signals.
S-SobGNN shows competitive performance in all applications as compared to several state-of-the-art methods.
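As a hedged illustration of the general idea (not the paper's exact S-SobGNN architecture), one can build a filter bank from powers of the shifted Laplacian L + εI, the operator behind the Sobolev norm of a graph signal; each filter is a sparse matrix product, so graph sparsity is preserved.

```python
import torch

def sobolev_filter_bank(X, edge_index, num_nodes, eps=1.0, max_power=3):
    """Hedged sketch of a Sobolev-style sparse filter bank.

    Not the paper's exact S-SobGNN; it only illustrates the ingredient the
    summary names: filters built from the shifted Laplacian L + eps*I, the
    operator behind the Sobolev norm of graph signals. Each power is one
    sparse matrix product, so graph sparsity is preserved throughout.
    """
    row = edge_index[0]
    deg = torch.zeros(num_nodes).scatter_add_(0, row, torch.ones(row.numel()))
    diag = torch.arange(num_nodes)
    # Sparse L + eps*I = (D + eps*I) - A in COO form.
    indices = torch.cat([torch.stack([diag, diag]), edge_index], dim=1)
    values = torch.cat([deg + eps, -torch.ones(row.numel())])
    S = torch.sparse_coo_tensor(indices, values, (num_nodes, num_nodes)).coalesce()
    feats, H = [], X
    for _ in range(max_power):
        H = torch.sparse.mm(S, H)   # apply (L + eps*I) once more
        feats.append(H)             # k-th entry holds (L + eps*I)^k X
    return feats
```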
arXiv Detail & Related papers (2023-02-21T08:08:18Z)
- Search to Capture Long-range Dependency with Stacking GNNs for Graph Classification [41.84399177525008]
Shallow GNNs are more common due to the well-known over-smoothing problem facing deeper GNNs.
We propose a novel approach with the help of neural architecture search (NAS), dubbed LRGNN (Long-Range Graph Neural Networks).
arXiv Detail & Related papers (2023-02-17T03:40:17Z)
- NDGGNET-A Node Independent Gate based Graph Neural Networks [6.155450481110693]
For nodes with sparse connectivity, it is difficult to obtain enough information through a single GNN layer.
In this work, we define a novel framework that allows a standard GNN model to accommodate more layers.
Experimental results show that our proposed model can effectively increase the model depth and perform well on several datasets.
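A minimal sketch of the general gating idea (the exact NDGGNET design may differ): a per-node scalar gate blends the fresh neighborhood message with the previous state, so added layers can fall back toward identity instead of over-smoothing.

```python
import torch
import torch.nn as nn

class GatedGNNLayer(nn.Module):
    """Hedged sketch of a node-wise gate for deep GNN stacks.

    Not NDGGNET's exact design; it shows the mechanism the summary points
    at: a per-node gate g in (0, 1) blends the fresh neighborhood message
    with the previous state, so extra layers can fall back toward identity
    for well-connected nodes while sparse nodes keep accumulating signal.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        agg = torch.relu(self.msg(adj @ h))   # neighborhood aggregation
        g = torch.sigmoid(self.gate(h))       # per-node gate in (0, 1)
        return g * agg + (1.0 - g) * h        # gated residual update
```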
arXiv Detail & Related papers (2022-05-11T08:51:04Z)
- PyTorch Geometric Signed Directed: A Software Package on Graph Neural Networks for Signed and Directed Graphs [20.832917829426098]
PyTorch Geometric Signed Directed (PyGSD) is a software package for signed and directed networks.
PyGSD consists of easy-to-use GNN models, synthetic and real-world data, as well as task-specific evaluation metrics and loss functions.
As an extension library for PyG, our proposed software is maintained with open-source releases, detailed documentation, continuous integration, unit tests and code coverage checks.
arXiv Detail & Related papers (2022-02-22T10:25:59Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose the Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
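A minimal sketch of the core transformation (simplified from the paper): aggregate features hop by hop and stack them into one sequence per node. Here `adj_norm` is assumed to be a normalized adjacency matrix, dense for brevity.

```python
import torch

def neighbor2seq(X, adj_norm, num_hops=3):
    """Hedged sketch of the Neighbor2Seq transformation (simplified).

    Aggregates features hop by hop and stacks them into one sequence per
    node: element k of the sequence summarizes the k-hop neighborhood.
    adj_norm is assumed to be a (dense, for brevity) normalized adjacency.
    Returns a tensor of shape [num_nodes, num_hops + 1, d].
    """
    seq, H = [X], X
    for _ in range(num_hops):
        H = adj_norm @ H            # one more hop of aggregation
        seq.append(H)
    return torch.stack(seq, dim=1)  # per-node sequence over hops
```

Because this transformation runs once as preprocessing, training reduces to mini-batching per-node sequences through an ordinary sequence model, which is what lets the approach scale to graphs with over a billion edges.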
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
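A hedged sketch of the quantization step only (the full method also relies on a low-rank convolution matrix, omitted here): each node embedding is snapped to its nearest codebook vector with a straight-through gradient, so messages from out-of-batch neighbors can be approximated by a small set of codes.

```python
import torch

def vector_quantize(H, codebook):
    """Hedged sketch of the VQ step only (VQ-GNN also pairs this with a
    low-rank convolution matrix, omitted here).

    Each node embedding is snapped to its nearest codebook vector with a
    straight-through gradient, so messages from out-of-batch neighbors can
    be approximated by a small set of codes instead of the full neighborhood.
    """
    d2 = torch.cdist(H, codebook) ** 2      # [num_nodes, num_codes] distances
    assign = d2.argmin(dim=1)               # nearest-code index per node
    Hq = codebook[assign]                   # quantized embeddings
    # Straight-through estimator: forward uses codes, gradients flow to H.
    return H + (Hq - H).detach(), assign
```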
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Training Graph Neural Networks with 1000 Layers [133.84813995275988]
We study reversible connections, group convolutions, weight tying, and equilibrium models to advance the memory and parameter efficiency of GNNs.
To the best of our knowledge, RevGNN-Deep is the deepest GNN in the literature by one order of magnitude.
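A simplified sketch of the reversible-connection idea with two channel groups (RevGNN's grouped reversible connections differ in detail): because the input is exactly recoverable from the output, intermediate activations need not be stored, decoupling memory from depth.

```python
import torch
import torch.nn as nn

class ReversibleGNNBlock(nn.Module):
    """Simplified sketch of a reversible GNN block with two channel groups.

    Illustrative only (RevGNN's grouped reversible connections differ in
    detail). Each half is updated from the other, so the input is exactly
    recoverable from the output; activations need not be stored, which
    decouples memory from depth. Assumes an even channel dimension.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Linear(dim // 2, dim // 2)
        self.g = nn.Linear(dim // 2, dim // 2)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=-1)
        y1 = x1 + torch.relu(self.f(adj @ x2))  # update half 1 from half 2
        y2 = x2 + torch.relu(self.g(adj @ y1))  # update half 2 from half 1
        # Inversion: x2 = y2 - relu(g(adj @ y1)); x1 = y1 - relu(f(adj @ x2)).
        return torch.cat([y1, y2], dim=-1)
```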
arXiv Detail & Related papers (2021-06-14T15:03:00Z)
- Rethinking Graph Neural Network Search from Message-passing [120.62373472087651]
This paper proposes Graph Neural Architecture Search (GNAS) with novel-designed search space.
We design Graph Neural Architecture Paradigm (GAP) with tree-topology computation procedure and two types of fine-grained atomic operations.
Experiments show that our GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
arXiv Detail & Related papers (2021-03-26T06:10:41Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
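A hedged sketch of the final hard-pruning step in a UGS-style pipeline (mask training itself is omitted, and all names are illustrative): learned masks over the adjacency matrix and the weights are thresholded jointly, yielding the sub-dataset/sub-network pair that defines a graph lottery ticket.

```python
import torch

def harden_lottery_masks(adj, weight, adj_mask, w_mask, sparsity=0.5):
    """Hedged sketch of the hard-pruning step in a UGS-style pipeline.

    Mask training is omitted; this only shows the joint thresholding the
    summary describes: keep the top-(1 - sparsity) entries of the learned
    adjacency mask and weight mask, yielding the sub-dataset / sub-network
    pair (a "graph lottery ticket"). All names are illustrative.
    """
    def prune(mask, frac):
        k = int(mask.numel() * frac)
        if k == 0:
            return torch.ones_like(mask)
        thresh = mask.flatten().kthvalue(k).values
        return (mask > thresh).float()
    return adj * prune(adj_mask, sparsity), weight * prune(w_mask, sparsity)
```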
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Graph Sequential Network for Reasoning over Sequences [38.766982479196926]
We consider a novel case where reasoning is needed over graphs built from sequences.
Existing GNN models fulfill this goal by first summarizing the node sequences into fixed-dimensional vectors, then applying a GNN on these vectors.
We propose a new type of GNN called Graph Sequential Network (GSN), which features a new message passing algorithm based on co-attention between a node and each of its neighbors.
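A minimal sketch of co-attention-based message passing over sequences (a simplification of GSN's algorithm): each token of the receiving node's sequence attends over a neighbor's tokens, so no sequence is collapsed into a fixed-dimensional vector before messages are exchanged.

```python
import torch

def coattention_message(q_seq, k_seq):
    """Hedged sketch of co-attention message passing over node sequences.

    A simplification of GSN's algorithm: q_seq ([Lq, d]) is the receiving
    node's token sequence, k_seq ([Lk, d]) one neighbor's. Every query token
    attends over the neighbor's tokens, so no sequence is collapsed into a
    fixed-dimensional vector before messages are exchanged.
    """
    scores = q_seq @ k_seq.T / q_seq.shape[-1] ** 0.5  # [Lq, Lk] affinities
    attn = torch.softmax(scores, dim=-1)               # row-wise attention
    return attn @ k_seq                                # [Lq, d] per-token message
```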
arXiv Detail & Related papers (2020-04-04T19:18:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.