Long Range Graph Benchmark
- URL: http://arxiv.org/abs/2206.08164v4
- Date: Tue, 28 Nov 2023 06:17:30 GMT
- Title: Long Range Graph Benchmark
- Authors: Vijay Prakash Dwivedi, Ladislav Rampášek, Mikhail Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, Dominique Beaini
- Abstract summary: MP-GNNs that rely only on 1-hop message passing often fare better than Transformer-based models on several existing graph benchmarks.
We benchmark both baseline GNNs and Graph Transformer networks to verify that the models which capture long-range dependencies perform significantly better on these tasks.
- Score: 32.317725340138104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) that are based on the message passing (MP)
paradigm generally exchange information between 1-hop neighbors to build node
representations at each layer. In principle, such networks are not able to
capture long-range interactions (LRI) that may be desired or necessary for
learning a given task on graphs. Recently, there has been an increasing
interest in the development of Transformer-based methods for graphs that can
consider full node connectivity beyond the original sparse structure, thus
enabling the modeling of LRI. However, MP-GNNs that simply rely on 1-hop
message passing often fare better in several existing graph benchmarks when
combined with positional feature representations, among other innovations,
hence limiting the perceived utility and ranking of Transformer-like
architectures. Here, we present the Long Range Graph Benchmark (LRGB) with 5
graph learning datasets: PascalVOC-SP, COCO-SP, PCQM-Contact, Peptides-func and
Peptides-struct that arguably require LRI reasoning to achieve strong
performance in a given task. We benchmark both baseline GNNs and Graph
Transformer networks to verify that the models which capture long-range
dependencies perform significantly better on these tasks. Therefore, these
datasets are suitable for benchmarking and exploration of MP-GNNs and Graph
Transformer architectures that are intended to capture LRI.
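To make the 1-hop constraint concrete, below is a minimal sketch of a single message-passing layer in plain PyTorch; the mean aggregation, ReLU update, and shapes are illustrative choices, not the specific baselines benchmarked in the paper. Stacking L such layers gives every node a receptive field of exactly L hops, which is the bottleneck LRGB is designed to expose.

```python
import torch

def mp_layer(x, edge_index, lin):
    """One round of 1-hop message passing with mean aggregation.

    x:          [num_nodes, dim] node features
    edge_index: [2, num_edges]   directed edges as (src, dst) rows
    lin:        a torch.nn.Linear mixing the aggregated messages
    """
    src, dst = edge_index
    agg = torch.zeros_like(x).index_add_(0, dst, x[src])   # sum neighbor features
    deg = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones(dst.numel()))
    return torch.relu(lin(agg / deg.clamp(min=1).unsqueeze(-1)))

lin = torch.nn.Linear(16, 16)
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # a 5-node path graph
for _ in range(3):
    x = mp_layer(x, edge_index, lin)  # after 3 layers, information has moved <= 3 hops
```

As a side note, recent PyTorch Geometric releases ship the five datasets through a `LRGBDataset` loader, though that convenience postdates the paper itself.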
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
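As a rough illustration of context sampling via random walks, the sketch below draws a uniform walk over an adjacency list; the walk length, the absence of restarts or bias, and the `random_walk_context` name are assumptions for illustration, not details taken from the GSPT paper.

```python
import random

def random_walk_context(adj, start, walk_len, rng=random):
    """Sample a node context as a simple uniform random walk.

    adj: dict mapping node -> list of neighbor nodes
    """
    walk = [start]
    for _ in range(walk_len - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:               # dead end: stop the walk early
            break
        walk.append(rng.choice(nbrs))
    return walk

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a 4-node path graph
print(random_walk_context(adj, start=0, walk_len=5))
```

Each sampled walk yields a node sequence that a standard Transformer can consume like a sentence, which is what lets a pure Transformer be pretrained on graph data.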
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- BLIS-Net: Classifying and Analyzing Signals on Graphs [20.345611294709244]
Graph neural networks (GNNs) have emerged as a powerful tool for tasks such as node classification and graph classification.
We introduce the BLIS-Net (Bi-Lipschitz Scattering Net), a novel GNN that builds on the previously introduced geometric scattering transform.
We show that BLIS-Net achieves superior performance on both synthetic and real-world data sets based on traffic flow and fMRI data.
arXiv Detail & Related papers (2023-10-26T17:03:14Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
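As a generic sketch of why sampling helps, the snippet below attends over k random nodes per query instead of all N, dropping the cost from O(N^2) to O(Nk); note that the paper learns which nodes to sample and predicts node-conditioned filter weights, neither of which this uniform-sampling stand-in attempts.

```python
import torch

def sampled_attention(x, k=8):
    """Attention over a random subset of k nodes per query node.

    A generic O(N*k) stand-in for O(N^2) fully-connected attention.
    """
    n, d = x.shape
    idx = torch.randint(n, (n, k))                 # k sampled nodes per query
    keys = x[idx]                                  # [n, k, d]
    scores = (keys @ x.unsqueeze(-1)).squeeze(-1)  # dot products, [n, k]
    attn = torch.softmax(scores / d ** 0.5, dim=-1)
    return (attn.unsqueeze(-1) * keys).sum(1)      # weighted sum, [n, d]

out = sampled_attention(torch.randn(100, 32), k=8)  # 100*8 instead of 100*100 pairs
```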
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
The search space comprises node and relation learning operations that can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z)
- Representing Long-Range Context for Graph Neural Networks with Global Attention [37.212747564546156]
We propose the use of Transformer-based self-attention to learn long-range pairwise relationships.
Our method, which we call GraphTrans, applies a permutation-invariant Transformer module after a standard GNN module.
Our results suggest that purely-learning-based approaches without graph structure may be suitable for learning high-level, long-range relationships on graphs.
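A hedged sketch of that recipe: local message-passing layers followed by a Transformer encoder over the node set with no positional encoding, so the attention stage stays permutation-invariant. Layer counts, widths, and the mean readout below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GraphTransSketch(nn.Module):
    """GNN stack for local structure, then structure-agnostic attention."""

    def __init__(self, dim=64, gnn_layers=3, heads=4):
        super().__init__()
        self.lins = nn.ModuleList(nn.Linear(dim, dim) for _ in range(gnn_layers))
        enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=2)

    def forward(self, x, edge_index):
        src, dst = edge_index
        for lin in self.lins:                 # local 1-hop message passing
            agg = torch.zeros_like(x).index_add_(0, dst, x[src])
            x = torch.relu(lin(x + agg))
        h = self.transformer(x.unsqueeze(0))  # all-pairs attention, no PE
        return h.squeeze(0).mean(0)           # permutation-invariant readout

model = GraphTransSketch()
g = model(torch.randn(6, 64), torch.tensor([[0, 1, 2], [1, 2, 3]]))  # graph embedding
```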
arXiv Detail & Related papers (2022-01-21T18:16:21Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, like in Transformers.
We observe performance increases on molecular datasets, from 2.87% up to 64.14%, when using learnable PE for both GNN classes.
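One standard instantiation of input-layer PE is Laplacian eigenvector encodings, sketched below in NumPy; the paper's contribution goes further by making positional representations learnable and updating them across layers, which this static version omits.

```python
import numpy as np

def lap_pe(adj, k):
    """k nontrivial eigenvectors of the symmetric normalized Laplacian."""
    deg = adj.sum(1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(lap)       # eigenvalues in ascending order
    return vecs[:, 1:k + 1]             # skip the trivial constant eigenvector

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
x = np.random.randn(3, 8)                                       # node features
x_with_pe = np.concatenate([x, lap_pe(adj, k=2)], axis=1)       # inject PE at input
```

Sign ambiguity of the eigenvectors is a known wrinkle here; training typically randomizes the signs so the model cannot overfit to them.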
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Hierarchical graph neural nets can capture long-range interactions [8.067880298298185]
We study hierarchical message passing models that leverage a multi-resolution representation of a given graph.
This facilitates learning of features that span large receptive fields without loss of local information.
We introduce Hierarchical Graph Net (HGNet), which guarantees, for any two connected nodes, the existence of message-passing paths of at most logarithmic length.
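The logarithmic-path guarantee can be seen from the hierarchy alone: merge node pairs into super-nodes level by level, and any two nodes reach a shared ancestor within about log2(n) levels, so a message can travel up and back down in O(log n) hops regardless of distance in the flat graph. The toy sketch below pairs nodes by index; HGNet itself uses proper graph coarsening, so treat this purely as a shape-of-the-argument illustration.

```python
def coarsen_levels(n):
    """Build parent pointers for a binary merge hierarchy over n nodes."""
    levels, width = [], n
    while width > 1:
        parents = [i // 2 for i in range(width)]   # merge adjacent pairs
        levels.append(parents)
        width = max(parents) + 1
    return levels   # levels[l][v] = parent of node v when moving up a level

print(coarsen_levels(8))   # 3 levels for 8 nodes: up + down <= 2 * log2(8) hops
```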
arXiv Detail & Related papers (2021-07-15T16:24:22Z)
- Analyzing the Performance of Graph Neural Networks with Pipe Parallelism [2.269587850533721]
We focus on Graph Neural Networks (GNNs) that have found great success in tasks such as node or edge classification and link prediction.
New approaches for processing larger networks are needed to advance graph techniques.
We study how GNNs could be parallelized using existing tools and frameworks that are known to be successful in the deep learning community.
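A minimal sketch of the pipeline idea: split the layer stack into stages on separate devices and stream micro-batches through them so the stages can work concurrently (CUDA's asynchronous execution lets successive micro-batches overlap across devices). The two-stage split, micro-batch count, and CPU fallback below are illustrative; the paper evaluates existing pipeline-parallel tooling rather than a hand-rolled loop like this one.

```python
import torch
import torch.nn as nn

multi_gpu = torch.cuda.device_count() > 1
dev0 = torch.device("cuda:0" if multi_gpu else "cpu")
dev1 = torch.device("cuda:1" if multi_gpu else "cpu")

stage1 = nn.Sequential(nn.Linear(32, 32), nn.ReLU()).to(dev0)  # first half of the net
stage2 = nn.Sequential(nn.Linear(32, 32), nn.ReLU()).to(dev1)  # second half

x = torch.randn(1024, 32)
outs = []
for micro in x.chunk(4):              # stream 4 micro-batches through the pipe
    h = stage1(micro.to(dev0))
    outs.append(stage2(h.to(dev1)))   # stage2 runs while stage1 takes the next chunk
y = torch.cat([o.cpu() for o in outs])
```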
arXiv Detail & Related papers (2020-12-20T04:20:38Z)
- Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
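For context, classical geometric scattering builds band-pass filters from dyadic powers of a lazy random-walk operator P = (I + A D^{-1}) / 2, with the scale-j wavelet Psi_j = P^(2^(j-1)) - P^(2^j); the LEGS relaxation makes the choice and combination of scales learnable. The sketch below computes only the fixed dyadic wavelets, deliberately leaving the learnable part out.

```python
import numpy as np

def diffusion_wavelets(adj, J=3):
    """Fixed dyadic diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j)."""
    deg = adj.sum(0)
    p = 0.5 * (np.eye(len(adj)) + adj / np.where(deg > 0, deg, 1.0))  # lazy walk
    powers, t = {1: p}, 1
    while t < 2 ** J:
        powers[2 * t] = powers[t] @ powers[t]   # repeated squaring of P
        t *= 2
    return [powers[2 ** (j - 1)] - powers[2 ** j] for j in range(1, J + 1)]

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
wavelets = diffusion_wavelets(adj)   # each Psi_j is a band-pass filter matrix
```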
arXiv Detail & Related papers (2020-10-06T01:20:27Z)
- Hierarchical Message-Passing Graph Neural Networks [12.207978823927386]
We propose a novel Hierarchical Message-passing Graph Neural Networks framework.
The key idea is to generate a hierarchical structure that re-organises all nodes in a flat graph into multi-level super graphs.
We present the first model to implement this framework, termed Hierarchical Community-aware Graph Neural Network (HC-GNN).
arXiv Detail & Related papers (2020-09-08T13:11:07Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
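The equivariance and stability claims concern polynomial graph filters H(S)x = sum_k h_k S^k x, built purely from powers of the graph shift operator S; relabeling nodes conjugates every power of S, so the output relabels the same way. The sketch below verifies that numerically with made-up filter taps and a random symmetric shift operator.

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply the polynomial filter H(S) x = sum_k h[k] * S^k x."""
    out, Sk = np.zeros_like(x), x
    for hk in h:
        out = out + hk * Sk   # accumulate h_k * S^k x
        Sk = S @ Sk           # next power of the shift operator
    return out

# Permutation equivariance: H(P S P^T)(P x) == P (H(S) x) for any permutation P.
n = 5
S = np.random.rand(n, n); S = S + S.T            # random symmetric shift operator
x, h = np.random.rand(n), [0.5, 0.3, 0.2]        # signal and filter taps
P = np.eye(n)[np.random.permutation(n)]          # random permutation matrix
assert np.allclose(graph_filter(P @ S @ P.T, P @ x, h), P @ graph_filter(S, x, h))
```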
arXiv Detail & Related papers (2020-03-08T13:02:15Z)