Rethinking Node-wise Propagation for Large-scale Graph Learning
- URL: http://arxiv.org/abs/2402.06128v1
- Date: Fri, 9 Feb 2024 01:19:47 GMT
- Title: Rethinking Node-wise Propagation for Large-scale Graph Learning
- Authors: Xunkai Li, Jingyuan Ma, Zhengyu Wu, Daohan Su, Wentao Zhang, Rong-Hua
Li, Guoren Wang
- Abstract summary: Adaptive Topology-aware Propagation (ATP) reduces potential high-bias propagation and extracts structural patterns of each node in a scalable manner to improve running efficiency and predictive performance. ATP has proven to be efficient in improving the performance of prevalent scalable GNNs for semi-supervised node classification.
- Score: 42.29535580297932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scalable graph neural networks (GNNs) have emerged as a promising technique,
which exhibits superior predictive performance and high running efficiency
across numerous large-scale graph-based web applications. However, (i) Most
scalable GNNs tend to treat all nodes in graphs with the same propagation
rules, neglecting their topological uniqueness; (ii) Existing node-wise
propagation optimization strategies are insufficient on web-scale graphs with
intricate topology, where a full portrayal of nodes' local properties is
required. Intuitively, different nodes in web-scale graphs play distinct
topological roles, so propagating them indiscriminately or neglecting their
local contexts may compromise the quality of node representations. This
intricate topology in web-scale graphs cannot be matched by small-scale
scenarios. To address the above issues, we propose \textbf{A}daptive
\textbf{T}opology-aware \textbf{P}ropagation (ATP), which reduces potential
high-bias propagation and extracts structural patterns of each node in a
scalable manner to improve running efficiency and predictive performance.
Remarkably, ATP is crafted to be a plug-and-play node-wise propagation
optimization strategy, allowing for offline execution independent of the graph
learning process from a new perspective. This approach can therefore be
seamlessly integrated into most scalable GNNs while remaining orthogonal to
existing node-wise propagation optimization strategies. Extensive experiments
on 12 datasets, including the most representative large-scale ogbn-papers100M,
have demonstrated the effectiveness of ATP. Specifically, ATP has proven to be
efficient in improving the performance of prevalent scalable GNNs for
semi-supervised node classification while addressing redundant computational
costs.
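The decoupled, precompute-then-train design described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual ATP algorithm: it assumes a symmetrically normalized adjacency, and the per-node degree-based damping weight below is a toy stand-in for the topology-aware propagation rule.

```python
import numpy as np

def adaptive_propagate(adj, x, num_hops=2):
    """Offline, node-wise propagation: each hop mixes a node's own signal
    with its symmetrically normalized neighborhood average, damped by a
    per-node weight derived from degree (a toy topology-aware rule)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1.0))
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    alpha = deg / (deg + 1.0)  # high-degree hubs retain more of their own signal
    h = x.astype(float).copy()
    for _ in range(num_hops):
        h = alpha[:, None] * h + (1.0 - alpha[:, None]) * (a_norm @ h)
    return h
```

Because the features are computed offline, independent of training, downstream learning can be a plain feature-based classifier, which is what makes this family of methods scale.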
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
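A rough sketch of the decomposition idea described above; the exact parameterization is not given in this summary, so the node selection, mixing weights, and linear transform below are all hypothetical.

```python
import numpy as np

def sparse_decomposition(x, selected, weights, proj):
    """Approximate every node's representation as a weighted sum of
    linearly transformed features from a small set of selected nodes,
    so inference can skip full neighborhood aggregation."""
    z = x[selected] @ proj   # transform only the k selected nodes: (k, d_out)
    return weights @ z       # per-node mixing coefficients: (n, k) @ (k, d_out)
```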
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
An innovative node relative entropy is used to measure mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
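The summary does not define the node relative entropy precisely; as an assumption, one common instantiation is a KL divergence between two nodes' normalized feature distributions, sketched here.

```python
import numpy as np

def node_relative_entropy(p, q, eps=1e-12):
    """KL divergence D(p || q) between two nodes' normalized feature
    distributions (a hypothetical stand-in for the paper's measure)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```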
arXiv Detail & Related papers (2023-12-15T11:30:18Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
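A minimal NumPy sketch of the Gumbel-Softmax relaxation itself; the kernelized, linear-complexity variant that NodeFormer builds on top of it is more involved and is not reproduced here.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable relaxation of categorical sampling: perturb the
    logits with Gumbel noise, then apply a temperature-scaled softmax."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(low=1e-20, high=1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                    # Gumbel(0, 1) noise
    y = (np.asarray(logits, float) + g) / tau
    y -= y.max(axis=-1, keepdims=True)         # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)
```

Lower temperatures `tau` push the output closer to a one-hot sample while keeping it differentiable, which is what lets a discrete edge-sampling step sit inside end-to-end training.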
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework to make the homogeneous GNNs have adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
Most GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.