NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification
- URL: http://arxiv.org/abs/2306.08385v1
- Date: Wed, 14 Jun 2023 09:21:15 GMT
- Title: NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification
- Authors: Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, Junchi Yan
- Abstract summary: We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
- Score: 70.51126383984555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks have been extensively studied for learning with
inter-connected data. Despite this, recent evidence has revealed GNNs'
deficiencies related to over-squashing, heterophily, handling long-range
dependencies, edge incompleteness, and, in particular, the absence of graphs
altogether. While a plausible solution is to learn new adaptive topology for
message passing, issues concerning quadratic complexity hinder simultaneous
guarantees for scalability and precision in large networks. In this paper, we
introduce a novel all-pair message passing scheme for efficiently propagating
node signals between arbitrary nodes, as an important building block for a
pioneering Transformer-style network for node classification on large graphs,
dubbed \textsc{NodeFormer}. Specifically, the efficient computation is
enabled by a kernelized Gumbel-Softmax operator that reduces the algorithmic
complexity to linearity w.r.t. node numbers for learning latent graph
structures from large, potentially fully-connected graphs in a differentiable
manner. We also provide accompanying theory as justification for our design.
Extensive experiments demonstrate the promising efficacy of the method in
various tasks including node classification on graphs (with up to 2M nodes) and
graph-enhanced applications (e.g., image classification) where input graphs are
missing.
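To make the core operator concrete, below is a minimal NumPy sketch of linear-complexity all-pair message passing using a kernelized softmax with positive random features, in the spirit of the kernelized Gumbel-Softmax operator described above. The function names, the way Gumbel noise is injected, and all hyperparameters are illustrative assumptions rather than the paper's reference implementation.

```python
import numpy as np

def feature_map(x, proj):
    # Positive random features: phi(q) . phi(k) approximates exp(q . k),
    # the unnormalized softmax kernel, which enables linear-time attention.
    m = proj.shape[1]
    return np.exp(x @ proj - 0.5 * np.sum(x ** 2, axis=-1, keepdims=True)) / np.sqrt(m)

def all_pair_message_passing(q, k, v, num_features=64, tau=1.0, seed=0):
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(q.shape[1], num_features))
    # Gumbel perturbation of the queries stands in for sampling edges from a
    # Gumbel-Softmax-relaxed latent graph (a deliberate simplification here).
    g = rng.gumbel(size=q.shape)
    q_f = feature_map((q + g) / tau, proj)      # (N, m)
    k_f = feature_map(k / tau, proj)            # (N, m)
    kv = k_f.T @ v                              # (m, d): costs O(N), not O(N^2)
    denom = q_f @ k_f.sum(axis=0)               # per-node softmax normalizer, (N,)
    return (q_f @ kv) / denom[:, None]          # the N x N matrix is never formed

# Toy usage: 2,000 nodes, 16-dim signals, all-pair propagation in linear time.
rng = np.random.default_rng(1)
q, k, v = (rng.normal(size=(2000, 16)) for _ in range(3))
print(all_pair_message_passing(q, k, v).shape)  # (2000, 16)
```

The design point is that the (N x m) feature maps let the attention-weighted aggregation be computed as two small matrix products, so the cost grows linearly with the number of nodes.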
Related papers
- Learning to Approximate Adaptive Kernel Convolution on Graphs [4.434835769977399]
We propose a diffusion learning framework, where the range of feature aggregation is controlled by the scale of a diffusion kernel.
Our model is tested on standard node-wise classification benchmarks and achieves state-of-the-art performance.
It is also validated on real-world brain network data for graph classification, demonstrating its practicality for Alzheimer's disease classification.
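For reference, the exact heat-kernel form of this idea fits in a few lines; the sketch below computes the diffusion aggregation exp(-tL)X directly, whereas the paper's contribution is learning to approximate such a kernel efficiently. The helper name and the choice of the combinatorial Laplacian are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm

def heat_kernel_aggregate(adj, features, t):
    # L = D - A is the combinatorial Laplacian; exp(-t L) is the heat
    # (diffusion) kernel. Small t keeps aggregation local; large t widens it.
    lap = np.diag(adj.sum(axis=1)) - adj
    return expm(-t * lap) @ features
```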
arXiv Detail & Related papers (2024-01-22T10:57:11Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
A novel node relative entropy is used to measure the mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
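One plausible (hypothetical) instantiation of such a pairwise score is a KL divergence between softmax-normalized node feature vectors, sketched below; GraphRARE's exact definition of node relative entropy may differ.

```python
import numpy as np

def node_relative_entropy(x_i, x_j, eps=1e-12):
    # Softmax-normalize each feature vector into a distribution, then take
    # KL(p_i || p_j) as a pairwise divergence score (hypothetical form).
    p = np.exp(x_i - x_i.max()); p /= p.sum()
    q = np.exp(x_j - x_j.max()); q /= q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```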
arXiv Detail & Related papers (2023-12-15T11:30:18Z) - Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z) - Addressing Heterophily in Node Classification with Graph Echo State
Networks [11.52174067809364]
We address the challenges of heterophilic graphs with Graph Echo State Network (GESN) for node classification.
GESN is a reservoir computing model for graphs, where node embeddings are computed by an untrained message-passing function.
Our experiments show that reservoir models achieve accuracy better than or comparable to most fully trained deep models.
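A compact way to see what an untrained message-passing function means is the reservoir iteration below: fixed random input and recurrent weights, rescaled for stability, are iterated to produce node embeddings, and only a separate linear readout would be trained. The sizes and the spectral-radius rescaling are illustrative choices, not the paper's configuration.

```python
import numpy as np

def gesn_embeddings(adj, features, hidden=32, iters=30, rho=0.9, seed=0):
    rng = np.random.default_rng(seed)
    n, d = features.shape
    w_in = rng.uniform(-1, 1, size=(d, hidden))
    w = rng.uniform(-1, 1, size=(hidden, hidden))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))  # contractive map: echo-state property
    h = np.zeros((n, hidden))
    for _ in range(iters):
        # Untrained message passing: neighbor states flow through fixed
        # random weights; only a downstream linear readout would be trained.
        h = np.tanh(features @ w_in + adj @ h @ w)
    return h
```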
arXiv Detail & Related papers (2023-05-14T19:42:31Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
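To illustrate the unrolling idea, the sketch below runs a few ISTA-style proximal gradient steps against a toy polynomial mixture H(A) = A + A^2; in an actual GDN the step sizes and thresholds would be learnable per-layer parameters. The mixture model and all constants here are assumptions for illustration.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_gdn(observed, num_layers=5, step=0.1, lam=0.05):
    # Recover a latent graph A from an observed mixture H(A) = A + A @ A
    # (a toy polynomial); each "layer" is one proximal gradient (ISTA) step.
    a = np.zeros_like(observed)
    for _ in range(num_layers):
        r = a + a @ a - observed                  # residual of the mixture fit
        grad = r + r @ a.T + a.T @ r              # gradient of 0.5 * ||H(A) - O||^2
        a = soft_threshold(a - step * grad, lam)  # prox step promotes a sparse graph
    return a
```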
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Permutation-Invariant Variational Autoencoder for Graph-Level
Representation Learning [0.0]
We propose a permutation-invariant variational autoencoder for graph structured data.
Our model indirectly learns to match the node ordering of input and output graph, without imposing a particular node ordering.
We demonstrate the effectiveness of our proposed model on various graph reconstruction and generation tasks.
arXiv Detail & Related papers (2021-04-20T09:44:41Z) - Uniting Heterogeneity, Inductiveness, and Efficiency for Graph
Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
Most GNNs, however, are designed only for homogeneous graphs, which limits their adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta-path-free message passing scheme that aggregates heterogeneous node features together with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z) - Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches state-of-the-art performance in four graph learning tasks.
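A simplified sketch of the co-embedding idea is given below: node updates aggregate incident-edge features through the node-edge incidence matrix, and edge updates aggregate their endpoint-node features through its transpose. CensNet's actual layers are spectral convolutions over the graph and its line graph; this is only an assumed analog.

```python
import numpy as np

def co_embedding_layer(adj, inc, x_node, x_edge, w_nn, w_en, w_ne):
    # adj: (n, n) adjacency; inc: (n, e) node-edge incidence matrix.
    h_node = np.tanh(adj @ x_node @ w_nn + inc @ x_edge @ w_en)  # nodes see incident edges
    h_edge = np.tanh(inc.T @ x_node @ w_ne)                      # edges see their endpoints
    return h_node, h_edge
```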
arXiv Detail & Related papers (2020-10-25T22:39:31Z) - Graph Pooling with Node Proximity for Hierarchical Representation
Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
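A bare-bones version of proximity-driven pooling is sketched below: nodes are scored by 1- and 2-hop connectivity and the top fraction is retained. The scoring rule is a simplification assumed for illustration; the paper's strategy is more elaborate.

```python
import numpy as np

def proximity_pool(adj, features, ratio=0.5):
    # Score nodes by 1- and 2-hop connectivity, keep the top `ratio` fraction,
    # and restrict the graph to the retained nodes.
    score = (adj + adj @ adj).sum(axis=1)
    k = max(1, int(ratio * adj.shape[0]))
    keep = np.argsort(-score)[:k]
    return adj[np.ix_(keep, keep)], features[keep]
```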
arXiv Detail & Related papers (2020-06-19T13:09:44Z)