PolyFormer: Scalable Node-wise Filters via Polynomial Graph Transformer
- URL: http://arxiv.org/abs/2407.14459v1
- Date: Fri, 19 Jul 2024 17:01:41 GMT
- Title: PolyFormer: Scalable Node-wise Filters via Polynomial Graph Transformer
- Authors: Jiahong Ma, Mingguo He, Zhewei Wei
- Abstract summary: In this work, we propose a scalable node-wise filter, PolyAttn. Leveraging the attention mechanism, PolyAttn can directly learn node-wise filters in an efficient manner.
Our proposed methods excel at learning arbitrary node-wise filters, showing superior performance on both homophilic and heterophilic graphs.
- Score: 22.287960384629585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spectral Graph Neural Networks have demonstrated superior performance in graph representation learning. However, many current methods employ shared polynomial coefficients for all nodes, i.e., they learn node-unified filters, which limits the filters' flexibility for node-level tasks. The recent DSF attempts to overcome this limitation by learning node-wise coefficients based on positional encoding. However, initializing and updating the positional encoding is burdensome, hindering scalability on large-scale graphs. In this work, we propose a scalable node-wise filter, PolyAttn. Leveraging the attention mechanism, PolyAttn can directly learn node-wise filters in an efficient manner, offering powerful representation capabilities. Building on PolyAttn, we introduce the complete model, PolyFormer. Viewed through the lens of Graph Transformer models, PolyFormer, which calculates attention scores within nodes, shows great scalability. Moreover, the model captures spectral information, enhancing expressiveness while maintaining efficiency. With these advantages, PolyFormer offers a desirable balance between scalability and expressiveness for node-level tasks. Extensive experiments demonstrate that our proposed methods excel at learning arbitrary node-wise filters, showing superior performance on both homophilic and heterophilic graphs, and handling graphs containing up to 100 million nodes. The code is available at https://github.com/air029/PolyFormer.
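As a reading aid, here is a minimal PyTorch sketch of the mechanism the abstract describes: attention computed within each node over its polynomial-basis tokens. The monomial basis, shapes, and all names below are our assumptions, not the authors' implementation (see the linked repository for the real one):

```python
import torch
import torch.nn as nn

class PolyAttnSketch(nn.Module):
    """Hypothetical sketch: attention *within* each node over its K+1
    polynomial tokens, so the cost is O(N * (K+1)^2), not O(N^2)."""

    def __init__(self, dim, order, heads=4):
        super().__init__()
        self.order = order                          # polynomial degree K
        # dim must be divisible by heads
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, adj_norm):
        # Token k is the k-th monomial basis term \hat{A}^k x
        # (assumed basis; other polynomial bases work the same way).
        tokens = [x]
        for _ in range(self.order):
            tokens.append(adj_norm @ tokens[-1])
        t = torch.stack(tokens, dim=1)              # [N, K+1, dim]
        # Each node acts as a "batch" element: attention mixes the K+1
        # spectral tokens of one node, never tokens across nodes.
        h, _ = self.attn(t, t, t)
        return self.out(h.mean(dim=1))              # node-wise filtered output
```

Because the attention scores differ per node, the learned combination of basis terms, i.e., the polynomial filter, is node-wise, which is exactly the flexibility that node-unified filters lack.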
Related papers
- Elevating Spectral GNNs through Enhanced Band-pass Filter Approximation [26.79625547648669]
We first show that a poly-GNN with a better approximation of band-pass graph filters performs better on graph learning tasks.
This insight sheds light on a critical issue in existing poly-GNNs: they achieve only trivial performance in approximating band-pass graph filters.
To tackle this issue, we propose a novel poly-GNN named TrigoNet, which achieves leading performance in approximating band-pass graph filters.
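To see why band-pass responses are hard for polynomial filters, consider this small NumPy illustration (ours, not from the paper): a least-squares Chebyshev fit to an ideal band-pass response over the normalized-Laplacian spectrum [0, 2]:

```python
import numpy as np

# Ideal band-pass response: pass eigenvalues in [0.8, 1.2], reject
# the rest (an illustrative target, not the paper's benchmark).
lam = np.linspace(0.0, 2.0, 400)
target = ((lam >= 0.8) & (lam <= 1.2)).astype(float)

for degree in (2, 5, 10, 20):
    # Least-squares Chebyshev fit g(lam) ~ target(lam).
    fit = np.polynomial.Chebyshev.fit(lam, target, degree)
    err = np.max(np.abs(fit(lam) - target))
    print(f"degree {degree:2d}: max abs error = {err:.3f}")
# The maximum error stays large near the sharp band edges
# (Gibbs-type oscillation) even as the degree grows.
```

The persistent error at the band edges, well past the polynomial degrees used in practice, is the kind of band-pass weakness TrigoNet is designed to address.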
arXiv Detail & Related papers (2024-04-15T11:35:32Z)
- Careful Selection and Thoughtful Discarding: Graph Explicit Pooling Utilizing Discarded Nodes [53.08068729187698]
We introduce a novel Graph Explicit Pooling (GrePool) method, which selects nodes by explicitly leveraging the relationships between the nodes and the final representation vectors that are crucial for classification.
We conduct comprehensive experiments across 12 widely used datasets to validate our proposed method's effectiveness.
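A hypothetical sketch of the selection idea as we read this summary: score nodes against a global readout vector, keep the top-k, and summarize rather than throw away the rest. The scoring rule and names are our assumptions, not GrePool's exact design:

```python
import torch

def explicit_pool(h, ratio=0.5):
    """h: [N, d] node embeddings -> pooled graph vector (sketch)."""
    readout = h.mean(dim=0)                        # global representation [d]
    scores = h @ readout                           # node-vs-readout relevance [N]
    k = max(1, int(ratio * h.size(0)))
    keep = torch.topk(scores, k).indices
    mask = torch.ones(h.size(0), dtype=torch.bool)
    mask[keep] = False                             # mark discarded nodes
    kept, discarded = h[keep], h[mask]
    # Unlike hard top-k pooling, keep a summary of the discarded nodes.
    disc = discarded.mean(0) if discarded.numel() else torch.zeros_like(readout)
    return torch.cat([kept.mean(0), disc])         # [2d] pooled vector
```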
arXiv Detail & Related papers (2023-11-21T14:44:51Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
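The kernelization that makes all-pair propagation affordable can be sketched with positive random features (a generic Performer-style linearization, our choice for illustration; NodeFormer's actual operator additionally uses Gumbel-Softmax for differentiable structure sampling, omitted here):

```python
import torch

def linear_allpair_propagate(q, k, v, num_features=64):
    """O(N) all-pair message passing via a random-feature kernel (sketch).
    q, k, v: [N, d] per-node queries, keys, values."""
    d = q.size(1)
    q, k = q / d ** 0.25, k / d ** 0.25            # split softmax temperature
    w = torch.randn(num_features, d)               # projections, w_i ~ N(0, I)

    def phi(x):
        # Positive features with E[phi(a) . phi(b)] = exp(a . b).
        return torch.exp(x @ w.t() - x.pow(2).sum(-1, keepdim=True) / 2) \
               / num_features ** 0.5

    qf, kf = phi(q), phi(k)                        # [N, m]
    kv = kf.t() @ v                                # [m, d]: sum_j phi(k_j) v_j
    z = qf @ kf.sum(dim=0, keepdim=True).t()       # [N, 1] softmax normalizer
    return (qf @ kv) / (z + 1e-6)                  # [N, d], never forms N x N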
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the models that perform best on such data in standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
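A hedged sketch of the general recipe (ours, with scikit-learn's gradient boosting standing in for the "arbitrary IID model"; the paper's framework adds multiple stacking layers, cross-validation, and robustness safeguards):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def stack_with_propagation(X, y, train_idx, adj_norm, hops=2):
    """Fuse an IID tabular model with graph-aware propagation (sketch).
    X: [N, d] node features, adj_norm: [N, N] normalized adjacency."""
    base = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    probs = base.predict_proba(X)                  # IID predictions [N, C]
    for _ in range(hops):                          # graph-aware smoothing
        probs = adj_norm @ probs
    # Stacked features: raw features + propagated base predictions.
    stacked = np.hstack([X, probs])
    meta = GradientBoostingClassifier().fit(stacked[train_idx], y[train_idx])
    return meta.predict(stacked)
```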
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq, which transforms the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
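The transformation itself is compact; here is our sketch of the idea (hop-wise aggregates become sequence tokens, so downstream training can minibatch over nodes without touching the graph again):

```python
import torch

def neighborhood_to_sequence(x, adj_norm, hops=4):
    """Turn each node's hierarchical neighborhood into a sequence (sketch).
    x: [N, d] features, adj_norm: [N, N] normalized adjacency."""
    tokens = [x]
    for _ in range(hops):
        tokens.append(adj_norm @ tokens[-1])       # k-hop aggregate as token k
    return torch.stack(tokens, dim=1)              # [N, hops+1, d]
```

Any sequence model can then consume the per-node [hops+1, d] token sequence, which is what decouples training cost from graph size.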
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- A Piece-wise Polynomial Filtering Approach for Graph Neural Networks [0.45298395481707365]
Graph Neural Networks (GNNs) exploit signals from node features and the input graph topology to improve node classification task performance.
These models tend to perform poorly on heterophilic graphs, where connected nodes have different labels.
We show that our model achieves performance gains of up to 5% over the state-of-the-art models and outperforms existing filter-based approaches in general.
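To illustrate piece-wise spectral filtering, here is a toy construction of ours (not the paper's; the explicit eigendecomposition is O(N^3) and only for illustration): split the spectrum into intervals and apply an independent polynomial response on each:

```python
import numpy as np

def piecewise_poly_filter(L, x, coeffs_per_piece, edges=(0.0, 1.0, 2.0)):
    """Apply a different polynomial filter on each spectral interval (sketch).
    L: [N, N] normalized Laplacian, x: [N] graph signal,
    coeffs_per_piece: one coefficient array per interval."""
    lam, U = np.linalg.eigh(L)                     # spectrum of L
    response = np.zeros_like(lam)
    for (lo, hi), c in zip(zip(edges[:-1], edges[1:]), coeffs_per_piece):
        piece = (lam >= lo) & (lam <= hi)
        response[piece] = np.polyval(c, lam[piece])  # piece-wise response
    return U @ (response * (U.T @ x))              # filter in spectral domain
```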
arXiv Detail & Related papers (2021-12-07T05:16:53Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop AdaGNN, a graph neural network framework with a smooth, adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
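Our reading of "adaptive frequency response" in one layer, as a hedged sketch (each feature channel f gets its own learnable coefficient phi_f, giving the response 1 - phi_f * lambda on that channel; AdaGNN's actual layer may differ in details):

```python
import torch
import torch.nn as nn

class AdaptiveFreqLayer(nn.Module):
    """Per-channel adaptive spectral filtering, one layer (sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.phi = nn.Parameter(torch.full((dim,), 0.5))  # one coef per channel

    def forward(self, x, laplacian):
        # x: [N, dim]; laplacian: [N, N] normalized graph Laplacian.
        # Channel-wise response 1 - phi_f * lambda, learned from data.
        return x - (laplacian @ x) * self.phi
```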
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks [46.14626839260314]
We show that the aggregation operations of current Graph Neural Networks (GNNs), which act as low-pass filters, are potentially a problematic factor underlying all GNN methods for learning on certain datasets.
We augment the aggregation operations with their dual, i.e., diversification operators that make nodes more distinct and preserve their identity.
Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations.
In the experiments, we observe the desired characteristics of the models and a significant performance boost over the baselines on 9 node classification tasks.
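The two-channel idea can be sketched directly (ours, under the common reading of aggregation as multiplication by the normalized adjacency Â and diversification as its dual I - Â; the paper's exact operators and fusion may differ):

```python
import torch
import torch.nn as nn

class TwoChannelFilter(nn.Module):
    """Aggregation channel plus its dual diversification channel (sketch)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_agg = nn.Linear(in_dim, out_dim)    # low-pass channel
        self.w_div = nn.Linear(in_dim, out_dim)    # high-pass channel

    def forward(self, x, adj_norm):
        eye = torch.eye(x.size(0), device=x.device)
        smooth = adj_norm @ x                      # aggregation: pulls nodes together
        distinct = (eye - adj_norm) @ x            # diversification: preserves identity
        return self.w_agg(smooth) + self.w_div(distinct)
```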
arXiv Detail & Related papers (2020-08-20T08:45:16Z)
- Permutohedral-GCN: Graph Convolutional Networks with Global Attention [7.873191259466302]
Graph convolutional networks (GCNs) update a node's feature vector by aggregating features from its neighbors in the graph.
We introduce a global attention mechanism where a node can selectively attend to, and aggregate features from, any other node in the graph.
We show that the resulting attention-based global aggregation scheme is analogous to high-dimensional Gaussian filtering.
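The analogy is easy to state in code; below is a naive O(N^2) version of ours (the paper's contribution is making this tractable with permutohedral lattice filtering, which brings the cost down to roughly linear in N):

```python
import torch

def gaussian_global_attention(h, pos, sigma=1.0):
    """Aggregate features from *all* nodes with Gaussian weights (sketch).
    h: [N, d] features; pos: [N, p] learned embeddings used as
    filter coordinates."""
    dist2 = torch.cdist(pos, pos).pow(2)           # pairwise squared distances
    attn = torch.softmax(-dist2 / (2 * sigma ** 2), dim=-1)
    return attn @ h                                # Gaussian-weighted aggregation
```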
arXiv Detail & Related papers (2020-03-02T02:44:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.