Diversified Node Sampling based Hierarchical Transformer Pooling for
Graph Representation Learning
- URL: http://arxiv.org/abs/2310.20250v1
- Date: Tue, 31 Oct 2023 08:13:21 GMT
- Title: Diversified Node Sampling based Hierarchical Transformer Pooling for
Graph Representation Learning
- Authors: Gaichao Li, Jinsong Chen, John E. Hopcroft, Kun He
- Abstract summary: Node dropping pooling aims to exploit learnable scoring functions to drop nodes with comparatively lower significance scores.
Existing methods struggle to capture long-range dependencies since they mainly take GNNs as the backbones.
We propose a Graph Transformer Pooling method termed GTPool to efficiently capture long-range pairwise interactions.
- Score: 15.248591535696146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph pooling methods have been widely used for downsampling graphs, achieving
impressive results on multiple graph-level tasks like graph classification and
graph generation. An important line called node dropping pooling aims at
exploiting learnable scoring functions to drop nodes with comparatively lower
significance scores. However, existing node dropping methods suffer from two
limitations: (1) for each pooled node, these models struggle to capture
long-range dependencies since they mainly take GNNs as the backbones; (2)
pooling only the highest-scoring nodes tends to preserve similar nodes, thus
discarding the abundant information carried by low-scoring nodes. To address these
issues, we propose a Graph Transformer Pooling method termed GTPool, which
introduces Transformer to node dropping pooling to efficiently capture
long-range pairwise interactions and meanwhile sample nodes diversely.
Specifically, we design a scoring module based on the self-attention mechanism
that takes both global context and local context into consideration, measuring
the importance of nodes more comprehensively. GTPool further utilizes a
diversified sampling method named Roulette Wheel Sampling (RWS) that is able to
flexibly preserve nodes across different scoring intervals instead of only
higher scoring nodes. In this way, GTPool could effectively obtain long-range
information and select more representative nodes. Extensive experiments on 11
benchmark datasets demonstrate the superiority of GTPool over existing popular
graph pooling methods.
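The abstract describes two components: a self-attention-based scoring module and Roulette Wheel Sampling (RWS). The paper's implementation is not reproduced here, but as a minimal sketch (toy scores standing in for the attention-derived ones; names are illustrative), score-proportional sampling across scoring intervals can be written as:

```python
import numpy as np

def roulette_wheel_sampling(scores, k, rng=None):
    """Select k node indices with probability proportional to their
    importance scores, so low-scoring nodes can still be preserved."""
    rng = np.random.default_rng(rng)
    probs = scores / scores.sum()
    return rng.choice(len(scores), size=k, replace=False, p=probs)

# Toy example: 6 nodes with (hypothetical) attention-derived scores.
scores = np.array([0.9, 0.8, 0.75, 0.3, 0.2, 0.05])
kept = roulette_wheel_sampling(scores, k=3, rng=0)
# Top-k pooling would always keep nodes 0, 1, 2; RWS can instead
# draw some of the pooled nodes from lower scoring intervals.
```

This is the key contrast with top-k node dropping: selection is stochastic and score-proportional rather than a hard cutoff, so dissimilar low-scoring nodes retain a chance of surviving the pooling step.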
Related papers
- ENADPool: The Edge-Node Attention-based Differentiable Pooling for Graph Neural Networks [19.889547537748395]
Graph Neural Networks (GNNs) are powerful tools for graph classification.
One important operation for GNNs is the downsampling or pooling that can learn effective embeddings from the node representations.
We propose a new hierarchical pooling operation, namely the Edge-Node Attention-based Differentiable Pooling (ENADPool)
arXiv Detail & Related papers (2024-05-16T16:08:49Z)
- Cluster-based Graph Collaborative Filtering [55.929052969825825]
Graph Convolution Networks (GCNs) have succeeded in learning user and item representations for recommendation systems.
Most existing GCN-based methods overlook the multiple interests of users while performing high-order graph convolution.
We propose a novel GCN-based recommendation model, termed Cluster-based Graph Collaborative Filtering (ClusterGCF)
arXiv Detail & Related papers (2024-04-16T07:05:16Z)
- Careful Selection and Thoughtful Discarding: Graph Explicit Pooling Utilizing Discarded Nodes [53.08068729187698]
We introduce a novel Graph Explicit Pooling (GrePool) method, which selects nodes by explicitly leveraging the relationships between the nodes and final representation vectors crucial for classification.
We conduct comprehensive experiments across 12 widely used datasets to validate our proposed method's effectiveness.
arXiv Detail & Related papers (2023-11-21T14:44:51Z)
- On Exploring Node-feature and Graph-structure Diversities for Node Drop Graph Pooling [86.65151066870739]
Current node drop pooling methods ignore the graph diversity in terms of the node features and the graph structures, thus resulting in suboptimal graph-level representations.
We propose a novel plug-and-play score scheme, termed MID, which consists of a Multidimensional score space with two operations, i.e., flipscore and dropscore.
Specifically, the multidimensional score space depicts the significance of nodes through multiple criteria; the flipscore encourages the maintenance of dissimilar node representations.
arXiv Detail & Related papers (2023-06-22T08:02:01Z)
- Multi-Granularity Graph Pooling for Video-based Person Re-Identification [14.943835935921296]
Graph neural networks (GNNs) are introduced to aggregate temporal and spatial features of video samples.
Existing graph-based models, like STGCN, perform mean/max pooling on node features to obtain the graph representation.
We propose the graph pooling network (GPNet) to learn the multi-granularity graph representation for the video retrieval.
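The mean/max readout mentioned above is simple to state concretely; a small sketch with a toy node-feature matrix:

```python
import numpy as np

# Node feature matrix: 4 nodes, 3 feature dimensions.
X = np.array([[1.0, 2.0, 0.0],
              [3.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [2.0, 1.0, 1.0]])

# Mean readout: average each feature dimension over all nodes.
g_mean = X.mean(axis=0)   # -> [1.5, 1.0, 1.0]
# Max readout: element-wise maximum over all nodes.
g_max = X.max(axis=0)     # -> [3.0, 2.0, 2.0]
```

Both readouts produce a fixed-size vector regardless of node count, which is what makes them usable as graph-level representations; the multi-granularity pooling proposed above is motivated by how much structure these flat readouts discard.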
arXiv Detail & Related papers (2022-09-23T13:26:05Z)
- Accurate Learning of Graph Representations with Graph Multiset Pooling [45.72542969364438]
We propose a Graph Multiset Transformer (GMT) that captures the interaction between nodes according to their structural dependencies.
Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks.
arXiv Detail & Related papers (2021-02-23T07:45:58Z)
- Second-Order Pooling for Graph Neural Networks [62.13156203025818]
We propose to use second-order pooling as graph pooling, which naturally solves the above challenges.
We show that direct use of second-order pooling with graph neural networks leads to practical problems.
We propose two novel global graph pooling methods based on second-order pooling; namely, bilinear mapping and attentional second-order pooling.
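As a hedged sketch of the idea (the projection `W` here is a hypothetical learned parameter, not taken from the paper), second-order pooling builds a fixed-size graph representation from the feature Gram matrix X^T X, and the bilinear-mapping variant projects features to a lower dimension first to shrink the output:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))     # 5 nodes, 8-dim features

# Plain second-order pooling: sum of outer products of node features,
# i.e. X^T X, flattened into a fixed-size graph representation.
g_so = (X.T @ X).reshape(-1)        # shape (64,), independent of node count

# Bilinear-mapping variant: project features down first, then pool,
# so the representation grows as d'^2 instead of d^2.
W = rng.standard_normal((8, 3))     # hypothetical learned projection
Z = X @ W
g_bilinear = (Z.T @ Z).reshape(-1)  # shape (9,)
```

The quadratic blow-up of the plain version (d^2 entries) is one of the "practical problems" alluded to above; both proposed methods are ways of taming it.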
arXiv Detail & Related papers (2020-07-20T20:52:36Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN)
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
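The selection criterion above ("sufficiently different from labelled ones") can be illustrated with a simple farthest-from-labelled heuristic over embeddings; this is an illustrative stand-in, not the paper's exact GCN-based sampler:

```python
import numpy as np

def select_diverse(embeddings, labelled_idx, budget):
    """Pick unlabelled points whose embeddings are farthest from any
    labelled point (an illustrative 'sufficiently different' criterion)."""
    labelled = set(labelled_idx)
    unlabelled = [i for i in range(len(embeddings)) if i not in labelled]
    # Distance of each unlabelled point to its nearest labelled point.
    dists = [min(np.linalg.norm(embeddings[i] - embeddings[j])
                 for j in labelled_idx) for i in unlabelled]
    order = np.argsort(dists)[::-1]            # most distant first
    return [unlabelled[i] for i in order[:budget]]

# Toy 1-D embeddings; node 0 is the labelled seed example.
emb = np.array([[0.0], [0.1], [5.0], [5.1], [10.0]])
picked = select_diverse(emb, labelled_idx=[0], budget=2)
# Nodes 4 (dist 10.0) and 3 (dist 5.1) are farthest from the seed.
```

The real framework learns the embeddings with a GCN over a graph connecting labelled and unlabelled examples, but the acquisition step reduces to a ranking of this flavour.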
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN)
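DGN itself is not detailed in this summary, but the over-smoothing effect it targets is easy to demonstrate: repeated neighbourhood averaging collapses node features toward a common value (graph and features below are toy choices):

```python
import numpy as np

# Adjacency of a small connected graph (path 0-1-2-3) with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-normalised mean aggregation

X = np.array([[1.0], [0.0], [0.0], [-1.0]])
spread = lambda H: float(H.max() - H.min())  # crude over-smoothing measure

# Stacking many aggregation layers == applying P repeatedly.
X_deep = np.linalg.matrix_power(P, 50) @ X
# The spread of node features shrinks toward zero: nodes become
# indistinguishable, which is the over-smoothing phenomenon.
```

A normalization scheme like DGN counteracts exactly this collapse, keeping node groups separated as depth grows.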
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.