Geometry-Aware Edge Pooling for Graph Neural Networks
- URL: http://arxiv.org/abs/2506.11700v1
- Date: Fri, 13 Jun 2025 12:01:46 GMT
- Title: Geometry-Aware Edge Pooling for Graph Neural Networks
- Authors: Katharina Limbeck, Lydia Mezrag, Guy Wolf, Bastian Rieck
- Abstract summary: Graph Neural Networks (GNNs) have shown significant success for graph-based tasks. Motivated by the prevalence of large datasets in real-world applications, pooling layers are crucial components of GNNs. We propose novel graph pooling layers for structure-aware pooling via edge collapses.
- Score: 20.080879481223924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have shown significant success for graph-based tasks. Motivated by the prevalence of large datasets in real-world applications, pooling layers are crucial components of GNNs. By reducing the size of input graphs, pooling enables faster training and potentially better generalisation. However, existing pooling operations often optimise for the learning task at the expense of fundamental graph structures and interpretability. This leads to unreliable performance across varying dataset types, downstream tasks and pooling ratios. Addressing these concerns, we propose novel graph pooling layers for structure-aware pooling via edge collapses. Our methods leverage diffusion geometry and iteratively reduce a graph's size while preserving both its metric structure and structural diversity. We guide pooling using magnitude, an isometry-invariant diversity measure, which permits us to control the fidelity of the pooling process. Further, we use the spread of a metric space as a faster and more stable alternative, ensuring computational efficiency. Empirical results demonstrate that our methods (i) achieve superior performance compared to alternative pooling layers across a range of diverse graph classification tasks, (ii) preserve key spectral properties of the input graphs, and (iii) retain high accuracy across varying pooling ratios.
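To make the two measures concrete: below is a minimal sketch of magnitude and spread for a finite metric space, following their standard definitions (a weighting vector solving Zw = 1 for magnitude; a single row-sum formula for spread). Function names and the toy example are illustrative, not the paper's implementation.

```python
import numpy as np

def magnitude(D, t=1.0):
    """Magnitude of a finite metric space with distance matrix D at scale t.

    Solves Z w = 1 for the similarity matrix Z_ij = exp(-t * D_ij) and
    returns the sum of the weighting w.
    """
    Z = np.exp(-t * D)
    w = np.linalg.solve(Z, np.ones(len(D)))
    return w.sum()

def spread(D, t=1.0):
    """Spread of a finite metric space: sum_i 1 / sum_j exp(-t * D_ij).

    Needs no linear solve, which is why it serves as the faster and more
    stable alternative mentioned in the abstract.
    """
    Z = np.exp(-t * D)
    return (1.0 / Z.sum(axis=1)).sum()

# Toy example: three points on a line at positions 0, 1, 2.
D = np.abs(np.subtract.outer([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))
print(magnitude(D), spread(D))  # both lie between 1 and the point count 3
```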
Related papers
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
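A hedged sketch of the core primitive behind random-projection pre-computation: compress propagated node features with a Gaussian random matrix so downstream training sees small, regular-shaped tensors. This is not RpHGNN's full pipeline; all names below are illustrative.

```python
import numpy as np

def random_project(X, k, rng=None):
    """Johnson-Lindenstrauss-style projection of features X (n x d) to n x k."""
    rng = np.random.default_rng(rng)
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
    return X @ R

# One-time propagation on a row-normalised adjacency matrix, then projection.
n, d, k = 100, 256, 32
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
A = rng.random((n, n)) < 0.05                        # toy random graph
A_hat = A / np.maximum(A.sum(1, keepdims=True), 1)   # row-normalised
X_pre = random_project(A_hat @ X, k)                 # fixed-size precomputed tensor
```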
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- A Graph Encoder-Decoder Network for Unsupervised Anomaly Detection [7.070726553564701]
We propose an unsupervised graph encoder-decoder model to detect abnormal nodes from graphs.
In the encoding stage, we design a novel pooling mechanism, named LCPool, to find a cluster assignment matrix.
In the decoding stage, we propose an unpooling operation, called LCUnpool, to reconstruct both the structure and nodal features of the original graph.
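The pooling/unpooling pair operates through a cluster assignment matrix; the sketch below shows the generic assignment-matrix mechanics (X' = S^T X, A' = S^T A S, reconstruction X_rec = S X'). How LCPool actually learns S is not shown here; S is taken as given.

```python
import numpy as np

def pool(X, A, S):
    """Coarsen features and adjacency: X' = S^T X, A' = S^T A S."""
    return S.T @ X, S.T @ A @ S

def unpool(X_pooled, S):
    """Lift cluster features back to the original nodes: X_rec = S X'."""
    return S @ X_pooled

# Toy example: 4 nodes hard-assigned to 2 clusters.
S = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
X = np.arange(8.0).reshape(4, 2)
A = np.ones((4, 4)) - np.eye(4)
Xp, Ap = pool(X, A, S)
X_rec = unpool(Xp, S)   # each node receives its cluster's pooled feature
```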
arXiv Detail & Related papers (2023-08-15T13:49:12Z)
- Fast and Effective GNN Training through Sequences of Random Path Graphs [20.213843086649014]
We present GERN, a novel scalable framework for training GNNs in node classification tasks.
Our method progressively refines the GNN weights on a sequence of random spanning trees suitably transformed into path graphs.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
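One plausible reading of the recipe above, sketched below: sample a random spanning tree (shuffled-edge Kruskal here, an illustrative choice) and linearise it into a path graph via a depth-first traversal. The paper's exact sampler and transformation may differ.

```python
import random
from collections import defaultdict

def random_spanning_tree(n, edges, rng):
    """Randomised Kruskal: union-find over a shuffled edge list."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree, edges = [], edges[:]
    rng.shuffle(edges)
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def path_graph_edges(n, tree_edges, root=0):
    """DFS preorder of the tree; consecutive nodes form the path graph."""
    adj = defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    order, seen, stack = [], {root}, [root]
    while stack:
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return list(zip(order, order[1:]))  # edges of the path graph

rng = random.Random(0)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (1, 4)]
print(path_graph_edges(5, random_spanning_tree(5, edges, rng)))
```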
arXiv Detail & Related papers (2023-06-07T23:12:42Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Graph Pooling with Maximum-Weight $k$-Independent Sets [12.251091325930837]
We introduce a graph coarsening mechanism based on the graph-theoretic concept of maximum-weight $k$-independent sets.
We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs.
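For intuition, the sketch below coarsens a graph with a greedy weighted independent set (the k = 1 case): pick nodes heaviest-first, blocking their neighbours. It only approximates the maximum-weight k-independent sets the paper computes with guarantees.

```python
import numpy as np

def greedy_weighted_independent_set(A, w):
    """A: n x n adjacency (0/1), w: node weights. Returns selected indices."""
    n = len(w)
    selected, blocked = [], np.zeros(n, dtype=bool)
    for u in np.argsort(-np.asarray(w)):        # heaviest node first
        if not blocked[u]:
            selected.append(int(u))
            blocked[u] = True
            blocked[A[u].astype(bool)] = True   # block all neighbours
    return selected

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
w = [0.9, 1.0, 0.2, 0.5]
print(greedy_weighted_independent_set(A, w))  # [1, 3] -> surviving supernodes
```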
arXiv Detail & Related papers (2022-08-06T14:12:47Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
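CGP's precise criterion is not spelled out in this summary; as a generic stand-in, the sketch below shows gradual magnitude pruning with the cubic sparsity schedule of Zhu & Gupta (2017), i.e. the "prune a little more every few steps, never retrain from scratch" pattern.

```python
import numpy as np

def sparsity_at(step, total_steps, final_sparsity):
    """Cubic ramp from 0 to final_sparsity over the course of training."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def magnitude_mask(W, sparsity):
    """Zero out the smallest-magnitude entries of W at the given sparsity."""
    k = int(sparsity * W.size)
    if k == 0:
        return np.ones_like(W, dtype=bool)
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.abs(W) > thresh

W = np.random.default_rng(0).normal(size=(64, 64))
for step in range(0, 1001, 250):
    mask = magnitude_mask(W, sparsity_at(step, 1000, 0.9))
    print(step, mask.mean())  # fraction of weights kept shrinks over time
```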
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- Structure-Aware Hierarchical Graph Pooling using Information Bottleneck [2.7088996845250897]
Graph pooling is an essential ingredient of Graph Neural Networks (GNNs) in graph classification and regression tasks.
We propose a novel pooling method named HIBPool that leverages the Information Bottleneck (IB) principle.
We also introduce a novel structure-aware Discriminative Pooling Readout (DiP-Readout) function to capture the informative local subgraph structures in the graph.
arXiv Detail & Related papers (2021-04-27T07:27:43Z)
- Accurate Learning of Graph Representations with Graph Multiset Pooling [45.72542969364438]
We propose a Graph Multiset Transformer (GMT) that captures the interaction between nodes according to their structural dependencies.
Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks.
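A hedged sketch in the spirit of GMT's attention-based pooling: a few learnable seed queries attend over all node embeddings and compress the graph into a fixed number of vectors. This follows the pooling-by-multihead-attention idea but is not the full GMT architecture.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim, num_seeds=4, num_heads=4):
        super().__init__()
        # Learnable seed queries that summarise the node multiset.
        self.seeds = nn.Parameter(torch.randn(num_seeds, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                    # x: (batch, num_nodes, dim)
        q = self.seeds.unsqueeze(0).expand(x.size(0), -1, -1)
        pooled, _ = self.attn(q, x, x)       # (batch, num_seeds, dim)
        return pooled.mean(dim=1)            # one graph-level vector

x = torch.randn(2, 50, 64)                   # 2 graphs, 50 nodes, 64-dim features
print(AttentionPool(64)(x).shape)            # torch.Size([2, 64])
```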
arXiv Detail & Related papers (2021-02-23T07:45:58Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
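FLAG's inner loop is simple enough to sketch: perturb node features adversarially for a few ascent steps while parameter gradients accumulate, then take one optimizer step. The outline below mirrors the published algorithm; the model and loss are stand-ins.

```python
import torch
import torch.nn as nn

def flag_step(model, X, y, criterion, optimizer, step_size=1e-3, m=3):
    """One FLAG training step: m ascent steps on a feature perturbation,
    with parameter gradients accumulating across all m sub-steps."""
    model.train()
    optimizer.zero_grad()
    # Start from a random perturbation of the node features.
    perturb = torch.empty_like(X).uniform_(-step_size, step_size)
    perturb.requires_grad_()
    loss = criterion(model(X + perturb), y) / m
    for _ in range(m - 1):
        loss.backward()
        # Ascend on the perturbation; parameter grads keep accumulating.
        perturb = (perturb.detach()
                   + step_size * perturb.grad.detach().sign()).requires_grad_()
        loss = criterion(model(X + perturb), y) / m
    loss.backward()
    optimizer.step()

# Toy usage: an MLP stands in for a GNN (a real GNN would also take the
# graph structure as input).
model = nn.Sequential(nn.Linear(16, 2))
X, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
flag_step(model, X, y, nn.CrossEntropyLoss(),
          torch.optim.SGD(model.parameters(), lr=0.1))
```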
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- SimPool: Towards Topology Based Graph Pooling with Structural Similarity Features [0.0]
This paper proposes two main contributions. The first is a differentiable module that calculates structural similarity features based on the adjacency matrix.
The second is the integration of these features with a revisited pooling layer, DiffPool (arXiv:1806.08804), to propose a pooling layer referred to as SimPool.
Experimental results demonstrate that, as part of an end-to-end Graph Neural Network architecture, SimPool computes node cluster assignments that better preserve locality.
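What a "structural similarity feature" could look like, as a hedged sketch: cosine similarity between rows of the adjacency matrix, so nodes with shared neighbourhoods score high. SimPool's differentiable module may compute a different similarity; this only illustrates the idea.

```python
import numpy as np

def structural_similarity_features(A, eps=1e-8):
    """Pairwise cosine similarity of adjacency rows (n x n feature matrix)."""
    norms = np.linalg.norm(A, axis=1, keepdims=True) + eps
    A_unit = A / norms
    return A_unit @ A_unit.T   # S_ij = cosine of rows i and j

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(np.round(structural_similarity_features(A), 2))
```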
arXiv Detail & Related papers (2020-06-03T12:51:57Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
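The building block is easy to state in code: the output of any graph convolution is summed with an affine (fully connected) transform of the input features. In the sketch below, the mean-aggregation convolution is a placeholder for "any graph convolution operator".

```python
import torch
import torch.nn as nn

class AffineSkipConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv_weight = nn.Linear(in_dim, out_dim, bias=False)
        self.affine_skip = nn.Linear(in_dim, out_dim)  # the affine branch

    def forward(self, X, A_hat):
        # A_hat: row-normalised adjacency (n x n); X: node features (n x d).
        conv = self.conv_weight(A_hat @ X)   # placeholder graph convolution
        return conv + self.affine_skip(X)    # affine skip connection

n, d = 10, 16
A = (torch.rand(n, n) < 0.3).float()
A_hat = A / A.sum(1, keepdim=True).clamp(min=1)
X = torch.randn(n, d)
print(AffineSkipConv(d, 32)(X, A_hat).shape)  # torch.Size([10, 32])
```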
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.