Boosting Graph Pooling with Persistent Homology
- URL: http://arxiv.org/abs/2402.16346v3
- Date: Fri, 18 Oct 2024 08:09:14 GMT
- Title: Boosting Graph Pooling with Persistent Homology
- Authors: Chaolong Ying, Xinjian Zhao, Tianshu Yu
- Abstract summary: Naively plugging PH features into GNN layers always results in only marginal improvement with low interpretability.
We investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that the filtration operation in PH naturally aligns with graph pooling in a cut-off manner.
Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gains across several popular datasets.
- Score: 8.477383770884508
- Abstract: Recently, there has been an emerging trend to integrate persistent homology (PH) into graph neural networks (GNNs) to enrich their expressive power. However, naively plugging PH features into GNN layers always results in only marginal improvement with low interpretability. In this paper, we investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that the filtration operation in PH naturally aligns with graph pooling in a cut-off manner. In this fashion, message passing in the coarsened graph acts along persistent pooled topology, leading to improved performance. Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gains across several popular datasets, demonstrating its wide applicability and flexibility.
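The "cut-off" analogy between PH filtration and pooling can be made concrete: treating node scores as filtration values, pooling keeps the superlevel set of nodes above a threshold, the same way a filtration sweeps a cut-off across the graph. The following minimal sketch illustrates this idea only; the function name and interface are hypothetical and not the paper's actual method or API.

```python
# Illustrative sketch (not the paper's implementation): cut-off pooling
# where node scores act as filtration values. Nodes whose score clears
# the threshold survive, mirroring a superlevel-set filtration step.

def cutoff_pool(edges, scores, threshold):
    """Keep nodes with score >= threshold and the edges among them.

    edges: list of (u, v) pairs over node ids 0..n-1
    scores: list mapping node id -> filtration value
    Returns (kept_nodes, kept_edges) with surviving nodes relabeled 0..k-1.
    """
    kept = sorted(v for v in range(len(scores)) if scores[v] >= threshold)
    relabel = {v: i for i, v in enumerate(kept)}
    kept_edges = [(relabel[u], relabel[v]) for (u, v) in edges
                  if u in relabel and v in relabel]
    return kept, kept_edges

# A path graph 0-1-2-3 with rising scores: the cut-off at 0.5 keeps the
# high-score end of the filtration and the edge between the survivors.
nodes, pooled_edges = cutoff_pool(
    edges=[(0, 1), (1, 2), (2, 3)],
    scores=[0.1, 0.4, 0.7, 0.9],
    threshold=0.5,
)
print(nodes, pooled_edges)  # [2, 3] [(0, 1)]
```

Sweeping the threshold from high to low reproduces a filtration: the coarsened graph at each step is the superlevel set at that cut-off, which is the structural alignment the abstract refers to.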
Related papers
- PHLP: Sole Persistent Homology for Link Prediction -- Interpretable Feature Extraction [2.8413459430736396]
Link prediction (LP) is a significant research area in graph data.
Although graph neural network (GNN)-based models have achieved high performance in LP, understanding why they perform well is challenging.
We propose a novel method that employs PH for LP (PHLP) focusing on how the presence or absence of target links influences the overall topology.
arXiv Detail & Related papers (2024-04-23T16:54:56Z) - Revealing Decurve Flows for Generalized Graph Propagation [108.80758541147418]
This study addresses the limitations of the traditional analysis of message-passing, central to graph learning, by defining generalized propagation with directed and weighted graphs.
We include a preliminary exploration of learned propagation patterns in datasets, a first in the field.
arXiv Detail & Related papers (2024-02-13T14:13:17Z) - Going beyond persistent homology using persistent homology [5.724311218570011]
We introduce a novel concept of color-separating sets to provide a complete resolution to this important problem.
We propose RePHINE for learning topological features on graphs.
arXiv Detail & Related papers (2023-11-10T16:12:35Z) - Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - Unifying over-smoothing and over-squashing in graph neural networks: A physics informed approach and beyond [45.370565281567984]
Graph Neural Networks (GNNs) have emerged as one of the leading approaches for machine learning on graph-structured data.
However, critical computational challenges such as over-smoothing, over-squashing, and limited expressive power continue to impact the performance of GNNs.
We introduce the Multi-Scaled Heat Kernel based GNN (MHKG) by amalgamating diverse filtering functions' effects on node features.
arXiv Detail & Related papers (2023-09-06T06:22:18Z) - Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous Graph Diffusion Functionals [7.6435511285856865]
Graph neural networks (GNNs) are widely used in domains like social networks and biological systems.
The locality assumption of GNNs hampers their ability to capture long-range dependencies and global patterns in graphs.
We propose a new inductive bias based on variational analysis, drawing inspiration from the Brachistochrone problem.
arXiv Detail & Related papers (2023-07-01T04:44:43Z) - The expressive power of pooling in Graph Neural Networks [6.5268245109828005]
We study how graph pooling affects the expressiveness of Graph Neural Networks (GNNs).
We derive sufficient conditions for a pooling operator to fully preserve the expressive power of the MP layers before it.
We introduce an experimental setup to verify empirically the expressive power of a GNN equipped with pooling layers.
arXiv Detail & Related papers (2023-04-04T07:03:08Z) - Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z) - Learnable Filters for Geometric Scattering Modules [64.03877398967282]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2022-08-15T22:30:07Z) - Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2020-10-06T01:20:27Z) - Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes [144.6048446370369]
Graph convolutional neural networks (GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs (GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-26T10:02:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.