Topological Pooling on Graphs
- URL: http://arxiv.org/abs/2303.14543v1
- Date: Sat, 25 Mar 2023 19:30:46 GMT
- Title: Topological Pooling on Graphs
- Authors: Yuzhou Chen, Yulia R. Gel
- Abstract summary: Graph neural networks (GNNs) have demonstrated significant success in various graph learning tasks.
We propose a novel topological pooling layer and a witness complex-based topological embedding mechanism.
We show that Wit-TopoPool significantly outperforms all competitors across all datasets.
- Score: 24.584372324701885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have demonstrated significant success in
various graph learning tasks, from graph classification to anomaly detection.
A number of approaches have recently emerged that adopt a graph pooling
operation within GNNs, with the goal of preserving graph attributive and
structural features during graph representation learning. However, most
existing graph pooling operations rely on node-wise neighbor weighting and
embedding, which leads to insufficient encoding of the rich topological
structures and node attributes exhibited by real-world networks. By invoking
the machinery of persistent homology and the concept of landmarks, we propose a
novel topological pooling layer and a witness complex-based topological
embedding mechanism that allow us to systematically integrate hidden
topological information at both local and global levels. Specifically, we
design Wit-TopoPool with new learnable local and global topological
representations, which allow us to simultaneously extract rich discriminative
topological information from graphs. Experiments on 11 diverse benchmark
datasets against 18 baseline models on graph classification tasks indicate that
Wit-TopoPool significantly outperforms all competitors across all datasets.
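The landmarks mentioned in the abstract are the defining ingredient of a witness complex: a small subset of nodes is selected, commonly by greedy maxmin (farthest-point) sampling over graph distances, and the remaining nodes serve as witnesses for the simplices built on those landmarks. The sketch below illustrates that landmark-selection step only, on a toy graph; it is a generic illustration of the technique, not the paper's Wit-TopoPool layer.

```python
import numpy as np

def shortest_path_matrix(adj):
    """All-pairs shortest paths via Floyd-Warshall on an unweighted adjacency matrix."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        # relax all pairs through intermediate node k
        dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])
    return dist

def maxmin_landmarks(dist, n_landmarks, seed=0):
    """Greedy maxmin (farthest-point) landmark selection: each new landmark
    maximizes its distance to the landmarks chosen so far."""
    rng = np.random.default_rng(seed)
    landmarks = [int(rng.integers(dist.shape[0]))]  # random first landmark
    min_dist = dist[landmarks[0]].copy()
    while len(landmarks) < n_landmarks:
        nxt = int(np.argmax(min_dist))              # farthest from current set
        landmarks.append(nxt)
        min_dist = np.minimum(min_dist, dist[nxt])
    return landmarks

# toy graph: a 6-cycle
adj = np.zeros((6, 6))
for i in range(6):
    adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1
dist = shortest_path_matrix(adj)
lm = maxmin_landmarks(dist, 3)
```

Greedy maxmin keeps the landmarks well spread out over the graph, which is what lets the witness complex act as a faithful but much smaller proxy for the full structure.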
Related papers
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- When Witnesses Defend: A Witness Graph Topological Layer for Adversarial Graph Learning [19.566775406771757]
We bridge adversarial graph learning with the emerging tools from computational topology, namely, persistent homology representations of graphs.
We introduce the concept of witness complex to adversarial analysis on graphs, which allows us to focus only on the salient shape characteristics of graphs.
Our experiments across six datasets demonstrate that Witness Graph Topological Layer boosts the robustness of GNNs across a range of perturbations and against a range of adversarial attacks.
arXiv Detail & Related papers (2024-09-21T14:53:32Z)
- Unveiling Global Interactive Patterns across Graphs: Towards Interpretable Graph Neural Networks [31.29616732552006]
Graph Neural Networks (GNNs) have emerged as a prominent framework for graph mining.
This paper proposes a novel intrinsically interpretable scheme for graph classification.
Global Interactive Pattern (GIP) learning introduces learnable global interactive patterns to explicitly interpret decisions.
arXiv Detail & Related papers (2024-07-02T06:31:13Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z)
- Topological Relational Learning on Graphs [2.4692806302088868]
Graph neural networks (GNNs) have emerged as a powerful tool for graph classification and representation learning.
We propose a novel topological relational inference (TRI) which allows for integrating higher-order graph information to GNNs.
We show that the new TRI-GNN outperforms all 14 state-of-the-art baselines on 6 out of 7 graphs and exhibits higher robustness to perturbations.
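Persistence representations like those used here can be illustrated in their simplest, 0-dimensional form: sweep the edges of a graph by weight and record when connected components merge. Below is a hedged union-find sketch of that standard construction (not TRI-GNN's actual inference scheme); the toy graph and weights are invented for illustration.

```python
def zeroth_persistence(n_vertices, weighted_edges):
    """0-dimensional persistence of a graph filtration: every vertex is
    born at time 0; when an edge merges two components, one component
    dies at that edge's weight. Surviving components get death = inf."""
    parent = list(range(n_vertices))

    def find(x):
        # find the root representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    for w, u, v in sorted(weighted_edges):   # process edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            pairs.append((0.0, w))           # one component dies at weight w
    survivors = len({find(x) for x in range(n_vertices)})
    pairs.extend([(0.0, float("inf"))] * survivors)
    return pairs

# path 0-1-2 with edge weights 0.5 and 0.8, plus an isolated vertex 3
diagram = zeroth_persistence(4, [(0.5, 0, 1), (0.8, 1, 2)])
```

Each (birth, death) pair is one point of the persistence diagram; topology-aware GNN layers typically vectorize such diagrams (e.g. as persistence images or landscapes) before combining them with node embeddings.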
arXiv Detail & Related papers (2021-10-29T04:03:27Z)
- Hierarchical Graph Capsule Network [78.4325268572233]
We propose hierarchical graph capsule network (HGCN) that can jointly learn node embeddings and extract graph hierarchies.
To learn the hierarchical representation, HGCN characterizes the part-whole relationship between lower-level capsules (part) and higher-level capsules (whole).
arXiv Detail & Related papers (2020-12-16T04:13:26Z)
- CommPOOL: An Interpretable Graph Pooling Framework for Hierarchical Graph Representation Learning [74.90535111881358]
We propose a new interpretable graph pooling framework - CommPOOL.
It can capture and preserve the hierarchical community structure of graphs in the graph representation learning process.
CommPOOL is a general and flexible framework for hierarchical graph representation learning.
arXiv Detail & Related papers (2020-12-10T21:14:18Z)
- Graph Neural Networks Including Sparse Interpretability [0.0]
We present a model-agnostic framework for interpreting important graph structure and node features.
Our GISST models achieve superior node feature and edge explanation precision in synthetic datasets.
arXiv Detail & Related papers (2020-06-30T21:35:55Z)
- Graph Clustering with Graph Neural Networks [5.305362965553278]
Graph Neural Networks (GNNs) have achieved state-of-the-art results on many graph analysis tasks.
Unsupervised problems on graphs, such as graph clustering, have proved more resistant to advances in GNNs.
We introduce Deep Modularity Networks (DMoN), an unsupervised pooling method inspired by the modularity measure of clustering quality.
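The modularity measure DMoN relaxes can be written as Q = Tr(CᵀBC)/2m for a cluster-assignment matrix C, where B = A − ddᵀ/2m is the modularity matrix. The sketch below evaluates just this objective on a hand-built assignment; it illustrates the measure, not the DMoN layer itself, which learns C end-to-end and adds a collapse regularizer.

```python
import numpy as np

def soft_modularity(adj, C):
    """Modularity Q = Tr(C^T B C) / 2m of a (possibly soft) cluster
    assignment C (n_nodes x n_clusters), with B = A - d d^T / 2m."""
    d = adj.sum(axis=1)                  # node degrees
    two_m = d.sum()                      # 2 * number of edges
    B = adj - np.outer(d, d) / two_m     # modularity matrix
    return np.trace(C.T @ B @ C) / two_m

# two triangles joined by a single bridge edge
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1

# hard assignment of each triangle to its own cluster
C = np.zeros((6, 2))
C[:3, 0] = 1
C[3:, 1] = 1
q = soft_modularity(adj, C)
```

On this toy graph the planted assignment scores Q = 5/14 ≈ 0.357, a clearly positive modularity, which is what a DMoN-style pooling layer would be driven toward during training.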
arXiv Detail & Related papers (2020-06-30T15:30:49Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.