TopInG: Topologically Interpretable Graph Learning via Persistent Rationale Filtration
- URL: http://arxiv.org/abs/2510.05102v1
- Date: Mon, 06 Oct 2025 17:59:44 GMT
- Title: TopInG: Topologically Interpretable Graph Learning via Persistent Rationale Filtration
- Authors: Cheng Xin, Fan Xu, Xin Ding, Jie Gao, Jiaxin Ding
- Abstract summary: We propose TopInG: Topologically Interpretable Graph Learning, a novel framework to identify persistent rationale subgraphs. TopInG employs a rationale filtration learning approach to model an autoregressive generation process of rationale subgraphs. Our approach improves upon state-of-the-art methods on both predictive accuracy and interpretation quality.
- Score: 10.830399323047265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have shown remarkable success across various scientific fields, yet their adoption in critical decision-making is often hindered by a lack of interpretability. Recently, intrinsically interpretable GNNs have been studied to provide insights into model predictions by identifying rationale substructures in graphs. However, existing methods face challenges when the underlying rationale subgraphs are complex and varied. In this work, we propose TopInG: Topologically Interpretable Graph Learning, a novel topological framework that leverages persistent homology to identify persistent rationale subgraphs. TopInG employs a rationale filtration learning approach to model an autoregressive generation process of rationale subgraphs, and introduces a self-adjusted topological constraint, termed topological discrepancy, to enforce a persistent topological distinction between rationale subgraphs and irrelevant counterparts. We provide theoretical guarantees that our loss function is uniquely optimized by the ground truth under specific conditions. Extensive experiments demonstrate TopInG's effectiveness in tackling key challenges, such as handling variform rationale subgraphs, balancing predictive performance with interpretability, and mitigating spurious correlations. Results show that our approach improves upon state-of-the-art methods on both predictive accuracy and interpretation quality.
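The abstract's two core ingredients can be illustrated with a toy sketch. The function names, the top-k selection, and the hinge form of the loss below are my own simplifications for illustration, not the paper's actual formulation: edges receive learned filtration values, the rationale subgraph is read off from the top of the filtration order (standing in for the autoregressive generation process), and a "topological discrepancy" margin separates rationale edges from irrelevant ones.

```python
import numpy as np

def rationale_filtration(edge_scores, k):
    """Order edges by learned filtration value (descending) and keep the
    top-k as the rationale subgraph -- an assumed simplification of the
    paper's autoregressive rationale generation."""
    order = np.argsort(-edge_scores)
    return order[:k], order[k:]

def topological_discrepancy(edge_scores, rationale_idx, rest_idx, margin=0.5):
    """Hinge-style penalty enforcing a persistent gap between the weakest
    rationale edge and the strongest irrelevant edge (assumed form of the
    self-adjusted topological constraint)."""
    gap = edge_scores[rationale_idx].min() - edge_scores[rest_idx].max()
    return max(0.0, margin - gap)

# Toy usage: 6 edges with learned scores, keep 3 as the rationale.
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.05])
rat, rest = rationale_filtration(scores, k=3)
loss = topological_discrepancy(scores, rat, rest)
```

In a real model the edge scores would come from a trained GNN and the loss would be combined with a prediction objective; here the gap between the rationale edges (0.7 and above) and the rest (0.2 and below) already meets the margin, so the penalty vanishes.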
Related papers
- The Shape of Reasoning: Topological Analysis of Reasoning Traces in Large Language Models [2.846561253333858]
We introduce a topological data analysis framework that captures the geometry of reasoning traces and enables label-efficient assessment. We show that a compact, stable set of topological features reliably indicates trace quality, offering a practical signal for future reinforcement learning algorithms.
arXiv Detail & Related papers (2025-10-23T15:43:43Z)
- Two Birds with One Stone: Enhancing Uncertainty Quantification and Interpretability with Graph Functional Neural Process [27.760002432327962]
Graph neural networks (GNNs) are powerful tools on graph data. However, their predictions are mis-calibrated and lack interpretability. We propose a new uncertainty-aware and interpretable graph classification model.
arXiv Detail & Related papers (2025-08-23T17:48:05Z)
- Out-of-Distribution Detection on Graphs: A Survey [58.47395497985277]
Graph out-of-distribution (GOOD) detection focuses on identifying graph data that deviates from the distribution seen during training. We categorize existing methods into four types: enhancement-based, reconstruction-based, information propagation-based, and classification-based approaches. We discuss practical applications and theoretical foundations, highlighting the unique challenges posed by graph data.
arXiv Detail & Related papers (2025-02-12T04:07:12Z)
- Perturbation Ontology based Graph Attention Networks [26.95077612390953]
Ontology-based Graph Attention Networks (POGAT) is a novel methodology that combines ontology subgraphs with an advanced self-supervised learning paradigm to achieve a deep contextual understanding. POGAT significantly outperforms state-of-the-art baselines, achieving a groundbreaking improvement of up to 10.78% in F1-score for the critical task of link prediction and 12.01% in Micro-F1 for the critical task of node classification.
arXiv Detail & Related papers (2024-11-27T17:12:14Z)
- Topograph: An efficient Graph-Based Framework for Strictly Topology Preserving Image Segmentation [78.54656076915565]
Topological correctness plays a critical role in many image segmentation tasks. Most networks are trained using pixel-wise loss functions, such as Dice, neglecting topological accuracy. We propose a novel, graph-based framework for topologically accurate image segmentation.
arXiv Detail & Related papers (2024-11-05T16:20:14Z)
- xAI-Drop: Don't Use What You Cannot Explain [23.33477769275026]
Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data.
GNNs face challenges such as lack of generalization and poor interpretability.
We introduce xAI-Drop, a novel topological-level dropping regularizer.
arXiv Detail & Related papers (2024-07-29T14:53:45Z)
- Fine-grained Graph Rationalization [51.293401030058085]
We propose fine-grained graph rationalization (FIG) for graph machine learning. Our idea is driven by the self-attention mechanism, which provides rich interactions between input nodes. Our experiments involve 7 real-world datasets, and the proposed FIG shows significant performance advantages compared to 13 baseline methods.
arXiv Detail & Related papers (2023-12-13T02:56:26Z)
- Weisfeiler and Lehman Go Paths: Learning Topological Features via Path Complexes [4.23480641508611]
Graph Neural Networks (GNNs) are theoretically bounded by the 1-Weisfeiler-Lehman test.
Our study presents a novel perspective by focusing on simple paths within graphs during the topological message-passing process.
arXiv Detail & Related papers (2023-08-13T19:45:20Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural network (HGNN) is a very popular technique for the modeling and analysis of heterogeneous graphs.
We develop for the first time a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views on respective guidance of node attributes and graph topologies.
In this new approach, we adopt distinct but most suitable attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information in attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input graph's features that is most informative for the prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.