Robustness questions the interpretability of graph neural networks: what to do?
- URL: http://arxiv.org/abs/2505.02566v1
- Date: Mon, 05 May 2025 11:14:56 GMT
- Title: Robustness questions the interpretability of graph neural networks: what to do?
- Authors: Kirill Lukyanov, Georgii Sazonov, Serafim Boyarsky, Ilya Makarov
- Abstract summary: Graph Neural Networks (GNNs) have become a cornerstone in graph-based data analysis. This paper presents a benchmark to systematically analyze the impact of various factors on the interpretability of GNNs. We evaluate six GNN architectures based on GCN, SAGE, GIN, and GAT across five datasets from two distinct domains.
- Score: 0.10713888959520207
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) have become a cornerstone in graph-based data analysis, with applications in diverse domains such as bioinformatics, social networks, and recommendation systems. However, the interplay between model interpretability and robustness remains poorly understood, especially under adversarial scenarios like poisoning and evasion attacks. This paper presents a comprehensive benchmark to systematically analyze the impact of various factors on the interpretability of GNNs, including the influence of robustness-enhancing defense mechanisms. We evaluate six GNN architectures based on GCN, SAGE, GIN, and GAT across five datasets from two distinct domains, employing four interpretability metrics: Fidelity, Stability, Consistency, and Sparsity. Our study examines how defenses against poisoning and evasion attacks, applied before and during model training, affect interpretability and highlights critical trade-offs between robustness and interpretability. The framework will be published as open source. The results reveal significant variations in interpretability depending on the chosen defense methods and model architecture characteristics. By establishing a standardized benchmark, this work provides a foundation for developing GNNs that are both robust to adversarial threats and interpretable, facilitating trust in their deployment in sensitive applications.
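The abstract names four interpretability metrics but does not define them. As an illustrative sketch only, the snippet below shows common textbook-style definitions of two of them, Fidelity(+) and Sparsity; the function names and exact formulas are assumptions and may differ from the paper's definitions.

```python
def fidelity_plus(prob_full: float, prob_masked: float) -> float:
    """Drop in the predicted-class probability when the features the
    explainer marked as important are removed from the input; a larger
    drop means the explanation captures features the model relies on."""
    return prob_full - prob_masked

def sparsity(mask: list, num_features: int) -> float:
    """Fraction of input features the explanation leaves out; higher
    values mean a more compact explanation."""
    return 1.0 - len(mask) / num_features

# Toy usage: model confidence for the predicted class falls from 0.9 to
# 0.3 once the 3 features selected by the explainer (out of 10) are masked.
fid = fidelity_plus(0.9, 0.3)   # ~0.6: the explanation matters to the model
spa = sparsity([4, 7, 9], 10)   # 0.7: 70% of features are left out
```

In practice these quantities are computed from model outputs on the original and masked graphs; libraries such as PyTorch Geometric ship comparable metrics in their explainability modules.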
Related papers
- Pruning Graphs by Adversarial Robustness Evaluation to Strengthen GNN Defenses [1.066048003460524]
Graph Neural Networks (GNNs) have emerged as a dominant paradigm for learning on graph-structured data. In this work, we introduce a pruning framework that leverages adversarial robustness evaluation to explicitly identify and remove detrimental components of the graph. By using robustness scores as guidance, our method selectively prunes edges that are most likely to degrade model reliability, thereby yielding cleaner and more resilient graph representations.
arXiv Detail & Related papers (2025-11-29T20:15:54Z) - Robustness in Text-Attributed Graph Learning: Insights, Trade-offs, and New Defenses [34.0252107920933]
We introduce a unified and comprehensive framework to evaluate robustness in TAG learning. Our framework evaluates classical GNNs, robust GNNs (RGNNs), and GraphLLMs across ten datasets from four domains. Our work establishes a foundation for future research on TAG security and offers practical solutions for robust TAG learning in adversarial environments.
arXiv Detail & Related papers (2025-10-20T05:57:54Z) - Uncertainty-Aware Graph Neural Networks: A Multi-Hop Evidence Fusion Approach [55.43914153271912]
Graph neural networks (GNNs) excel in graph representation learning by integrating graph structure and node features. Existing GNNs fail to account for the uncertainty of class probabilities that vary with the depth of the model, leading to unreliable and risky predictions in real-world scenarios. We propose a novel Evidence Fusing Graph Neural Network (EFGNN for short) to achieve trustworthy prediction, enhance node classification accuracy, and make explicit the risk of wrong predictions.
arXiv Detail & Related papers (2025-06-16T03:59:38Z) - On the Stability of Graph Convolutional Neural Networks: A Probabilistic Perspective [24.98112303106984]
We study how perturbations in the graph topology affect GCNN outputs and propose a novel formulation for analyzing model stability. Unlike prior studies that focus only on worst-case perturbations, our distribution-aware formulation characterizes output perturbations across a broad range of input data.
arXiv Detail & Related papers (2025-06-01T23:17:19Z) - Hierarchical Uncertainty-Aware Graph Neural Network [3.4498722449655066]
This work introduces a novel architecture, the Hierarchical Uncertainty-Aware Graph Neural Network (HU-GNN). It unifies multi-scale representation learning, principled uncertainty estimation, and self-supervised embedding diversity within a single end-to-end framework. Specifically, HU-GNN adaptively forms node clusters and estimates uncertainty at multiple structural scales from individual nodes to higher levels.
arXiv Detail & Related papers (2025-04-28T14:22:18Z) - On the Relationship Between Robustness and Expressivity of Graph Neural Networks [7.161966906570077]
Graph Neural Networks (GNNs) are vulnerable to bit-flip attacks (BFAs). We introduce an analytical framework to study the influence of architectural features, graph properties, and their interaction. We derive theoretical bounds for the number of bit flips required to degrade GNN expressivity on a dataset.
arXiv Detail & Related papers (2025-04-18T16:38:33Z) - Neural Networks Decoded: Targeted and Robust Analysis of Neural Network Decisions via Causal Explanations and Reasoning [9.947555560412397]
We introduce TRACER, a novel method grounded in causal inference theory to estimate the causal dynamics underpinning DNN decisions.
Our approach systematically intervenes on input features to observe how specific changes propagate through the network, affecting internal activations and final outputs.
TRACER further enhances explainability by generating counterfactuals that reveal possible model biases and offer contrastive explanations for misclassifications.
arXiv Detail & Related papers (2024-10-07T20:44:53Z) - Understanding the Robustness of Graph Neural Networks against Adversarial Attacks [14.89001880258583]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial attacks. This vulnerability has spurred a growing focus on designing robust GNNs. We conduct the first large-scale systematic study on the adversarial robustness of GNNs.
arXiv Detail & Related papers (2024-06-20T01:24:18Z) - Uncertainty in Graph Neural Networks: A Survey [47.785948021510535]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications. However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions. This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z) - A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have made rapid developments in the recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.