MetaGMT: Improving Actionable Interpretability of Graph Multilinear Networks via Meta-Learning Filtration
- URL: http://arxiv.org/abs/2505.19445v1
- Date: Mon, 26 May 2025 03:07:58 GMT
- Title: MetaGMT: Improving Actionable Interpretability of Graph Multilinear Networks via Meta-Learning Filtration
- Authors: Rishabh Bhattacharya, Hari Shankar, Vaishnavi Shivkumar, Ponnurangam Kumaraguru
- Abstract summary: We present MetaGMT, a meta-learning framework that enhances explanation fidelity through a novel bi-level optimization approach. We demonstrate that MetaGMT significantly improves both explanation quality (AUC-ROC, Precision@K) and robustness to spurious patterns. Our work contributes to building more trustworthy and actionable GNN systems for real-world applications.
- Score: 6.102559098873098
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The growing adoption of Graph Neural Networks (GNNs) in high-stakes domains like healthcare and finance demands reliable explanations of their decision-making processes. While inherently interpretable GNN architectures like Graph Multi-linear Networks (GMT) have emerged, they remain vulnerable to generating explanations based on spurious correlations, potentially undermining trust in critical applications. We present MetaGMT, a meta-learning framework that enhances explanation fidelity through a novel bi-level optimization approach. We demonstrate that MetaGMT significantly improves both explanation quality (AUC-ROC, Precision@K) and robustness to spurious patterns across the BA-2Motifs, MUTAG, and SP-Motif benchmarks. Our approach maintains competitive classification accuracy while producing more faithful explanations (with up to an 8% increase in Explanation ROC on SP-Motif 0.5) compared to baseline methods. These advancements in interpretability could enable safer deployment of GNNs in sensitive domains by (1) facilitating model debugging through more reliable explanations, (2) supporting targeted retraining when biases are identified, and (3) enabling meaningful human oversight. By addressing the critical challenge of explanation reliability, our work contributes to building more trustworthy and actionable GNN systems for real-world applications.
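To make the bi-level idea concrete, the sketch below shows one hypothetical way a meta-learning filtration loop could be organized in PyTorch. It is a simplified, first-order stand-in for the paper's bi-level optimization, not the authors' implementation: the module names (`explainer`, `classifier`), the support/query split, the feature-level masking, and the sparsity weight are all illustrative assumptions.

```python
# Hypothetical sketch: alternating (first-order) approximation of a bi-level
# update for an interpretable model. The inner step adapts an explainer mask
# on a support batch; the outer step scores the masked predictions on a
# held-out query batch, so masks that only fit spurious support patterns
# are penalized. Toy MLPs on random features stand in for GMT on graphs.
import torch
import torch.nn as nn

torch.manual_seed(0)

explainer = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
classifier = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

inner_opt = torch.optim.SGD(explainer.parameters(), lr=1e-2)    # inner loop
outer_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)  # outer loop
loss_fn = nn.CrossEntropyLoss()

def masked_logits(x):
    """Score each input with the explainer, mask it, then classify."""
    mask = torch.sigmoid(explainer(x))      # importance scores in [0, 1]
    return classifier(x * mask), mask

for step in range(200):
    # Random stand-ins for support/query batches of a binary graph task.
    x_s, y_s = torch.randn(32, 16), torch.randint(0, 2, (32,))
    x_q, y_q = torch.randn(32, 16), torch.randint(0, 2, (32,))

    # Inner step: adapt the explainer so the masked input still classifies
    # well, with a small sparsity penalty to keep explanations compact.
    logits_s, mask_s = masked_logits(x_s)
    inner_loss = loss_fn(logits_s, y_s) + 0.01 * mask_s.mean()
    inner_opt.zero_grad()
    inner_loss.backward()
    inner_opt.step()

    # Outer step: update the classifier against the adapted explainer on the
    # query batch, filtering out explanation patterns that do not transfer.
    logits_q, _ = masked_logits(x_q)
    outer_loss = loss_fn(logits_q, y_q)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

A faithful reproduction would follow the paper's actual bi-level formulation on graph data and GMT's explanation mechanism rather than this toy feature-masking stand-in.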
Related papers
- Do We Really Need GNNs with Explicit Structural Modeling? MLPs Suffice for Language Model Representations [50.45261187796993]
Graph Neural Networks (GNNs) fail to fully utilize structural information, whereas Multi-Layer Perceptrons (MLPs) exhibit a surprising ability in structure-aware tasks. This paper introduces a comprehensive probing framework from an information-theoretic perspective.
arXiv Detail & Related papers (2025-06-26T18:10:28Z)
- Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z)
- KARE-RAG: Knowledge-Aware Refinement and Enhancement for RAG [63.82127103851471]
Retrieval-Augmented Generation (RAG) enables large language models to access broader knowledge sources. We demonstrate that enhancing generative models' capacity to process noisy content is equally critical for robust performance. We present KARE-RAG, which improves knowledge utilization through three key innovations.
arXiv Detail & Related papers (2025-06-03T06:31:17Z)
- How to Make LLMs Strong Node Classifiers? [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs). We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- Massive Activations in Graph Neural Networks: Decoding Attention for Domain-Dependent Interpretability [0.9499648210774584]
We show the emergence of Massive Activations (MAs) within attention layers in edge-featured Graph Neural Networks (GNNs). Our study assesses various edge-featured attention-based GNN models using benchmark datasets, including ZINC, TOX21, and PROTEINS.
arXiv Detail & Related papers (2024-09-05T12:19:07Z)
- xAI-Drop: Don't Use What You Cannot Explain [23.33477769275026]
Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data.
GNNs face challenges such as lack of generalization and poor interpretability.
We introduce xAI-Drop, a novel topological-level dropping regularizer.
arXiv Detail & Related papers (2024-07-29T14:53:45Z)
- Kolmogorov-Arnold Graph Neural Networks [2.4005219869876453]
Graph neural networks (GNNs) excel in learning from network-like data but often lack interpretability.
We propose the Graph Kolmogorov-Arnold Network (GKAN) to enhance both accuracy and interpretability.
arXiv Detail & Related papers (2024-06-26T13:54:59Z)
- Uncertainty in Graph Neural Networks: A Survey [47.785948021510535]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications. However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions. This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network [49.91919718254597]
Large Language Models (LLMs) exhibit strong In-Context Learning capabilities when prompts with demonstrations are used.
Prompt-based fine-tuning proves to be an effective fine-tuning method in low-data scenarios, but high demands on computing resources limit its practicality.
GNNavi employs a Graph Neural Network layer to precisely guide the aggregation and distribution of information flow during the processing of prompts.
arXiv Detail & Related papers (2024-02-18T21:13:05Z)
- Quantifying the Optimization and Generalization Advantages of Graph Neural Networks Over Multilayer Perceptrons [50.33260238739837]
Graph neural networks (GNNs) have demonstrated remarkable capabilities in learning from graph-structured data. There remains a lack of analysis comparing GNNs and MLPs from an optimization and generalization perspective.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Interpreting GNN-based IDS Detections Using Provenance Graph Structural Features [15.256262257064982]
We introduce PROVEXPLAINER, a framework offering instance-level security-aware explanations using an interpretable surrogate model. On malware and APT datasets, PROVEXPLAINER achieves up to 29%/27%/25% higher fidelity+, precision and recall, and 12% lower fidelity- respectively.
arXiv Detail & Related papers (2023-06-01T17:36:24Z)
- Task-Agnostic Graph Neural Network Evaluation via Adversarial Collaboration [11.709808788756966]
GraphAC is a principled, task-agnostic, and stable framework for evaluating Graph Neural Network (GNN) research for molecular representation learning.
We introduce a novel objective function, the Competitive Barlow Twins, which allows two GNNs to jointly update themselves through direct competition against each other.
arXiv Detail & Related papers (2023-01-27T03:33:11Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Interpretable Learning-to-Rank with Generalized Additive Models [78.42800966500374]
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models.
We lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks.
arXiv Detail & Related papers (2020-05-06T01:51:30Z)