Robust Ante-hoc Graph Explainer using Bilevel Optimization
- URL: http://arxiv.org/abs/2305.15745v2
- Date: Wed, 5 Jun 2024 00:56:47 GMT
- Title: Robust Ante-hoc Graph Explainer using Bilevel Optimization
- Authors: Kha-Dinh Luong, Mert Kosan, Arlei Lopes Da Silva, Ambuj Singh
- Abstract summary: We propose RAGE, a novel and flexible ante-hoc explainer for graph neural networks.
RAGE can effectively identify molecular substructures that contain the full information needed for prediction.
Our experiments on various molecular classification tasks show that RAGE explanations are better than existing post-hoc and ante-hoc approaches.
- Score: 0.7999703756441758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explaining the decisions made by machine learning models for high-stakes applications is critical for increasing transparency and guiding improvements to these decisions. This is particularly true in the case of models for graphs, where decisions often depend on complex patterns combining rich structural and attribute data. While recent work has focused on designing so-called post-hoc explainers, the broader question of what constitutes a good explanation remains open. One intuitive property is that explanations should be sufficiently informative to reproduce the predictions given the data. In other words, a good explainer can be repurposed as a predictor. Post-hoc explainers do not achieve this goal as their explanations are highly dependent on fixed model parameters (e.g., learned GNN weights). To address this challenge, we propose RAGE (Robust Ante-hoc Graph Explainer), a novel and flexible ante-hoc explainer designed to discover explanations for graph neural networks using bilevel optimization, with a focus on the chemical domain. RAGE can effectively identify molecular substructures that contain the full information needed for prediction while enabling users to rank these explanations in terms of relevance. Our experiments on various molecular classification tasks show that RAGE explanations are better than existing post-hoc and ante-hoc approaches.
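The bilevel structure described in the abstract can be made concrete with a short sketch: an inner loop fits a predictor on the explainer-masked graph, and an outer loop updates the explainer so that the fitted predictor succeeds from the explanation alone. The sketch below is a first-order approximation in PyTorch with PyG-style batches; the module names, the sparsity weight, and the alternating scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of ante-hoc explanation via bilevel optimization (assumptions:
# PyG-style `batch` with .x/.edge_index/.y, and a `predictor` GNN that accepts
# per-edge weights, as e.g. GCNConv does). Not the authors' exact algorithm.
import torch
import torch.nn as nn

class Explainer(nn.Module):
    """Scores each edge from its endpoint features; a sigmoid gives a soft mask."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x, edge_index):
        src, dst = edge_index
        pair = torch.cat([x[src], x[dst]], dim=-1)
        return torch.sigmoid(self.score(pair)).squeeze(-1)  # one weight per edge

def bilevel_step(explainer, predictor, opt_exp, opt_pred, batch, inner_steps=3):
    # Inner problem: fit the predictor on the explainer-masked graph,
    # holding the explainer fixed.
    mask = explainer(batch.x, batch.edge_index).detach()
    for _ in range(inner_steps):
        opt_pred.zero_grad()
        inner_loss = nn.functional.cross_entropy(
            predictor(batch.x, batch.edge_index, edge_weight=mask), batch.y)
        inner_loss.backward()
        opt_pred.step()
    # Outer problem: update the explainer so the fitted predictor succeeds from
    # the explanation alone (first-order approximation: no gradient is
    # propagated through the inner updates).
    opt_exp.zero_grad()
    mask = explainer(batch.x, batch.edge_index)
    outer_loss = nn.functional.cross_entropy(
        predictor(batch.x, batch.edge_index, edge_weight=mask), batch.y)
    outer_loss = outer_loss + 1e-3 * mask.mean()  # sparsity keeps explanations small
    outer_loss.backward()
    opt_exp.step()
    return float(outer_loss)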
Related papers
- From Model Explanation to Data Misinterpretation: Uncovering the Pitfalls of Post Hoc Explainers in Business Research [3.7209396288545338]
We find a growing trend in business research where post hoc explanations are used to draw inferences about the data.
The ultimate goal of this paper is to caution business researchers against translating post hoc explanations of machine learning models into potentially false insights and understanding of data.
arXiv Detail & Related papers (2024-08-30T03:22:35Z)
- Global Human-guided Counterfactual Explanations for Molecular Properties via Reinforcement Learning [49.095065258759895]
We develop a novel global explanation model RLHEX for molecular property prediction.
It aligns the counterfactual explanations with human-defined principles, making the explanations more interpretable and easy for experts to evaluate.
The global explanations produced by RLHEX cover 4.12% more input graphs and reduce the distance between the counterfactual explanation set and the input set by 0.47% on average across three molecular datasets.
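As a rough illustration of the two quantities reported above, a global counterfactual set can be scored by the fraction of inputs it covers within some distance radius and by the mean distance from each input to its nearest counterfactual. The sketch below assumes a user-supplied graph distance `dist` and a coverage radius `theta`; RLHEX's exact definitions may differ.

```python
# Illustrative coverage/distance metrics for a global counterfactual set.
# `dist` (e.g. graph edit distance) and `theta` are assumed, not from the paper.
from typing import Callable, Sequence

def coverage_and_distance(
    inputs: Sequence,
    counterfactuals: Sequence,
    dist: Callable[[object, object], float],
    theta: float,
):
    """Coverage: share of inputs within `theta` of some counterfactual.
    Distance: mean distance from each input to its nearest counterfactual."""
    nearest = [min(dist(g, cf) for cf in counterfactuals) for g in inputs]
    coverage = sum(d <= theta for d in nearest) / len(inputs)
    return coverage, sum(nearest) / len(nearest)
```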
arXiv Detail & Related papers (2024-06-19T22:16:40Z)
- Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z)
- Toward Multiple Specialty Learners for Explaining GNNs via Online Knowledge Distillation [0.17842332554022688]
Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous applications and systems, necessitating explanations of their predictions.
We propose a novel GNN explanation framework named SCALE, which is general and fast for explaining predictions.
In training, a black-box GNN model guides learners based on an online knowledge distillation paradigm.
Specifically, edge masking and random walk with restart procedures are executed to provide structural explanations for graph-level and node-level predictions.
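Random walk with restart, one of the two procedures just mentioned, has a standard iterative form: the relevance vector satisfies r = (1 - c) W r + c e, where W is the column-normalized adjacency and e is a one-hot restart vector at the node being explained. A minimal NumPy sketch, with the restart probability and the dense-matrix setup as illustrative assumptions:

```python
# Minimal random-walk-with-restart (RWR): nodes with high stationary visiting
# probability from the seed node are candidate explanation nodes. The restart
# probability `c` and dense adjacency are assumptions, not SCALE's settings.
import numpy as np

def random_walk_with_restart(adj: np.ndarray, seed: int, c: float = 0.15,
                             tol: float = 1e-6, max_iter: int = 100) -> np.ndarray:
    """adj: (n, n) adjacency matrix; seed: index of the node being explained."""
    n = adj.shape[0]
    # Column-normalize so each column is a probability distribution over neighbors.
    col_sums = adj.sum(axis=0, keepdims=True)
    W = adj / np.maximum(col_sums, 1e-12)
    e = np.zeros(n)
    e[seed] = 1.0                       # restart distribution: always back to seed
    r = e.copy()
    for _ in range(max_iter):
        r_next = (1 - c) * W @ r + c * e
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r                            # r[v]: relevance of node v to the seed
```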
arXiv Detail & Related papers (2022-10-20T08:44:57Z)
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but several key challenges remain unaddressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
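The sequential edge-selection idea can be illustrated without the learned policy: greedily add, at each step, the edge whose inclusion most increases the model's confidence in its original prediction. RC-Explainer replaces this exhaustive greedy search with a trained reinforcement-learning agent; the sketch below, including the budget parameter `k`, only illustrates the search space.

```python
# Greedy stand-in for sequential edge selection (not RC-Explainer's RL policy).
# Assumes `model(x, edge_index)` returns graph-level logits of shape (1, C).
import torch

@torch.no_grad()
def greedy_edge_explanation(model, x, edge_index, k=5):
    """Pick k edges that best preserve the model's original prediction."""
    probs = model(x, edge_index).softmax(dim=-1)   # class probabilities
    target = int(probs.argmax(dim=-1))             # class being explained
    chosen, remaining = [], list(range(edge_index.size(1)))
    for _ in range(k):
        best_e, best_p = None, -1.0
        for e in remaining:
            trial = edge_index[:, chosen + [e]]    # candidate explanation subgraph
            p = model(x, trial).softmax(dim=-1)[0, target].item()
            if p > best_p:
                best_e, best_p = e, p
        chosen.append(best_e)
        remaining.remove(best_e)
    return chosen  # edge indices in the order they were selected
```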
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
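A common proxy for the necessity/sufficiency criterion stated above compares the model's confidence on the full graph, on the explanation subgraph alone, and on the graph with the explanation removed (often called fidelity+/fidelity-). IFEXPLAINER's information-flow formulation is more involved, so treat this as a generic proxy; `expl_mask` is an assumed boolean edge mask.

```python
# Generic necessity/sufficiency proxy for a subgraph explanation; not
# IFEXPLAINER's information-flow measure. Assumes graph-level logits (1, C).
import torch

@torch.no_grad()
def necessity_sufficiency(model, x, edge_index, expl_mask):
    """expl_mask: boolean tensor over edges marking the explanation subgraph."""
    probs = model(x, edge_index).softmax(dim=-1)   # prediction on the full graph
    target = int(probs.argmax(dim=-1))
    full = probs[0, target].item()
    keep = model(x, edge_index[:, expl_mask]).softmax(dim=-1)[0, target].item()
    drop = model(x, edge_index[:, ~expl_mask]).softmax(dim=-1)[0, target].item()
    necessity = full - drop   # large if removing the explanation destroys the prediction
    sufficiency = keep        # close to `full` if the explanation alone suffices
    return necessity, sufficiency
```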
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.