Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows
- URL: http://arxiv.org/abs/2112.09895v1
- Date: Sat, 18 Dec 2021 10:19:01 GMT
- Title: Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows
- Authors: Junchi Yu, Tingyang Xu, Ran He
- Abstract summary: Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
- Score: 67.23405590815602
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As Graph Neural Networks (GNNs) are widely adopted in digital pathology, there is increasing attention to developing explanation models (explainers) of GNNs for improved transparency in clinical decisions.
Existing explainers discover an explanatory subgraph relevant to the prediction.
However, such a subgraph is insufficient to reveal all the critical biological substructures behind the prediction, because the prediction remains unchanged after that subgraph is removed.
Hence, an explanatory subgraph should be not only necessary for the prediction, but also sufficient to uncover the most predictive regions.
Such an explanation requires measuring the information transferred from different input subgraphs to the predictive output, which we define as information flow.
In this work, we address these key challenges and propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
To evaluate the information flow within the GNN's prediction, we first propose a novel notion of predictiveness, named $f$-information, which is directional and incorporates the realistic capacity of the GNN model.
Based on it, IFEXPLAINER generates the explanatory subgraph with maximal information flow to the prediction.
Meanwhile, it minimizes the information flow from the input to the predictive result after the explanation is removed.
Thus, the produced explanation is necessarily important to the prediction and sufficient to reveal the most crucial substructures.
We evaluate IFEXPLAINER by interpreting GNN predictions on breast cancer subtyping.
Experimental results on the BRACS dataset show the superior performance of the proposed method.
Related papers
- Towards Few-shot Self-explaining Graph Neural Networks [16.085176689122036]
We propose a novel framework that generates explanations to support predictions in few-shot settings.
MSE-GNN adopts a two-stage self-explaining structure, consisting of an explainer and a predictor.
We show that MSE-GNN can achieve superior performance on prediction tasks while generating high-quality explanations.
arXiv Detail & Related papers (2024-08-14T07:31:11Z)
- Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv Detail & Related papers (2024-02-07T09:57:39Z)
- Towards Modeling Uncertainties of Self-explaining Neural Networks via Conformal Prediction [34.87646720253128]
We propose a novel uncertainty modeling framework for self-explaining neural networks.
We show it provides strong distribution-free uncertainty modeling performance for the generated explanations.
It also excels in producing efficient and effective prediction sets for the final predictions.
arXiv Detail & Related papers (2024-01-03T05:51:49Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- A Survey of Explainable Graph Neural Networks: Taxonomy and Evaluation Metrics [8.795591344648294]
We focus on explainable graph neural networks and categorize them based on the explainable methods they use.
We provide the common performance metrics for GNN explanations and point out several future research directions.
arXiv Detail & Related papers (2022-07-26T01:45:54Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while the explanation for unsupervised graph-level representation learning is still unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.