KS-GNNExplainer: Global Model Interpretation Through Instance
Explanations On Histopathology images
- URL: http://arxiv.org/abs/2304.08240v1
- Date: Fri, 14 Apr 2023 16:48:41 GMT
- Title: KS-GNNExplainer: Global Model Interpretation Through Instance
Explanations On Histopathology images
- Authors: Sina Abdous, Reza Abdollahzadeh, Mohammad Hossein Rohban
- Abstract summary: We develop KS-GNNExplainer, the first instance-level graph neural network explainer.
Our experiments on various datasets, based on both quantitative and qualitative measures, demonstrate that the proposed explainer can act as a global pattern extractor.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instance-level graph neural network explainers have proven beneficial for
explaining such networks on histopathology images. However, few methods
provide model-level explanations, i.e., common patterns shared by samples of
the same class. We envision that graph-based histopathological image analysis
can benefit significantly from such explanations. Current model-level
explainers, on the other hand, are based on graph generation methods that are
not applicable in this domain, because their generated graphs have no
corresponding real-world image and the resulting explanations are therefore
not communicable to experts. To follow this vision, we developed
KS-GNNExplainer, the first
instance-level graph neural network explainer that leverages current
instance-level approaches in an effective manner to provide more informative
and reliable explainable outputs, which are crucial for applied AI in the
health domain. Our experiments on various datasets, based on both
quantitative and qualitative measures, demonstrate that the proposed explainer
can act as a global pattern extractor, addressing a fundamental limitation of
current instance-level approaches in this domain.
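To make the general idea concrete (this is an illustrative sketch, not the paper's actual algorithm; all names below are hypothetical), a global pattern can be approximated by aggregating the substructures an instance-level explainer marks as important across samples of one class and keeping what recurs:

```python
from collections import Counter

def global_pattern_from_instances(instance_explanations, min_support=0.7):
    """Aggregate instance-level explanations into a candidate global pattern.

    instance_explanations: list of sets, each holding the labeled edges an
    instance-level explainer marked important for one sample of a class.
    Returns the edges that recur in at least `min_support` of the samples.
    """
    counts = Counter()
    for edges in instance_explanations:
        counts.update(edges)
    n = len(instance_explanations)
    return {edge for edge, c in counts.items() if c / n >= min_support}

# Toy usage: three cell-graph explanations sharing one recurring edge type.
explanations = [
    {("tumor", "tumor"), ("tumor", "stroma")},
    {("tumor", "tumor"), ("stroma", "stroma")},
    {("tumor", "tumor"), ("tumor", "lymphocyte")},
]
print(global_pattern_from_instances(explanations))  # {('tumor', 'tumor')}
```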
Related papers
- Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Exploring Explainability Methods for Graph Neural Networks [0.0]
We demonstrate the applicability of popular explainability approaches on Graph Attention Networks (GATs) for a graph-based super-pixel image classification task.
The results shed fresh light on the notion of explainability in GNNs, particularly GATs.
arXiv Detail & Related papers (2022-11-03T12:50:46Z) - Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, the Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
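For intuition, a generic subgraph information bottleneck objective (USIB is a refinement of this form; the paper's exact formulation differs in its terms) selects an explanatory subgraph $S$ of the input graph $G$ that is maximally informative about the learned representation $Z$ while staying compressed:

```latex
\max_{S \subseteq G} \; I(Z; S) - \beta \, I(G; S)
```

where $\beta > 0$ trades informativeness against compression.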
arXiv Detail & Related papers (2022-05-20T02:50:15Z) - Heterogeneous Graph Neural Networks using Self-supervised Reciprocally
Contrastive Learning [102.9138736545956]
Heterogeneous graph neural networks (HGNNs) are a popular technique for modeling and analyzing heterogeneous graphs.
We develop for the first time a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views guided respectively by node attributes and graph topologies.
In this new approach, we adopt distinct and well-suited attribute and topology fusion mechanisms in the two views, which help mine the relevant information in attributes and topologies separately.
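As a generic illustration of two-view contrastive learning (HGCL's actual fusion mechanisms and loss are specified in the paper; everything below is an assumption for exposition), a symmetric InfoNCE objective between attribute-view and topology-view embeddings of the same nodes looks like this:

```python
import numpy as np

def info_nce(z_attr, z_topo, tau=0.5):
    """Symmetric InfoNCE between two views; row i of each matrix is node i."""
    za = z_attr / np.linalg.norm(z_attr, axis=1, keepdims=True)
    zt = z_topo / np.linalg.norm(z_topo, axis=1, keepdims=True)
    logits = za @ zt.T / tau                           # pairwise similarities
    # Cross-entropy where the other view of the same node is the positive.
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(np.diag(log_p).mean() + np.diag(log_p_t).mean()) / 2

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))  # a slightly perturbed second view
print(info_nce(z1, z2))
```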
arXiv Detail & Related papers (2022-04-30T12:57:02Z) - A Survey on Graph-Based Deep Learning for Computational Histopathology [36.58189530598098]
We have witnessed a rapid expansion of the use of machine learning and deep learning for the analysis of digital pathology and biopsy image patches.
Traditional learning over patch-wise features using convolutional neural networks limits the model's ability to capture global contextual information.
We provide a conceptual grounding of graph-based deep learning and discuss its current success for tumor localization and classification, tumor invasion and staging, image retrieval, and survival prediction.
arXiv Detail & Related papers (2021-07-01T07:50:35Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
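A minimal sketch of what a cross reconstruction loss can look like (the paper's adversarial formulation and label guidance are not reproduced here; the encoders and decoders below are placeholders):

```python
import numpy as np

def cross_reconstruction_loss(x_a, x_b, enc_a, enc_b, dec_a, dec_b):
    """Decode each view from the other's latent code and penalize the error."""
    za, zb = enc_a(x_a), enc_b(x_b)
    recon_b = dec_b(za)   # view A's latent must explain view B
    recon_a = dec_a(zb)   # and vice versa
    return np.mean((recon_a - x_a) ** 2) + np.mean((recon_b - x_b) ** 2)

# Toy usage with identity maps standing in for trained networks.
x_a = np.ones((4, 3))
x_b = np.ones((4, 3)) * 2
ident = lambda x: x
print(cross_reconstruction_loss(x_a, x_b, ident, ident, ident, ident))
```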
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Quantifying Explainers of Graph Neural Networks in Computational
Pathology [13.526389642048947]
We propose a set of novel quantitative metrics based on statistics of class separability to characterize graph explainers.
We employ the proposed metrics to evaluate three types of graph explainers, namely the layer-wise relevance propagation, gradient-based saliency, and graph pruning approaches.
We validate the qualitative and quantitative findings, with expert pathologists, on the BRACS dataset, a large cohort of breast cancer RoIs.
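A crude stand-in for such a separability statistic (the paper defines its own metrics; this ratio only conveys the idea) compares between-class to within-class distances of explanation-derived feature vectors:

```python
import numpy as np

def separability_score(features, labels):
    """Mean between-class distance over mean within-class distance;
    higher means the explainer separates the classes more cleanly."""
    features, labels = np.asarray(features), np.asarray(labels)
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    return dists[~same].mean() / dists[same & off_diag].mean()

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(4, 1, (10, 5))])
labels = np.array([0] * 10 + [1] * 10)
print(separability_score(feats, labels))  # well above 1 for separated classes
```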
arXiv Detail & Related papers (2020-11-25T11:13:01Z) - Abstracting Deep Neural Networks into Concept Graphs for Concept Level
Interpretability [0.39635467316436124]
We attempt to understand the behavior of trained models that perform image processing tasks in the medical domain by building a graphical representation of the concepts they learn.
We show the application of our proposed implementation on two biomedical problems - brain tumor segmentation and fundus image classification.
arXiv Detail & Related papers (2020-08-14T16:34:32Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
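For intuition, DAGNN's decoupling can be sketched as: transform node features once, propagate them over many hops without new parameters, then learn per-hop retention scores; this simplified NumPy rendering is not the authors' implementation:

```python
import numpy as np

def dagnn_forward(adj_norm, x, w_mlp, s_proj, k=10):
    """Decoupled transformation and propagation with adaptive hop weighting.

    adj_norm: (n, n) normalized adjacency with self-loops; x: (n, f) features;
    w_mlp: (f, c) feature transform; s_proj: (c, 1) scoring vector.
    """
    z = x @ w_mlp                                   # transform once
    hops = [z]
    for _ in range(k):
        hops.append(adj_norm @ hops[-1])            # parameter-free propagation
    h = np.stack(hops, axis=1)                      # (n, k + 1, c)
    scores = 1.0 / (1.0 + np.exp(-(h @ s_proj)))    # per-hop retention scores
    return (scores * h).sum(axis=1)                 # adaptive combination
```

Because propagation adds no parameters, the receptive field can grow without the depth-induced overfitting of stacked layers.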
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for
Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.