RES: A Robust Framework for Guiding Visual Explanation
- URL: http://arxiv.org/abs/2206.13413v1
- Date: Mon, 27 Jun 2022 16:06:27 GMT
- Title: RES: A Robust Framework for Guiding Visual Explanation
- Authors: Yuyang Gao, Tong Steven Sun, Guangji Bai, Siyi Gu, Sungsoo Ray Hong,
Liang Zhao
- Abstract summary: We propose a framework for guiding visual explanation by developing a novel objective that handles inaccurate boundaries, incomplete regions, and inconsistent distributions of human annotations.
Experiments on two real-world image datasets demonstrate the effectiveness of the proposed framework in enhancing both the reasonability of the explanations and the performance of the backbone models.
- Score: 8.835733039270364
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the fast progress of explanation techniques in modern Deep Neural
Networks (DNNs) where the main focus is handling "how to generate the
explanations", advanced research questions that examine the quality of the
explanation itself (e.g., "whether the explanations are accurate") and improve
the explanation quality (e.g., "how to adjust the model to generate more
accurate explanations when explanations are inaccurate") are still relatively
under-explored. To guide the model toward better explanations, techniques in
explanation supervision - which add supervision signals on the model
explanation - have started to show promising effects on improving both the
generalizability as well as the intrinsic interpretability of Deep Neural Networks.
However, the research on supervising explanations, especially in vision-based
applications represented through saliency maps, is in its early stage due to
several inherent challenges: 1) inaccuracy of the human explanation annotation
boundary, 2) incompleteness of the human explanation annotation region, and 3)
inconsistency of the data distribution between human annotation and model
explanation maps. To address the challenges, we propose a generic RES framework
for guiding visual explanation by developing a novel objective that handles
inaccurate boundary, incomplete region, and inconsistent distribution of human
annotations, with a theoretical justification on model generalizability.
Extensive experiments on two real-world image datasets demonstrate the
effectiveness of the proposed framework in enhancing both the reasonability of
the explanations and the performance of the backbone DNN models.
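For intuition only, the sketch below shows the generic shape of explanation supervision on saliency maps: a task loss on the backbone's prediction plus a term that aligns the model's saliency with a human annotation mask. The tensor shapes and the down-weighting of unannotated pixels are illustrative assumptions; the actual RES objective handles inaccurate boundaries, incomplete regions, and distribution inconsistency in a more principled way than this naive alignment.

```python
# Minimal sketch of generic explanation supervision (not the exact RES objective).
# Assumptions (illustrative only): the backbone exposes a saliency map of shape
# (B, H, W) with values in [0, 1], and `annot` is a binary human annotation mask
# of the same shape (1 = marked as evidence, 0 = unmarked).
import torch
import torch.nn.functional as F


def explanation_supervised_loss(logits, labels, saliency, annot, lam=0.5):
    """Task loss + naive saliency-alignment loss."""
    # Standard classification loss on the backbone's prediction.
    task_loss = F.cross_entropy(logits, labels)

    # Naive alignment: pull the saliency map toward the annotated region.
    # Unmarked pixels are down-weighted as a crude stand-in for the fact that
    # human annotations can be incomplete; RES instead uses a dedicated robust
    # objective for boundary inaccuracy, incompleteness, and distribution mismatch.
    per_pixel = F.binary_cross_entropy(saliency, annot, reduction="none")
    weight = annot + 0.1 * (1.0 - annot)
    exp_loss = (per_pixel * weight).mean()

    return task_loss + lam * exp_loss


# Example usage with random tensors (batch of 4, 10 classes, 7x7 saliency maps):
if __name__ == "__main__":
    logits = torch.randn(4, 10)
    labels = torch.randint(0, 10, (4,))
    saliency = torch.rand(4, 7, 7)
    annot = (torch.rand(4, 7, 7) > 0.7).float()
    print(explanation_supervised_loss(logits, labels, saliency, annot))
```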
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z)
- Generative Explanations for Graph Neural Network: Methods and Evaluations [16.67839967139831]
Graph Neural Networks (GNNs) achieve state-of-the-art performance in various graph-related tasks.
The black-box nature of GNNs limits their interpretability and trustworthiness.
Numerous explainability methods have been proposed to uncover the decision-making logic of GNNs.
arXiv Detail & Related papers (2023-11-09T22:07:15Z)
- D4Explainer: In-Distribution GNN Explanations via Discrete Denoising Diffusion [12.548966346327349]
Explanations of Graph Neural Networks (GNNs) play a vital role in model auditing and ensuring trustworthy graph learning.
D4Explainer is a novel approach that provides in-distribution GNN explanations for both counterfactual and model-level explanation scenarios.
It is the first unified framework that combines both counterfactual and model-level explanations.
arXiv Detail & Related papers (2023-10-30T07:41:42Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
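For orientation only, recall the standard Information Bottleneck objective; a USIB-style application to explanatory subgraphs of an unsupervised graph representation might take the hedged form below, where the subgraph G_s, the representation h = f(G), and the trade-off weight beta are assumptions for illustration, not the paper's exact formulation.

```latex
% Standard Information Bottleneck: compress X into Z while keeping Z predictive of Y.
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)

% Hedged USIB-style reading: choose an explanatory subgraph G_s that is maximally
% informative about the learned (unsupervised) representation h = f(G) while
% remaining a compressed part of the input graph G.
\max_{G_s \subseteq G} \; I(G_s; h) - \beta \, I(G_s; G)
```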
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Explaining, Evaluating and Enhancing Neural Networks' Learned Representations [2.1485350418225244]
We show how explainability can be an aid, rather than an obstacle, towards better and more efficient representations.
We employ such attributions to define two novel scores for evaluating the informativeness and the disentanglement of latent embeddings.
We show that adopting our proposed scores as constraints during the training of a representation learning task improves the downstream performance of the model.
arXiv Detail & Related papers (2022-02-18T19:00:01Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
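For the contrast drawn in the entry above, the two explainer families are usually formalized as follows; these are the textbook definitions, not necessarily the paper's exact ground-truth notions.

```latex
% Shapley attribution of feature i under a set (value) function v on features N:
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
  \bigl( v(S \cup \{i\}) - v(S) \bigr)

% Minimal sufficient subset (sufficient-reason style): a smallest S \subseteq N
% such that fixing the features in S to their observed values already forces the
% model's prediction, i.e. f(x_S, x'_{N \setminus S}) = f(x) for every completion x'.
```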
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.