Human Attention-Guided Explainable Artificial Intelligence for Computer
Vision Models
- URL: http://arxiv.org/abs/2305.03601v1
- Date: Fri, 5 May 2023 15:05:07 GMT
- Title: Human Attention-Guided Explainable Artificial Intelligence for Computer
Vision Models
- Authors: Guoyang Liu, Jindi Zhang, Antoni B. Chan, Janet H. Hsiao
- Abstract summary: We examined whether embedding human attention knowledge into saliency-based explainable AI (XAI) methods could enhance their plausibility and faithfulness.
We first developed new gradient-based XAI methods for object detection models to generate object-specific explanations.
We then developed Human Attention-Guided XAI to learn from human attention how to best combine explanatory information from the models to enhance explanation plausibility.
- Score: 38.50257023156464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We examined whether embedding human attention knowledge into saliency-based
explainable AI (XAI) methods for computer vision models could enhance their
plausibility and faithfulness. We first developed new gradient-based XAI
methods for object detection models to generate object-specific explanations by
extending the current methods for image classification models. Interestingly,
while these gradient-based methods worked well for explaining image
classification models, when used to explain object detection models the
resulting saliency maps generally had lower faithfulness than human attention
maps from humans performing the same task. We then developed Human
Attention-Guided XAI (HAG-XAI) to learn from human attention how best to
combine explanatory information from the models to enhance explanation
plausibility, using trainable activation functions and smoothing kernels to
maximize the XAI saliency maps' similarity to human attention maps. While
HAG-XAI enhanced explanation plausibility at the expense of faithfulness for
image classification models, for object detection models it enhanced
plausibility and faithfulness simultaneously and outperformed existing methods.
The learned functions were model-specific and generalized well to other databases.
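The abstract describes learning trainable activation functions and smoothing kernels that fuse model-derived saliency maps so the result better matches human attention maps. A minimal sketch of that idea is below, assuming a simple per-map power activation, a learnable convolutional smoothing kernel, and negative Pearson correlation as the similarity loss; the paper's exact functional forms and training objective may differ, and `SaliencyCombiner` and `similarity_loss` are illustrative names, not the authors' code.

```python
# Hypothetical HAG-XAI-style combiner: learn to fuse several model saliency
# maps toward human attention maps. Assumptions: power-function activation,
# learnable smoothing kernel, Pearson-correlation similarity objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SaliencyCombiner(nn.Module):
    def __init__(self, n_maps: int, kernel_size: int = 7):
        super().__init__()
        # Trainable activation: one power exponent per input saliency map.
        self.log_gamma = nn.Parameter(torch.zeros(n_maps))
        # Trainable mixing weights across maps.
        self.weights = nn.Parameter(torch.ones(n_maps) / n_maps)
        # Trainable smoothing kernel, initialized as a flat (box) filter.
        self.kernel = nn.Parameter(
            torch.full((1, 1, kernel_size, kernel_size), 1.0 / kernel_size**2)
        )

    def forward(self, maps: torch.Tensor) -> torch.Tensor:
        # maps: (batch, n_maps, H, W), each map assumed normalized to [0, 1].
        gamma = torch.exp(self.log_gamma).view(1, -1, 1, 1)
        activated = maps.clamp(min=1e-6) ** gamma          # nonlinear activation
        w = F.softmax(self.weights, dim=0).view(1, -1, 1, 1)
        fused = (w * activated).sum(dim=1, keepdim=True)   # weighted combination
        pad = self.kernel.shape[-1] // 2
        return F.conv2d(fused, self.kernel, padding=pad)   # learned smoothing


def similarity_loss(pred: torch.Tensor, human: torch.Tensor) -> torch.Tensor:
    # Negative Pearson correlation between predicted and human attention maps.
    p = pred.flatten(1)
    h = human.flatten(1)
    p = p - p.mean(dim=1, keepdim=True)
    h = h - h.mean(dim=1, keepdim=True)
    corr = (p * h).sum(dim=1) / (p.norm(dim=1) * h.norm(dim=1) + 1e-8)
    return -corr.mean()
```

Minimizing `similarity_loss` with a standard optimizer would adapt the activation exponents, mixing weights, and smoothing kernel to a given model, which is consistent with the abstract's finding that the learned functions are model-specific.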
Related papers
- Enhancing Counterfactual Image Generation Using Mahalanobis Distance with Distribution Preferences in Feature Space [7.00851481261778]
In the realm of Artificial Intelligence (AI), the importance of Explainable Artificial Intelligence (XAI) is increasingly recognized.
One notable single-instance XAI approach is counterfactual explanation, which aids users in comprehending a model's decisions.
This paper introduces a novel method for computing feature importance within the feature space of a black-box model.
arXiv Detail & Related papers (2024-05-31T08:26:53Z)
- Automatic Discovery of Visual Circuits [66.99553804855931]
We explore scalable methods for extracting the subgraph of a vision model's computational graph that underlies recognition of a specific visual concept.
We find that our approach extracts circuits that causally affect model output, and that editing these circuits can defend large pretrained models from adversarial attacks.
arXiv Detail & Related papers (2024-04-22T17:00:57Z)
- CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using a convolutional model.
Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns of the model's behavior.
arXiv Detail & Related papers (2024-04-16T08:39:29Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explainable GeoAI: Can saliency maps help interpret artificial intelligence's learning process? An empirical study on natural feature detection [4.52308938611108]
This paper compares popular saliency map generation techniques and their strengths and weaknesses in interpreting GeoAI and deep learning models' reasoning behaviors.
The experiments used two GeoAI-ready datasets to demonstrate the generalizability of the research findings.
arXiv Detail & Related papers (2023-03-16T21:37:29Z)
- TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z)
- A Detailed Study of Interpretability of Deep Neural Network based Top Taggers [3.8541104292281805]
Recent developments in explainable AI (XAI) allow researchers to explore the inner workings of deep neural networks (DNNs).
We explore the interpretability of models designed to identify jets coming from top quark decay in high-energy proton-proton collisions at the Large Hadron Collider (LHC).
Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models.
arXiv Detail & Related papers (2022-10-09T23:02:42Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Towards Visually Explaining Similarity Models [29.704524987493766]
We present a method to generate gradient-based visual attention for image similarity predictors.
By relying solely on the learned feature embedding, we show that our approach can be applied to any kind of CNN-based similarity architecture.
We show that our resulting attention maps serve more than just interpretability; they can be infused into the model learning process itself with new trainable constraints.
arXiv Detail & Related papers (2020-08-13T17:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.