A Novel Interaction-based Methodology Towards Explainable AI with Better
Understanding of Pneumonia Chest X-ray Images
- URL: http://arxiv.org/abs/2104.12672v1
- Date: Mon, 19 Apr 2021 23:02:43 GMT
- Title: A Novel Interaction-based Methodology Towards Explainable AI with Better
Understanding of Pneumonia Chest X-ray Images
- Authors: Shaw-Hwa Lo, Yiqiao Yin
- Abstract summary: This paper proposes an interaction-based methodology -- Influence Score (I-score) -- to screen out the noisy and non-informative variables in the images.
We apply the proposed method to a real-world Pneumonia Chest X-ray image data set and produce state-of-the-art results.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the field of eXplainable AI (XAI), robust "black-box" algorithms such as
Convolutional Neural Networks (CNNs) are known for achieving high prediction
performance. However, explaining and interpreting these algorithms still
requires innovation in understanding the influential and, more importantly,
explainable features that directly or indirectly drive predictive performance.
A number of methods in the existing literature focus on visualization
techniques, but the concepts of explainability and interpretability still lack
rigorous definition. In view of these needs, this paper proposes an
interaction-based methodology, the Influence Score (I-score), to screen out
noisy and non-informative variables in images, thereby providing an environment
of explainable and interpretable features that are directly associated with
feature predictivity. We apply the proposed method to a real-world Pneumonia
Chest X-ray image data set and produce state-of-the-art results. We demonstrate
how to apply the proposed approach to more general big data problems, improving
explainability and interpretability without sacrificing prediction performance.
The contribution of this paper opens a novel angle that moves the community
closer to future XAI pipelines.
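As a rough illustration of the kind of interaction-based variable screening the abstract describes, the sketch below computes an I-score-style statistic over the partition induced by a small set of discretized pixel variables and keeps the highest-scoring pairs. The normalization by n*s^2, the binary discretization, and the toy data are assumptions for illustration only and are not taken from the paper's implementation.

```python
# Minimal sketch of an influence-score (I-score) style screen for a candidate
# subset of discretized pixel variables. Assumed normalization: n * var(y).
import numpy as np

def influence_score(X_subset, y):
    """I-score-style statistic for a small set of discrete explanatory
    variables X_subset (n samples x k variables) against a response y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    y_bar = y.mean()
    s2 = y.var()
    if s2 == 0:
        return 0.0
    # Partition the samples into cells defined by the joint values of the
    # selected variables, then accumulate n_j^2 * (mean_j - mean)^2 per cell.
    _, cell_ids = np.unique(X_subset, axis=0, return_inverse=True)
    score = 0.0
    for cell in np.unique(cell_ids):
        mask = cell_ids == cell
        n_j = mask.sum()
        score += (n_j ** 2) * (y[mask].mean() - y_bar) ** 2
    return score / (n * s2)

# Toy usage: screen every pixel pair of a binarized patch and keep the pair
# with the highest score as a candidate informative (interacting) feature set.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5))      # 5 binarized pixels (hypothetical data)
y = (X[:, 0] ^ X[:, 1]).astype(float)      # response driven by a pure interaction
pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]
scores = {p: influence_score(X[:, p], y) for p in pairs}
print(max(scores, key=scores.get))         # expected to recover the pair (0, 1)
```

The toy response depends only on the interaction between the first two pixels, so a marginal (one-variable-at-a-time) screen would miss it, while the partition-based score above ranks that pair highest; this is the sense in which the methodology is interaction-based.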
Related papers
- Controllable Edge-Type-Specific Interpretation in Multi-Relational Graph Neural Networks for Drug Response Prediction [6.798254568821052]
We propose a novel post-hoc interpretability algorithm for cancer drug response prediction, CETExplainer.
It incorporates a controllable edge-type-specific weighting mechanism to provide fine-grained, biologically meaningful explanations for predictive models.
Empirical analysis on the real-world dataset demonstrates that CETExplainer achieves superior stability and improves explanation quality compared to leading algorithms.
arXiv Detail & Related papers (2024-08-30T09:14:38Z)
- Explainable Deep Learning Framework for Human Activity Recognition [3.9146761527401424]
We propose a model-agnostic framework that enhances interpretability and efficacy of HAR models.
By implementing competitive data augmentation, our framework provides intuitive and accessible explanations of model decisions.
arXiv Detail & Related papers (2024-08-21T11:59:55Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Towards a Shapley Value Graph Framework for Medical peer-influence [0.9449650062296824]
This paper introduces a new framework to look deeper into explanations using graph representation for feature-to-feature interactions.
It aims to improve the interpretability of black-box Machine Learning (ML) models and inform intervention.
arXiv Detail & Related papers (2021-12-29T16:24:50Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)