Domain aware medical image classifier interpretation by counterfactual
impact analysis
- URL: http://arxiv.org/abs/2007.06312v2
- Date: Thu, 1 Oct 2020 16:55:12 GMT
- Title: Domain aware medical image classifier interpretation by counterfactual
impact analysis
- Authors: Dimitrios Lenis, David Major, Maria Wimmer, Astrid Berg, Gert Sluiter,
and Katja Bühler
- Abstract summary: We introduce a neural-network based attribution method, applicable to any trained predictor.
Our solution identifies salient regions of an input image in a single forward-pass by measuring the effect of local image-perturbations on a predictor's score.
- Score: 2.512212190779389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of machine learning methods for computer vision tasks has driven
a surge in computer assisted prediction for medicine and biology. Based on a
data-driven relationship between input image and pathological classification,
these predictors deliver unprecedented accuracy. Yet, the numerous approaches
trying to explain the causality of this learned relationship have fallen short:
time constraints, coarse, diffuse and at times misleading results, caused by
the employment of heuristic techniques like Gaussian noise and blurring, have
hindered their clinical adoption.
In this work, we discuss and overcome these obstacles by introducing a
neural-network based attribution method, applicable to any trained predictor.
Our solution identifies salient regions of an input image in a single
forward-pass by measuring the effect of local image-perturbations on a
predictor's score. We replace heuristic techniques with a strong neighborhood
conditioned inpainting approach, avoiding anatomically implausible, hence
adversarial artifacts. We evaluate on public mammography data and compare
against existing state-of-the-art methods. Furthermore, we exemplify the
approach's generalizability by demonstrating results on chest X-rays. Our
solution shows, both quantitatively and qualitatively, a significant reduction
in localization ambiguity and more clearly conveyed results, without sacrificing
time efficiency.
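The perturb-and-score idea in the abstract can be sketched as an occlusion-style loop: mask a local region, replace it with plausible inpainted content, and record how much the predictor's score drops. Note this is only a minimal sketch of the principle; the paper amortizes it into a single forward pass and uses a trained neighborhood-conditioned inpainting network, whereas `toy_predict` and `toy_inpaint` below are hypothetical stand-ins.

```python
import numpy as np

def counterfactual_saliency(image, predict, inpaint, patch=8, stride=8):
    """Saliency by counterfactual impact: for each local patch, substitute
    an inpainted (plausible) counterfactual and record the drop in the
    predictor's score. Large drops mark salient regions."""
    h, w = image.shape
    saliency = np.zeros_like(image, dtype=float)
    base_score = predict(image)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            mask = np.zeros_like(image, dtype=bool)
            mask[y:y + patch, x:x + patch] = True
            # Counterfactual: masked region replaced by inpainted content.
            counterfactual = np.where(mask, inpaint(image, mask), image)
            # Impact = score drop when the region is replaced by
            # plausible, non-pathological content.
            saliency[mask] = base_score - predict(counterfactual)
    return saliency

# Toy stand-ins (assumptions, not the paper's networks): the "score" is
# the mean intensity, and "inpainting" fills the mask with the median.
def toy_predict(img):
    return float(img.mean())

def toy_inpaint(img, mask):
    return np.full_like(img, np.median(img))

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0            # bright "lesion"
smap = counterfactual_saliency(img, toy_predict, toy_inpaint)
print(smap[10, 10] > smap[28, 28])  # → True: lesion patch has higher impact
```

The anatomically conditioned inpainting is the key difference from heuristic occlusion (Gaussian noise, blurring): filling with implausible content can itself act as an adversarial perturbation and mislead the attribution.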
Related papers
- Advancements in Feature Extraction Recognition of Medical Imaging Systems Through Deep Learning Technique [0.36651088217486427]
A weight-based objective function is proposed to speed up image recognition.
A technique for threshold optimization utilizing a simplex algorithm is presented.
Different object types are found to be mutually independent and compactly represented in image processing.
arXiv Detail & Related papers (2024-05-23T04:46:51Z) - Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method [4.303291247305105]
We improve the supervised contrastive learning method by leveraging both image-level labels and domain-specific augmentations to enhance model robustness.
We evaluate our method on the BreakHis dataset, which consists of breast cancer histopathology images.
This improvement corresponds to 93.63% absolute accuracy, highlighting the effectiveness of our approach in leveraging properties of the data to learn a more appropriate representation space.
arXiv Detail & Related papers (2024-05-06T17:06:11Z) - Adversarial-Robust Transfer Learning for Medical Imaging via Domain
Assimilation [17.46080957271494]
The scarcity of publicly available medical images has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images.
A significant domain discrepancy exists between natural and medical images, which causes AI models to exhibit heightened vulnerability to adversarial attacks.
This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion.
arXiv Detail & Related papers (2024-02-25T06:39:15Z) - GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z) - Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing
Methods [9.152759278163954]
This work presents two novel intra-processing techniques based on fine-tuning and pruning an already-trained neural network.
To the best of our knowledge, this is one of the first efforts studying debiasing methods on chest radiographs.
arXiv Detail & Related papers (2022-07-26T10:18:59Z) - On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models from ImageNet pretraining report a significant increase in performance, generalization and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision-explanation scheme based on cycle-consistent activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z) - Interpreting Medical Image Classifiers by Optimization Based
Counterfactual Impact Analysis [2.512212190779389]
We present a model saliency mapping framework tailored to medical imaging.
We replace heuristic techniques with a strong neighborhood-conditioned inpainting approach, which avoids implausible artifacts.
arXiv Detail & Related papers (2020-04-03T14:59:08Z) - Generalization Bounds and Representation Learning for Estimation of
Potential Outcomes and Causal Effects [61.03579766573421]
We study estimation of individual-level causal effects, such as a single patient's response to alternative medication.
We devise representation learning algorithms that minimize our bound, by regularizing the representation's induced treatment group distance.
We extend these algorithms to simultaneously learn a weighted representation to further reduce treatment group distances.
arXiv Detail & Related papers (2020-01-21T10:16:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.