Interpretable Mammographic Image Classification using Case-Based
Reasoning and Deep Learning
- URL: http://arxiv.org/abs/2107.05605v1
- Date: Mon, 12 Jul 2021 17:42:09 GMT
- Title: Interpretable Mammographic Image Classification using Case-Based
Reasoning and Deep Learning
- Authors: Alina Jade Barnett, Fides Regina Schwartz, Chaofan Tao, Chaofan Chen,
Yinhao Ren, Joseph Y. Lo, Cynthia Rudin
- Abstract summary: We present a novel interpretable neural network algorithm that uses case-based reasoning for mammography.
Our network presents both a prediction of malignancy and an explanation of that prediction using known medical features.
- Score: 20.665935997959025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When we deploy machine learning models in high-stakes medical settings, we
must ensure these models make accurate predictions that are consistent with
known medical science. Inherently interpretable networks address this need by
explaining the rationale behind each decision while maintaining equal or higher
accuracy compared to black-box models. In this work, we present a novel
interpretable neural network algorithm that uses case-based reasoning for
mammography. Designed to aid a radiologist in their decisions, our network
presents both a prediction of malignancy and an explanation of that prediction
using known medical features. In order to yield helpful explanations, the
network is designed to mimic the reasoning processes of a radiologist: our
network first detects the clinically relevant semantic features of each image
by comparing each new image with a learned set of prototypical image parts from
the training images, then uses those clinical features to predict malignancy.
Compared to other methods, our model detects clinical features (mass margins)
with equal or higher accuracy, provides a more detailed explanation of its
prediction, and is better able to differentiate the classification-relevant
parts of the image.
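As a concrete illustration of this pipeline, the sketch below shows a prototype-based (ProtoPNet-style) classifier in PyTorch. It is a minimal sketch, not the authors' code: the module names, the prototype similarity score, and the mapping from prototype activations to mass-margin features and then to malignancy are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CaseBasedClassifier(nn.Module):
    """Illustrative prototype-based classifier, not the paper's implementation."""
    def __init__(self, backbone, n_prototypes=15, proto_dim=128, n_margin_types=3):
        super().__init__()
        self.features = backbone  # CNN assumed to output proto_dim channels
        # Learned prototypical image parts, compared against every patch.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, proto_dim, 1, 1))
        # Prototype similarities -> clinical features (e.g. mass margin types)
        # -> malignancy score, mirroring the two-stage reasoning described above.
        self.margin_head = nn.Linear(n_prototypes, n_margin_types)
        self.malignancy_head = nn.Linear(n_margin_types, 1)

    def forward(self, x):
        fmap = self.features(x)                        # (B, proto_dim, H, W)
        # Squared L2 distance between every spatial patch and every prototype.
        dists = (
            (fmap ** 2).sum(1, keepdim=True)
            - 2 * F.conv2d(fmap, self.prototypes)
            + (self.prototypes ** 2).sum((1, 2, 3)).view(1, -1, 1, 1)
        )                                              # (B, P, H, W)
        # Similarity of the best-matching patch to each prototype.
        min_dists = dists.flatten(2).min(-1).values    # (B, P)
        similarity = torch.log((min_dists + 1) / (min_dists + 1e-4))
        margins = self.margin_head(similarity)         # clinical-feature logits
        malignancy = self.malignancy_head(torch.softmax(margins, -1))
        return malignancy, margins, similarity
```

At inference time, the training patch closest to each strongly activated prototype can be displayed alongside the prediction, giving the radiologist the "case" that supports each detected feature.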
Related papers
- Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
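A minimal sketch of the concept-bottleneck idea, assuming a CLIP-style image encoder whose features are scored against frozen text embeddings of the clinical concepts; all names and shapes are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ConceptBottleneckClassifier(nn.Module):
    """Illustrative bottleneck head: image -> concept scores -> label."""
    def __init__(self, image_encoder, concept_embeddings, n_classes):
        super().__init__()
        self.encoder = image_encoder  # e.g. a CLIP-style image tower
        # Text embeddings of clinical concepts (e.g. queried from an LLM).
        self.register_buffer("concepts", concept_embeddings)  # (K, D), unit norm
        self.classifier = nn.Linear(concept_embeddings.shape[0], n_classes)

    def forward(self, images):
        z = self.encoder(images)                   # (B, D)
        z = z / z.norm(dim=-1, keepdim=True)
        concept_scores = z @ self.concepts.T       # (B, K) interpretable layer
        return self.classifier(concept_scores), concept_scores
```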
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100,000 single-cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
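For illustration, one of the simplest attribution methods, vanilla gradient saliency, takes only a few lines of PyTorch; the paper compares four such methods, and this sketch stands in for the general recipe.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Pixel-level attribution via input gradients (one simple method)."""
    model.eval()
    image = image.clone().requires_grad_(True)    # (1, C, H, W)
    score = model(image)[0, target_class]
    score.backward()
    # Max over channels gives one attribution value per pixel; the map can
    # then be thresholded and compared with expert annotations.
    return image.grad.abs().max(dim=1).values     # (1, H, W)
```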
- Development of an algorithm for medical image segmentation of bone tissue in interaction with metallic implants [58.720142291102135]
This study develops an algorithm for calculating bone growth in contact with metallic implants.
Bone and implant tissue were manually segmented in the training data set.
The model reached an accuracy of around 98%.
arXiv Detail & Related papers (2022-04-22T08:17:20Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
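A drastically simplified sketch of the idea, with the deep Q-learning machinery reduced to a one-step greedy agent rewarded when its occlusion makes reconstruction hard; the 4x4 patch grid and all names are assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

PATCH = 16  # images assumed to be (B, 1, 64, 64), i.e. a 4x4 grid of patches

def occlude(images, patch_idx):
    """Zero out one PATCH x PATCH region per image."""
    out = images.clone()
    for b, idx in enumerate(patch_idx.tolist()):
        r, c = (idx // 4) * PATCH, (idx % 4) * PATCH
        out[b, :, r:r + PATCH, c:c + PATCH] = 0.0
    return out

def pretrain_step(agent, reconstructor, images, opt_agent, opt_rec):
    # Agent picks the patch to hide (greedy; epsilon-exploration omitted).
    with torch.no_grad():
        patch_idx = agent(images).argmax(dim=1)      # (B,)
    masked = occlude(images, patch_idx)

    # Reconstructor learns to fill the occluded region back in.
    recon = reconstructor(masked)
    rec_loss = F.mse_loss(recon, images, reduction="none").mean(dim=(1, 2, 3))
    opt_rec.zero_grad()
    rec_loss.mean().backward()
    opt_rec.step()

    # Agent is trained against the reconstructor: its Q-value for the chosen
    # patch regresses toward the reconstruction error it caused (the reward).
    reward = rec_loss.detach()
    chosen_q = agent(images).gather(1, patch_idx.unsqueeze(1)).squeeze(1)
    agent_loss = F.mse_loss(chosen_q, reward)
    opt_agent.zero_grad()
    agent_loss.backward()
    opt_agent.step()
```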
- Explainable Ensemble Machine Learning for Breast Cancer Diagnosis based on Ultrasound Image Texture Features [4.511923587827301]
We propose an explainable machine learning pipeline for breast cancer diagnosis based on ultrasound images.
Our results show that our proposed framework achieves high predictive performance while being explainable.
arXiv Detail & Related papers (2022-01-17T22:13:03Z)
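As an illustration of such a pipeline, the sketch below trains a gradient-boosted ensemble on placeholder texture features and explains it with permutation importance; the feature set and the choice of explanation method are assumptions, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# X: rows of texture features extracted from ultrasound ROIs (e.g. GLCM
# statistics); y: benign (0) / malignant (1). Both are placeholders here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 12)), rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic explanation: how much does shuffling each feature hurt accuracy?
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:5]:
    print(f"feature {i}: {imp.importances_mean[i]:.3f}")
```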
- IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography [20.665935997959025]
Interpretability in machine learning models is important in high-stakes decisions.
We present a framework for interpretable machine learning-based mammography.
arXiv Detail & Related papers (2021-03-23T05:00:21Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
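A minimal sketch of the shared/specific decomposition with a cross reconstruction loss; the adversarial discriminator and the label-guidance term from the paper are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoViewSubspace(nn.Module):
    """Sketch: shared + view-specific codes with a cross reconstruction loss."""
    def __init__(self, dim_in, dim_code):
        super().__init__()
        self.enc1 = nn.Linear(dim_in, 2 * dim_code)  # -> [shared | specific]
        self.enc2 = nn.Linear(dim_in, 2 * dim_code)
        self.dec1 = nn.Linear(2 * dim_code, dim_in)
        self.dec2 = nn.Linear(2 * dim_code, dim_in)

    def forward(self, v1, v2):
        s1, p1 = self.enc1(v1).chunk(2, dim=-1)      # shared, specific (view 1)
        s2, p2 = self.enc2(v2).chunk(2, dim=-1)
        # Cross reconstruction: rebuild each view from the OTHER view's shared
        # code, forcing s1 and s2 to carry the common information.
        r1 = self.dec1(torch.cat([s2, p1], dim=-1))
        r2 = self.dec2(torch.cat([s1, p2], dim=-1))
        loss = F.mse_loss(r1, v1) + F.mse_loss(r2, v2)
        return loss, (s1, p1, s2, p2)
```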
- Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images [0.7874708385247352]
We propose a new interpretability method that can be used to understand the predictions of any black-box model on images.
A StyleGAN is trained on medical images to provide a mapping between latent vectors and images, and a direction in the latent space that influences the classifier's prediction is identified.
By shifting the latent representation of an input image along this direction, we can produce a series of new synthetic images with changed predictions.
arXiv Detail & Related papers (2021-01-19T11:13:20Z)
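A hedged sketch of the latent traversal step, assuming a pretrained generator and classifier and a previously identified latent direction; none of these names come from the paper's code.

```python
import torch

@torch.no_grad()
def traverse_latent(generator, classifier, z, direction, alphas=(-3, -1, 0, 1, 3)):
    """Shift a latent code along a prediction-relevant direction and record
    how the classifier's output changes (single-logit classifier assumed)."""
    direction = direction / direction.norm()
    frames, scores = [], []
    for a in alphas:
        img = generator(z + a * direction)   # synthetic image at this shift
        frames.append(img)
        scores.append(classifier(img).sigmoid().item())
    return frames, scores
```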
- Improving Calibration and Out-of-Distribution Detection in Medical Image Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show not only that a single CNN learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often has more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z)
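The joint-training recipe reduces to a small loop over the concatenated datasets, sketched below under the assumption that all datasets yield compatible (image, mask) pairs.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def train_joint(model, datasets, loss_fn, epochs=10, lr=1e-4):
    """Train a single segmentation model on several datasets at once; the
    model must infer the imaging context from the input itself."""
    loader = DataLoader(ConcatDataset(datasets), batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
    return model
```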
- An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics [2.162419921663162]
This paper explores interpretability techniques for two of the most successful learning algorithms in the medical decision-making literature: deep neural networks and random forests.
We learn models that try to predict the type of cancer of the patient, given their set of medical activity records.
We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well.
arXiv Detail & Related papers (2020-02-21T09:14:34Z)
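For the random-forest side, a sketch of how feature importances might surface the informative activity records; the data here are placeholders, and impurity-based importance is only one of the techniques such a study might use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: per-patient counts of medical activity codes; y: cancer type. Placeholders.
rng = np.random.default_rng(1)
X, y = rng.poisson(2, size=(300, 40)), rng.integers(0, 3, size=300)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Impurity-based importances: which activity codes drive the prediction?
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("most informative activity features:", top)
```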
- An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
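A sketch of this three-part design, with patch selection approximated by a top-k over an upsampled saliency map; the saliency output, crop size, and fusion head are assumptions rather than the paper's exact mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalClassifier(nn.Module):
    """Sketch: a cheap global network proposes informative regions, a
    higher-capacity local network reads them, and a fusion head combines both."""
    def __init__(self, global_net, local_net, feat_dim, k=3, patch=224):
        super().__init__()
        self.global_net, self.local_net = global_net, local_net
        self.k, self.patch = k, patch
        self.fusion = nn.Linear(2 * feat_dim, 1)

    def forward(self, x):                        # x: (1, C, H, W), H, W >= patch
        g_feat, saliency = self.global_net(x)    # (1, D), (1, 1, h, w)
        # Upsample saliency and take the k highest-scoring pixel locations
        # as patch centers.
        sal = F.interpolate(saliency, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)[0, 0]
        crops = []
        for idx in sal.flatten().topk(self.k).indices.tolist():
            r, c = divmod(idx, sal.shape[1])
            r = min(max(r - self.patch // 2, 0), x.shape[-2] - self.patch)
            c = min(max(c - self.patch // 2, 0), x.shape[-1] - self.patch)
            crops.append(x[..., r:r + self.patch, c:c + self.patch])
        local_feats = self.local_net(torch.cat(crops, dim=0))  # (k, D)
        l_feat = local_feats.mean(0, keepdim=True)             # (1, D)
        return self.fusion(torch.cat([g_feat, l_feat], dim=-1))  # malignancy logit
```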
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.