IAIA-BL: A Case-based Interpretable Deep Learning Model for
Classification of Mass Lesions in Digital Mammography
- URL: http://arxiv.org/abs/2103.12308v1
- Date: Tue, 23 Mar 2021 05:00:21 GMT
- Authors: Alina Jade Barnett, Fides Regina Schwartz, Chaofan Tao, Chaofan Chen,
Yinhao Ren, Joseph Y. Lo and Cynthia Rudin
- Abstract summary: Interpretability in machine learning models is important in high-stakes decisions.
We present a framework for interpretable machine learning-based mammography.
- Score: 20.665935997959025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretability in machine learning models is important in high-stakes
decisions, such as whether to order a biopsy based on a mammographic exam.
Mammography poses important challenges that are not present in other computer
vision tasks: datasets are small, confounding information is present, and it
can be difficult even for a radiologist to decide between watchful waiting and
biopsy based on a mammogram alone. In this work, we present a framework for
interpretable machine learning-based mammography. In addition to predicting
whether a lesion is malignant or benign, our work aims to follow the reasoning
processes of radiologists in detecting clinically relevant semantic features of
each image, such as the characteristics of the mass margins. The framework
includes a novel interpretable neural network algorithm that uses case-based
reasoning for mammography. Our algorithm can incorporate a combination of data
with whole image labelling and data with pixel-wise annotations, leading to
better accuracy and interpretability even with a small number of images. Our
interpretable models are able to highlight the classification-relevant parts of
the image, whereas other methods highlight healthy tissue and confounding
information. Our models are decision aids, rather than decision makers, aimed
at better overall human-machine collaboration. We do not observe a loss in mass
margin classification accuracy over a black box neural network trained on the
same data.
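The case-based reasoning described above can be pictured with a small sketch in the spirit of prototype-based networks (e.g. ProtoPNet, on which this line of work builds). This is an illustrative toy, not IAIA-BL's exact formulation: the function names, the similarity transform, and the toy data are assumptions for demonstration only.

```python
# Illustrative sketch of case-based reasoning with learned prototypes
# (in the spirit of ProtoPNet-style models; NOT IAIA-BL's exact formulation).
import numpy as np

def prototype_similarities(feature_patches, prototypes, eps=1e-4):
    """For each prototype, find its best-matching image patch and convert
    the squared L2 distance into a bounded similarity score."""
    scores = []
    for p in prototypes:
        # squared distance from this prototype to every spatial patch
        d2 = np.sum((feature_patches - p) ** 2, axis=-1)
        d_min = d2.min()  # closest patch = the "evidence" for this prototype
        # log((d+1)/(d+eps)): large when d_min is small, near 0 when far
        scores.append(np.log((d_min + 1.0) / (d_min + eps)))
    return np.array(scores)

def classify(feature_patches, prototypes, class_weights):
    """Final logits are a weighted sum of prototype similarities, so each
    prediction can be explained by pointing at the matching patches."""
    sims = prototype_similarities(feature_patches, prototypes)
    return class_weights @ sims  # one logit per class

# toy example: 9 patches of 4-dim features, 2 prototypes, 2 classes
rng = np.random.default_rng(0)
patches = rng.normal(size=(9, 4))
protos = patches[[1, 5]] + 0.01  # prototypes near two real patches
logits = classify(patches, protos, class_weights=rng.normal(size=(2, 2)))
print(logits.shape)  # (2,)
```

Because the logits depend only on distances to prototypes, the model can justify a prediction by showing which training patch each prototype resembles; this is what allows the approach to highlight classification-relevant regions rather than healthy tissue.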
Related papers
- FPN-IAIA-BL: A Multi-Scale Interpretable Deep Learning Model for Classification of Mass Margins in Digital Mammography [17.788748860485438]
Uninterpretable deep learning models are unsuitable in high-stakes environments.
Recent work in interpretable computer vision provides transparency to these formerly black boxes.
This paper proposes a novel multi-scale interpretable deep learning model for mammographic mass margin classification.
arXiv Detail & Related papers (2024-06-10T15:44:41Z)
- Multi-task Explainable Skin Lesion Classification [54.76511683427566]
We propose a few-shot approach for skin lesion classification that generalizes well with little labelled data.
The proposed approach fuses a segmentation network, which acts as an attention module, with a classification network.
arXiv Detail & Related papers (2023-10-11T05:49:47Z)
- Exploiting Causality Signals in Medical Images: A Pilot Study with Empirical Results [1.2400966570867322]
We present a novel technique to discover and exploit weak causal signals directly from images via neural networks for classification purposes.
This way, we model how the presence of a feature in one part of the image affects the appearance of another feature in a different part of the image.
Our method consists of a convolutional neural network backbone and a causality-factors extractor module, which computes weights to enhance each feature map according to its causal influence in the scene.
arXiv Detail & Related papers (2023-09-19T08:00:26Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100,000 single-cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- Application of Transfer Learning and Ensemble Learning in Image-level Classification for Breast Histopathology [9.037868656840736]
In Computer-Aided Diagnosis (CAD), traditional classification models mostly use a single network to extract features.
This paper proposes a deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions.
Result: In the ensemble network model with accuracy as the weight, the image-level binary classification achieves an accuracy of 98.90%.
arXiv Detail & Related papers (2022-04-18T13:31:53Z)
- Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning [20.665935997959025]
We present a novel interpretable neural network algorithm that uses case-based reasoning for mammography.
Our network presents both a prediction of malignancy and an explanation of that prediction using known medical features.
arXiv Detail & Related papers (2021-07-12T17:42:09Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to support the cross-attention process and to overcome both the imbalance between classes and the easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
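The "beyond cross-entropy" loss in the last entry is not specified in this summary; a standard remedy for class imbalance and easy-dominated samples is focal loss, sketched below as an assumed illustration (not necessarily the loss used in the Cross-Attention Networks paper).

```python
# Focal loss sketch for binary/multi-label classification -- an ASSUMED
# illustration of a "beyond cross-entropy" loss for class imbalance;
# not necessarily the Cross-Attention Networks paper's actual loss.
import math

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Down-weights easy examples: the (1 - p_t)^gamma factor shrinks the
    loss for well-classified samples so hard ones dominate training."""
    total = 0.0
    for p, y in zip(probs, targets):
        p_t = p if y == 1 else 1.0 - p           # prob of the true label
        weight = alpha if y == 1 else 1.0 - alpha
        total += -weight * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
    return total / len(probs)

# an easy, correct prediction contributes far less than a hard error
easy = focal_loss([0.95], [1])
hard = focal_loss([0.10], [1])
print(easy < hard)  # True
```

With gamma = 0 and alpha = 0.5 this reduces (up to scale) to ordinary cross-entropy; increasing gamma progressively suppresses the easy, already-correct samples.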
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.