Explainable Ensemble Machine Learning for Breast Cancer Diagnosis based
on Ultrasound Image Texture Features
- URL: http://arxiv.org/abs/2201.07227v1
- Date: Mon, 17 Jan 2022 22:13:03 GMT
- Authors: Alireza Rezazadeh, Yasamin Jafarian and Ali Kord
- Abstract summary: We propose an explainable machine learning pipeline for breast cancer diagnosis based on ultrasound images.
Our results show that our proposed framework achieves high predictive performance while being explainable.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image classification is widely used to build predictive models for breast
cancer diagnosis. Most existing approaches overwhelmingly rely on deep
convolutional networks to build such diagnosis pipelines. These model
architectures, although remarkable in performance, are black-box systems that
provide minimal insight into the inner logic behind their predictions. This is
a major drawback as the explainability of prediction is vital for applications
such as cancer diagnosis. In this paper, we address this issue by proposing an
explainable machine learning pipeline for breast cancer diagnosis based on
ultrasound images. We extract first- and second-order texture features of the
ultrasound images and use them to build a probabilistic ensemble of decision
tree classifiers. Each decision tree learns to classify the input ultrasound
image by learning a set of robust decision thresholds for texture features of
the image. The decision path of the model predictions can then be interpreted
by decomposing the learned decision trees. Our results show that our proposed
framework achieves high predictive performance while being explainable.
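The pipeline described in the abstract can be illustrated with a toy sketch. This is an assumption-laden demonstration, not the authors' implementation: the features (mean, variance, and a co-occurrence contrast), the thresholds, and the sample images below are all hypothetical, and real trees would be learned from data rather than hand-set.

```python
# Illustrative sketch (not the authors' exact pipeline): extract simple
# first- and second-order texture features from a grayscale image and
# classify it with a small ensemble of single-threshold decision stumps.
from statistics import mean, pvariance
from collections import Counter

def first_order_features(img):
    """Mean and variance of pixel intensities (first-order statistics)."""
    pixels = [p for row in img for p in row]
    return {"mean": mean(pixels), "variance": pvariance(pixels)}

def glcm_contrast(img, levels=4):
    """Second-order feature: contrast of the horizontal co-occurrence matrix."""
    glcm = Counter()
    for row in img:
        for a, b in zip(row, row[1:]):
            glcm[(a % levels, b % levels)] += 1
    total = sum(glcm.values())
    return sum(((i - j) ** 2) * n / total for (i, j), n in glcm.items())

def extract_features(img):
    feats = first_order_features(img)
    feats["contrast"] = glcm_contrast(img)
    return feats

# Each "tree" here is a single hypothetical threshold on one texture
# feature; the ensemble votes, so every prediction decomposes into rules.
STUMPS = [
    ("variance", 0.5),   # hypothetical: high intensity variance -> malignant
    ("contrast", 1.0),   # hypothetical: high co-occurrence contrast -> malignant
    ("mean", 2.0),       # hypothetical: bright texture -> malignant
]

def predict(img):
    feats = extract_features(img)
    votes = ["malignant" if feats[name] > thr else "benign"
             for name, thr in STUMPS]
    label = Counter(votes).most_common(1)[0][0]
    # The decision path is explainable: report which rules fired.
    explanation = [f"{name} = {feats[name]:.2f} (threshold {thr})"
                   for name, thr in STUMPS]
    return label, explanation

smooth = [[1, 1, 1, 1]] * 4                  # uniform texture
rough = [[0, 3, 0, 3], [3, 0, 3, 0]] * 2     # high-contrast texture
print(predict(smooth)[0])   # -> benign
print(predict(rough)[0])    # -> malignant
```

Unlike a convolutional network, every prediction here can be traced back to a short list of feature-threshold comparisons, which is the explainability property the paper argues for.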
Related papers
- Learning a Clinically-Relevant Concept Bottleneck for Lesion Detection in Breast Ultrasound
This work proposes an explainable AI model that provides interpretable predictions using a standard lexicon from the American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS).
The model is a deep neural network featuring a concept bottleneck layer in which known BI-RADS features are predicted before making a final cancer classification.
arXiv Detail & Related papers (2024-06-29T00:44:33Z)
- Post-Hoc Explainability of BI-RADS Descriptors in a Multi-task Framework for Breast Cancer Detection and Segmentation
MT-BI-RADS is a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images.
It offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy.
arXiv Detail & Related papers (2023-08-27T22:07:42Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning
We present a novel interpretable neural network algorithm that uses case-based reasoning for mammography.
Our network presents both a prediction of malignancy and an explanation of that prediction using known medical features.
arXiv Detail & Related papers (2021-07-12T17:42:09Z)
- Deep Co-Attention Network for Multi-View Subspace Learning
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Constructing and Evaluating an Explainable Model for COVID-19 Diagnosis from Chest X-rays
We focus on constructing models to assist a clinician in the diagnosis of COVID-19 patients in situations where it is easier and cheaper to obtain X-ray data than to obtain high-quality images like those from CT scans.
Deep neural networks have repeatedly been shown to be capable of constructing highly predictive models for disease detection directly from image data.
arXiv Detail & Related papers (2020-12-19T21:33:42Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered intransparent and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net
The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsy by pathologists.
Computer-aided diagnosis systems make it possible to delineate and classify the cancerous patterns in the tissue.
The methodological core of this work is a U-Net convolutional neural network for image segmentation modified with residual blocks able to segment cancerous tissue.
arXiv Detail & Related papers (2020-05-22T19:49:10Z)
- Understanding the robustness of deep neural network classifiers for breast cancer screening
Deep neural networks (DNNs) show promise in breast cancer screening, but their robustness to input perturbations must be better understood before they can be clinically implemented.
We measure the sensitivity of a radiologist-level screening mammogram image classifier to four commonly studied input perturbations.
We also perform a detailed analysis on the effects of low-pass filtering, and find that it degrades the visibility of clinically meaningful features.
arXiv Detail & Related papers (2020-03-23T01:26:36Z)
- Neural networks approach for mammography diagnosis using wavelets features
Diagnosis is performed by transforming the image data into a feature vector using multilevel wavelet decomposition.
The suggested model consists of artificial neural networks designed for classifying mammograms according to tumor type and risk level.
arXiv Detail & Related papers (2020-03-06T02:10:47Z)
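The wavelet-feature idea in the last entry can be sketched minimally. This sketch assumes a 1-D Haar transform for simplicity; the toy signal and the energy-based features are illustrative assumptions, and real mammography pipelines typically apply 2-D wavelets (e.g. Daubechies families via a library such as PyWavelets).

```python
# Minimal sketch of multilevel wavelet feature extraction (assumed 1-D Haar).
def haar_step(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_multilevel(signal, levels):
    """Multilevel decomposition: recurse on the approximation coefficients."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

def energy(coeffs):
    return sum(c * c for c in coeffs)

def wavelet_feature_vector(signal, levels=2):
    """Feature vector: energy of each detail band plus the final approximation."""
    approx, details = haar_multilevel(signal, levels)
    return [energy(d) for d in details] + [energy(approx)]

row = [2, 4, 6, 8, 10, 12, 14, 16]   # toy "image row" of pixel intensities
print(wavelet_feature_vector(row))   # -> [4.0, 8.0, 194.0]
```

A feature vector like this one would then be fed to the classifier (a neural network in the cited paper) in place of the raw pixels.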
This list is automatically generated from the titles and abstracts of the papers in this site.