BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images
- URL: http://arxiv.org/abs/2110.04069v1
- Date: Tue, 5 Oct 2021 19:14:46 GMT
- Title: BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images
- Authors: Boyu Zhang, Aleksandar Vakanski, Min Xian
- Abstract summary: This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
- Score: 69.41441138140895
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In healthcare, it is essential to explain the decision-making process of
machine learning models to establish the trust of clinicians. This
paper introduces BI-RADS-Net, a novel explainable deep learning approach for
cancer detection in breast ultrasound images. The proposed approach
incorporates tasks for explaining and classifying breast tumors, by learning
feature representations relevant to clinical diagnosis. Explanations of the
predictions (benign or malignant) are provided in terms of morphological
features that are used by clinicians for diagnosis and reporting in medical
practice. The employed features include the BI-RADS descriptors of shape,
orientation, margin, echo pattern, and posterior features. Additionally, our
approach predicts the likelihood of malignancy of the findings, which relates
to the BI-RADS assessment category reported by clinicians. Experimental
validation on a dataset consisting of 1,192 images indicates improved model
accuracy, supported by explanations in clinical terms using the BI-RADS
lexicon.
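To make the described architecture concrete, below is a minimal sketch of a multitask network in the spirit of BI-RADS-Net: a shared CNN backbone feeding separate heads for the BI-RADS descriptors (shape, orientation, margin, echo pattern, posterior features), the benign/malignant diagnosis, and a malignancy-likelihood output. This is an illustrative assumption, not the authors' implementation: the ResNet-18 backbone, class counts, layer sizes, and loss weights are all placeholders, and it assumes PyTorch with a recent torchvision.

```python
# Hypothetical multitask sketch (NOT the authors' code): shared features,
# one head per BI-RADS descriptor plus diagnosis and likelihood outputs.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class MultitaskBIRADSNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)   # assumes torchvision >= 0.13
        backbone.fc = nn.Identity()         # reuse ResNet-18 as a 512-d feature extractor
        self.backbone = backbone
        feat_dim = 512
        # One classification head per BI-RADS descriptor (class counts are assumptions).
        self.descriptor_heads = nn.ModuleDict({
            "shape": nn.Linear(feat_dim, 3),        # e.g., oval / round / irregular
            "orientation": nn.Linear(feat_dim, 2),  # e.g., parallel / not parallel
            "margin": nn.Linear(feat_dim, 5),
            "echo_pattern": nn.Linear(feat_dim, 6),
            "posterior": nn.Linear(feat_dim, 4),
        })
        self.diagnosis_head = nn.Linear(feat_dim, 2)   # benign vs. malignant logits
        self.likelihood_head = nn.Linear(feat_dim, 1)  # malignancy likelihood in [0, 1]

    def forward(self, x):
        feats = self.backbone(x)
        out = {name: head(feats) for name, head in self.descriptor_heads.items()}
        out["diagnosis"] = self.diagnosis_head(feats)
        out["likelihood"] = torch.sigmoid(self.likelihood_head(feats))
        return out


def multitask_loss(outputs, targets, desc_weight=0.5, lik_weight=0.5):
    """Weighted sum of per-task losses; the weights are illustrative assumptions."""
    ce, bce = nn.CrossEntropyLoss(), nn.BCELoss()
    loss = ce(outputs["diagnosis"], targets["diagnosis"])
    for name in ["shape", "orientation", "margin", "echo_pattern", "posterior"]:
        loss = loss + desc_weight * ce(outputs[name], targets[name])
    loss = loss + lik_weight * bce(outputs["likelihood"].squeeze(1), targets["likelihood"])
    return loss
```

For example, `MultitaskBIRADSNet()(torch.randn(2, 3, 224, 224))` returns a dictionary with per-descriptor logits alongside the diagnosis logits and malignancy likelihood. The design point this sketch illustrates is that the diagnosis and the descriptor predictions share one feature representation, so the benign/malignant decision can be reported together with BI-RADS lexicon terms, which is the form of explanation the abstract describes.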
Related papers
- Deep BI-RADS Network for Improved Cancer Detection from Mammograms [3.686808512438363]
We introduce a novel multi-modal approach that combines textual BI-RADS lesion descriptors with visual mammogram content.
Our method employs iterative attention layers to effectively fuse these different modalities.
Experiments on the CBIS-DDSM dataset demonstrate substantial improvements across all metrics.
arXiv Detail & Related papers (2024-11-16T21:32:51Z)
- Post-Hoc Explainability of BI-RADS Descriptors in a Multi-task Framework for Breast Cancer Detection and Segmentation [48.08423125835335]
MT-BI-RADS is a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images.
It offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy.
arXiv Detail & Related papers (2023-08-27T22:07:42Z)
- Weakly Supervised Lesion Detection and Diagnosis for Breast Cancers with Partially Annotated Ultrasound Images [19.374895481597466]
A Two-Stage Detection and Diagnosis Network (TSDDNet) based on weakly supervised learning is proposed to enhance diagnostic accuracy.
The proposed TSDDNet is evaluated on a B-mode ultrasound dataset, and the experimental results show that it achieves the best performance on both lesion detection and diagnosis tasks.
arXiv Detail & Related papers (2023-06-12T09:26:54Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice, such models must not only be accurate but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
- Act Like a Radiologist: Towards Reliable Multi-view Correspondence Reasoning for Mammogram Mass Detection [49.14070210387509]
We propose an Anatomy-aware Graph convolutional Network (AGN) for mammogram mass detection.
AGN is tailored for mammogram mass detection and endows existing detection methods with multi-view reasoning ability.
Experiments on two standard benchmarks reveal that AGN significantly exceeds state-of-the-art performance.
arXiv Detail & Related papers (2021-05-21T06:48:34Z)
- Fusing Medical Image Features and Clinical Features with Deep Learning for Computer-Aided Diagnosis [7.99493100852929]
We propose a novel deep learning-based method for fusing MRI/CT images and clinical information for diagnostic tasks.
We evaluate the proposed method on its applications to Alzheimer's disease diagnosis, mild cognitive impairment converter prediction, and hepatic microvascular invasion diagnosis.
arXiv Detail & Related papers (2021-03-10T03:37:21Z)
- Deep Learning Based Decision Support for Medicine -- A Case Study on Skin Cancer Diagnosis [6.820831423843006]
Clinical application of Deep Learning-based Decision Support Systems for skin cancer screening has the potential to improve the quality of patient care.
This paper provides an overview of works towards explainable, DL-based decision support in medical applications, using the example of skin cancer diagnosis from clinical, dermoscopic, and histopathologic images.
arXiv Detail & Related papers (2021-03-02T11:07:49Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered intransparent and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even for smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation, exploiting a multiple instance learning scheme for training.
The proposed framework has been evaluated on multi-location and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.