An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization
- URL: http://arxiv.org/abs/2002.07613v1
- Date: Thu, 13 Feb 2020 15:28:42 GMT
- Title: An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization
- Authors: Yiqiu Shen, Nan Wu, Jason Phang, Jungkyu Park, Kangning Liu,
Sudarshini Tyagi, Laura Heacock, S. Gene Kim, Linda Moy, Kyunghyun Cho,
Krzysztof J. Geras
- Abstract summary: We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
- Score: 45.00998416720726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical images differ from natural images in significantly higher resolutions
and smaller regions of interest. Because of these differences, neural network
architectures that work well for natural images might not be applicable to
medical image analysis. In this work, we extend the globally-aware multiple
instance classifier, a framework we proposed to address these unique properties
of medical images. This model first uses a low-capacity, yet memory-efficient,
network on the whole image to identify the most informative regions. It then
applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local
information to make a final prediction. While existing methods often require
lesion segmentation during training, our model is trained with only image-level
labels and can generate pixel-level saliency maps indicating possible malignant
findings. We apply the model to screening mammography interpretation:
predicting the presence or absence of benign and malignant lesions. On the NYU
Breast Cancer Screening Dataset, consisting of more than one million images,
our model achieves an AUC of 0.93 in classifying breasts with malignant
findings, outperforming ResNet-34 and Faster R-CNN. Compared to ResNet-34, our
model is 4.1x faster for inference while using 78.4% less GPU memory.
Furthermore, we demonstrate, in a reader study, that our model surpasses
radiologist-level AUC by a margin of 0.11. The proposed model is available
online: https://github.com/nyukat/GMIC.
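The three-stage pipeline the abstract describes (a cheap global pass producing a saliency map, selection of the most informative regions, a higher-capacity local pass on those regions, and a fusion of both) can be sketched in plain Python. This is a minimal illustration only, not the authors' implementation: every function name here is hypothetical, and trivial arithmetic stands in for the actual low- and high-capacity networks (the real code is in the GMIC repository linked above).

```python
def global_saliency(image):
    """Low-capacity global pass: stand-in for the memory-efficient
    network's pixel-level saliency map (here, the raw values)."""
    return [[float(v) for v in row] for row in image]

def select_regions(saliency, k=2):
    """Pick the k highest-scoring positions as region-of-interest proposals."""
    flat = [(saliency[r][c], (r, c))
            for r in range(len(saliency))
            for c in range(len(saliency[0]))]
    flat.sort(reverse=True)
    return [pos for _, pos in flat[:k]]

def local_feature(image, pos):
    """High-capacity local pass on one chosen region (a 1x1 'patch' here);
    the doubling stands in for a richer feature extractor."""
    r, c = pos
    return image[r][c] * 2.0

def fuse(global_score, local_scores):
    """Fusion module: aggregate global and local information into one score."""
    return 0.5 * global_score + 0.5 * (sum(local_scores) / len(local_scores))

def predict(image, k=2):
    """End-to-end sketch: global pass -> region selection -> local pass -> fusion."""
    saliency = global_saliency(image)
    global_score = max(max(row) for row in saliency)
    rois = select_regions(saliency, k)
    local_scores = [local_feature(image, pos) for pos in rois]
    return fuse(global_score, local_scores)
```

Because region selection needs only image-level supervision to train in the real model, the saliency map doubles as the weakly supervised localization output.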
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- WATUNet: A Deep Neural Network for Segmentation of Volumetric Sweep Imaging Ultrasound [1.2903292694072621]
Volume sweep imaging (VSI) is an innovative approach that enables untrained operators to capture quality ultrasound images.
We present a novel segmentation model known as Wavelet_Attention_UNet (WATUNet).
In this model, we incorporate wavelet gates (WGs) and attention gates (AGs) between the encoder and decoder instead of a simple connection to overcome the limitations mentioned.
arXiv Detail & Related papers (2023-11-17T20:32:37Z)
- Application of Transfer Learning and Ensemble Learning in Image-level Classification for Breast Histopathology [9.037868656840736]
In Computer-Aided Diagnosis (CAD), traditional classification models mostly use a single network to extract features.
This paper proposes a deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions.
Result: In the ensemble model weighted by accuracy, image-level binary classification achieves an accuracy of 98.90%.
arXiv Detail & Related papers (2022-04-18T13:31:53Z)
- Feature-enhanced Adversarial Semi-supervised Semantic Segmentation Network for Pulmonary Embolism Annotation [6.142272540492936]
This study established a feature-enhanced adversarial semi-supervised semantic segmentation model to automatically annotate pulmonary embolism lesion areas.
In current studies, all PEA image segmentation methods are trained with supervised learning.
This study proposes a semi-supervised learning method that makes the model applicable to different datasets by adding a small number of unlabeled images.
arXiv Detail & Related papers (2022-04-08T04:21:02Z)
- Explainable multiple abnormality classification of chest CT volumes with AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z)
- Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis [17.936019428281586]
In cancer diagnosis, interpretability can be achieved by localizing the region of the input image responsible for the output.
We introduce a novel neural network architecture to perform weakly-supervised segmentation of high-resolution images.
We apply this model to breast cancer diagnosis with screening mammography, and validate it on a large clinically-realistic dataset.
arXiv Detail & Related papers (2021-06-13T17:25:21Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods, and another using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Network [6.444922476853511]
The Convolutional Neural Network (CNN) has been adapted to predict and classify lymph node metastasis in breast cancer.
We propose a novel data augmentation method named Random Center Cropping (RCC) to facilitate small-resolution images.
arXiv Detail & Related papers (2020-10-10T15:18:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.