Automatic Breast Lesion Classification by Joint Neural Analysis of
Mammography and Ultrasound
- URL: http://arxiv.org/abs/2009.11009v1
- Date: Wed, 23 Sep 2020 09:08:24 GMT
- Title: Automatic Breast Lesion Classification by Joint Neural Analysis of
Mammography and Ultrasound
- Authors: Gavriel Habib, Nahum Kiryati, Miri Sklair-Levy, Anat Shalmon, Osnat
Halshtok Neiman, Renata Faermann Weidenfeld, Yael Yagil, Eli Konen, Arnaldo
Mayer
- Abstract summary: We propose a deep-learning based method for classifying breast cancer lesions from their respective mammography and ultrasound images.
The proposed approach is based on a GoogLeNet architecture, fine-tuned for our data in two training steps.
It achieves an AUC of 0.94, outperforming state-of-the-art models trained over a single modality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mammography and ultrasound are extensively used by radiologists as
complementary modalities to achieve better performance in breast cancer
diagnosis. However, existing computer-aided diagnosis (CAD) systems for the
breast are generally based on a single modality. In this work, we propose a
deep-learning based method for classifying breast cancer lesions from their
respective mammography and ultrasound images. We present various approaches and
show a consistent improvement in performance when utilizing both modalities.
The proposed approach is based on a GoogLeNet architecture, fine-tuned for our
data in two training steps. First, a distinct neural network is trained
separately for each modality, generating high-level features. Then, the
aggregated features originating from each modality are used to train a
multimodal network to provide the final classification. In quantitative
experiments, the proposed approach achieves an AUC of 0.94, outperforming
state-of-the-art models trained over a single modality. Moreover, it performs
similarly to an average radiologist, surpassing two out of four radiologists
participating in a reader study. The promising results suggest that the
proposed method may become a valuable decision support tool for breast
radiologists.
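The two-step scheme described above (a separate network per modality producing high-level features, then a fusion network trained on the aggregated features) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stub feature extractors, the 1024-dimensional feature width (the size of GoogLeNet's final pooling output), and the logistic fusion head are all assumptions made for the sketch.

```python
import numpy as np

FEAT_DIM = 1024  # assumed per-modality feature width (GoogLeNet's last pooling layer is 1024-d)

def extract_features(image, seed):
    """Stand-in for a fine-tuned per-modality CNN (step 1): maps an image
    to a high-level feature vector. Deterministic random projection used
    purely for illustration."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((image.size, FEAT_DIM))
    return np.tanh(image.ravel() @ w / np.sqrt(image.size))

def fuse_and_classify(mammo_feat, us_feat, weights, bias=0.0):
    """Step 2: concatenate the per-modality features and apply a
    (hypothetical) logistic classification head trained on the fusion."""
    fused = np.concatenate([mammo_feat, us_feat])  # 2048-d joint feature
    return 1.0 / (1.0 + np.exp(-(fused @ weights + bias)))  # malignancy score in (0, 1)

# Toy usage: two small arrays standing in for the mammography and ultrasound crops.
mammo = np.ones((8, 8))
us = np.ones((8, 8)) * 0.5
f_m = extract_features(mammo, seed=0)
f_u = extract_features(us, seed=1)
rng = np.random.default_rng(2)
score = fuse_and_classify(f_m, f_u, rng.standard_normal(2 * FEAT_DIM) * 0.01)
print(float(score))
```

In the actual method, both the per-modality extractors and the fusion network are trained neural networks; the key design point retained here is late feature-level fusion, so each modality is first summarized independently before the joint decision is made.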
Related papers
- Federated Learning with Research Prototypes for Multi-Center MRI-based
Detection of Prostate Cancer with Diverse Histopathology [3.8613414331251423]
We introduce a flexible federated learning framework for cross-site training, validation, and evaluation of deep prostate cancer detection algorithms.
Our results show increases in prostate cancer detection and classification accuracy using a specialized neural network model and diverse prostate biopsy data.
We open-source our FLtools system, which can be easily adapted to other deep learning projects for medical imaging.
arXiv Detail & Related papers (2022-06-11T21:28:17Z)
- RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for
Adaptive Radiotherapy [1.8161758803237067]
We develop a multimodal late fusion approach to predict radiation therapy outcomes for non-small-cell lung cancer patients.
Experiments show that the proposed multimodal paradigm, with an AUC of 90.9%, outperforms each unimodal approach.
arXiv Detail & Related papers (2022-04-26T16:32:52Z)
- Multi-View Hypercomplex Learning for Breast Cancer Screening [7.147856898682969]
Traditionally, deep learning methods for breast cancer classification perform a single-view analysis.
In contrast, radiologists simultaneously analyze all four views that compose a mammography exam.
We propose a methodological approach for multi-view breast cancer classification based on parameterized hypercomplex neural networks.
arXiv Detail & Related papers (2022-04-12T13:32:31Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two, leveraging data-driven deep learning while exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Multiple Time Series Fusion Based on LSTM: An Application to CAP A Phase
Classification Using EEG [56.155331323304]
In this work, deep-learning-based feature-level fusion of electroencephalogram channels is carried out.
Channel selection, fusion, and classification procedures were optimized by two optimization algorithms.
arXiv Detail & Related papers (2021-12-18T14:17:49Z)
- RCA-IUnet: A residual cross-spatial attention guided inception U-Net
model for tumor segmentation in breast ultrasound imaging [0.6091702876917281]
The article introduces an efficient residual cross-spatial attention guided inception U-Net (RCA-IUnet) model with minimal training parameters for tumor segmentation.
The RCA-IUnet model follows U-Net topology with residual inception depth-wise separable convolution and hybrid pooling layers.
Cross-spatial attention filters are added to suppress irrelevant features and focus on the target structure.
arXiv Detail & Related papers (2021-08-05T10:35:06Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on cycle-consistent activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Auto-weighting for Breast Cancer Classification in Multimodal Ultrasound [0.0]
We propose an automatic way to combine the four types of ultrasonography to discriminate between benign and malignant breast nodules.
A novel multimodal network is proposed, offering promising learnability and simplicity to improve classification accuracy.
Results showed that the model scored a high classification accuracy of 95.4%, which indicates the efficiency of the proposed method.
arXiv Detail & Related papers (2020-08-08T03:42:00Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all listed papers) and is not responsible for any consequences of its use.