Auto-weighting for Breast Cancer Classification in Multimodal Ultrasound
- URL: http://arxiv.org/abs/2008.03435v1
- Date: Sat, 8 Aug 2020 03:42:00 GMT
- Title: Auto-weighting for Breast Cancer Classification in Multimodal Ultrasound
- Authors: Wang Jian, Miao Juzheng, Yang Xin, Li Rui, Zhou Guangquan, Huang
Yuhao, Lin Zehui, Xue Wufeng, Jia Xiaohong, Zhou Jianqiao, Huang Ruobing, Ni
Dong
- Abstract summary: We propose an automatic way to combine the four types of ultrasonography to discriminate between benign and malignant breast nodules.
A novel multimodal network is proposed that combines learnability with simplicity to improve classification accuracy.
Results showed that the model achieved a classification accuracy of 95.4%, indicating the effectiveness of the proposed method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Breast cancer is the most common invasive cancer in women. Besides the
primary B-mode ultrasound screening, sonographers have explored the inclusion
of Doppler, strain and shear-wave elasticity imaging to advance the diagnosis.
However, recognizing useful patterns in all types of images and weighing up the
significance of each modality can elude less-experienced clinicians. In this
paper, we explore, for the first time, an automatic way to combine the four
types of ultrasonography to discriminate between benign and malignant breast
nodules. A novel multimodal network is proposed that combines learnability
with simplicity to improve classification accuracy. The key is
using a weight-sharing strategy to encourage interactions between modalities
and adopting an additional cross-modalities objective to integrate global
information. In contrast to hardcoding the weights of each modality in the
model, we embed the weighting in a Reinforcement Learning framework and learn
it in an end-to-end manner. Thus the model is trained to seek the
optimal multimodal combination without handcrafted heuristics. The proposed
framework is evaluated on a dataset containing 1616 sets of multimodal images.
Results showed that the model achieved a classification accuracy of 95.4%,
indicating the effectiveness of the proposed method.
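The paper's exact network and reward design are not reproduced here, but the core idea of learning modality weights end-to-end with policy-gradient reinforcement learning can be shown in a toy form. This is a minimal sketch: the four per-modality "reward levels", the learning rate, and the gradient-bandit update are invented placeholders standing in for validation performance of the fused classifier, not results or details from the paper.

```python
import numpy as np

# Toy sketch: learn a softmax weighting over four ultrasound modalities with a
# REINFORCE-style (gradient-bandit) update. The reward stands in for validation
# performance of the fused classifier. All numbers below are hypothetical.
rng = np.random.default_rng(0)
MODALITIES = ["B-mode", "Doppler", "strain", "shear-wave"]
reward_level = np.array([0.90, 0.80, 0.75, 0.85])  # invented, NOT from the paper

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

theta = np.zeros(4)   # policy logits, one per modality
baseline = 0.8        # running reward baseline for variance reduction
lr = 0.1

for _ in range(5000):
    w = softmax(theta)
    k = rng.choice(4, p=w)                              # modality to emphasize
    r = reward_level[k] + 0.02 * rng.standard_normal()  # noisy feedback
    baseline = 0.99 * baseline + 0.01 * r
    grad = -w
    grad[k] += 1.0                                      # d log pi(k) / d theta
    theta += lr * (r - baseline) * grad

weights = softmax(theta)
print(MODALITIES[int(weights.argmax())])  # typically the highest-reward modality
```

The baseline subtraction keeps the policy-gradient updates low-variance, so the logits drift toward the modality that yields the highest (noisy) reward rather than locking onto whichever arm was sampled first.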
Related papers
- Multi-modality transrectal ultrasound video classification for
identification of clinically significant prostate cancer [4.896561300855359]
We propose a framework for the classification of clinically significant prostate cancer (csPCa) from multi-modality TRUS videos.
The proposed framework is evaluated on an in-house dataset containing 512 TRUS videos.
arXiv Detail & Related papers (2024-02-14T07:06:30Z)
- MUVF-YOLOX: A Multi-modal Ultrasound Video Fusion Network for Renal Tumor Diagnosis [10.452919030855796]
We propose a novel multi-modal ultrasound video fusion network that can effectively perform multi-modal feature fusion and video classification for renal tumor diagnosis.
Experimental results on a multicenter dataset show that the proposed framework outperforms the single-modal models and the competing methods.
arXiv Detail & Related papers (2023-07-15T14:15:42Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Application of Transfer Learning and Ensemble Learning in Image-level Classification for Breast Histopathology [9.037868656840736]
In Computer-Aided Diagnosis (CAD), traditional classification models mostly use a single network to extract features.
This paper proposes a deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions.
Result: the ensemble model weighted by accuracy achieves an image-level binary classification accuracy of 98.90%.
arXiv Detail & Related papers (2022-04-18T13:31:53Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates the likelihood of malignancy of each, and through aggregation also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Classification of Breast Cancer Lesions in Ultrasound Images by using Attention Layer and loss Ensembles in Deep Convolutional Neural Networks [0.0]
We propose a new framework for classification of breast cancer lesions by using an attention module in a modified VGG16 architecture.
We also propose a new ensembled loss function, a combination of binary cross-entropy and the logarithm of the hyperbolic cosine loss, to reduce the discrepancy between classified lesions and their labels.
The proposed model outperformed other modified VGG16 architectures with an accuracy of 93%, and the results are competitive with other state-of-the-art frameworks for classification of breast cancer lesions.
arXiv Detail & Related papers (2021-02-23T06:49:12Z)
- Automatic Breast Lesion Classification by Joint Neural Analysis of Mammography and Ultrasound [1.9814912982226993]
We propose a deep-learning based method for classifying breast cancer lesions from their respective mammography and ultrasound images.
The proposed approach is based on a GoogleNet architecture, fine-tuned for our data in two training steps.
It achieves an AUC of 0.94, outperforming state-of-the-art models trained over a single modality.
arXiv Detail & Related papers (2020-09-23T09:08:24Z)
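One of the related papers above combines binary cross-entropy with the logarithm of the hyperbolic cosine loss. As a minimal sketch of such a combined loss (the equal 0.5/0.5 mixing coefficient, the clipping constant, and the batch-mean reduction are assumptions for illustration, not that paper's actual formulation):

```python
import numpy as np

# Sketch of an ensembled loss: a weighted sum of binary cross-entropy and
# log-cosh between predicted probabilities p and binary labels y.
def ensemble_loss(p, y, alpha=0.5, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)            # avoid log(0)
    bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    log_cosh = np.log(np.cosh(p - y))       # smooth, outlier-robust term
    return np.mean(alpha * bce + (1 - alpha) * log_cosh)

y = np.array([1.0, 0.0, 1.0])
good = np.array([0.9, 0.1, 0.8])            # confident, correct predictions
bad = np.array([0.6, 0.4, 0.5])             # uncertain predictions
print(ensemble_loss(good, y) < ensemble_loss(bad, y))  # prints True
```

The BCE term drives calibrated probability estimates, while the log-cosh term behaves like a smoothed absolute error that penalizes large residuals less aggressively than squared error.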
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.