Dual-Activated Lightweight Attention ResNet50 for Automatic Histopathology Breast Cancer Image Classification
- URL: http://arxiv.org/abs/2308.13150v9
- Date: Mon, 27 May 2024 21:14:43 GMT
- Title: Dual-Activated Lightweight Attention ResNet50 for Automatic Histopathology Breast Cancer Image Classification
- Authors: Suxing Liu,
- Abstract summary: This study introduces a novel method for breast cancer classification, the Dual-Activated Lightweight Attention ResNet50 model.
It integrates a pre-trained ResNet50 model with a lightweight attention mechanism, embedding an attention module in the fourth layer of ResNet50.
The DALAResNet50 method was tested on breast cancer histopathology images from the BreakHis Database across magnification factors of 40X, 100X, 200X, and 400X, achieving accuracies of 98.5%, 98.7%, 97.9%, and 94.3%, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic breast cancer classification in histopathology images is crucial for precise diagnosis and treatment planning. Recently, classification approaches based on the ResNet architecture have gained popularity for significantly improving accuracy by using skip connections to mitigate vanishing gradient problems, thereby integrating low-level and high-level feature information. Nevertheless, the conventional ResNet architecture faces challenges such as data imbalance and limited interpretability, necessitating cross-domain knowledge and collaboration among medical experts. This study addresses these challenges by introducing a novel method for breast cancer classification, the Dual-Activated Lightweight Attention ResNet50 (DALAResNet50) model. It integrates a pre-trained ResNet50 model with a lightweight attention mechanism, embedding an attention module in the fourth layer of ResNet50 and incorporating two fully connected layers with LeakyReLU and ReLU activation functions to enhance feature learning. DALAResNet50 was tested on breast cancer histopathology images from the BreakHis database across magnification factors of 40X, 100X, 200X, and 400X, achieving accuracies of 98.5%, 98.7%, 97.9%, and 94.3%, respectively. It was also compared with established deep learning models such as SEResNet50, DenseNet121, VGG16, VGG16Inception, ViT, Swin-Transformer, Dinov2_Vitb14, and ResNet50. DALAResNet50 outperformed the compared approaches in accuracy, F1 score, IBA, and GMean, demonstrating robustness and broad applicability across different magnifications and on imbalanced breast cancer datasets.
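The abstract does not spell out the attention module's internals. A minimal NumPy sketch, assuming an SE-style channel-attention block with the LeakyReLU and ReLU activations on the two fully connected layers; the sigmoid gating and reduction ratio are standard assumptions not stated in the abstract:

```python
import numpy as np

def dual_activated_attention(x, w1, w2, alpha=0.01):
    """Channel-attention sketch: global pooling, two FC layers with
    LeakyReLU then ReLU (per the abstract), and an assumed sigmoid gate.
    x: feature map (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    s = x.mean(axis=(1, 2))               # global average pooling -> (C,)
    h = w1 @ s
    h = np.where(h > 0, h, alpha * h)     # LeakyReLU on first FC layer
    a = np.maximum(w2 @ h, 0.0)           # ReLU on second FC layer
    g = 1.0 / (1.0 + np.exp(-a))          # sigmoid gate (assumed, SE-style)
    return x * g[:, None, None]           # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 7, 7))       # toy feature map
r = 16                                    # assumed reduction ratio
w1 = rng.standard_normal((64 // r, 64)) * 0.1
w2 = rng.standard_normal((64, 64 // r)) * 0.1
y = dual_activated_attention(x, w1, w2)
```

Because the gate lies in (0, 1], the module can only attenuate channels, which is the usual behavior of squeeze-and-excitation-style attention.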
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Comparative Analysis of Transfer Learning Models for Breast Cancer Classification [10.677937909900486]
This study investigates the efficiency of deep learning models in distinguishing between Invasive Ductal Carcinoma (IDC) and non-IDC in histopathology slides.
We conducted a comparative evaluation of eight models: ResNet-50, DenseNet-121, ResNeXt-50, Vision Transformer (ViT), GoogLeNet (Inception v3), EfficientNet, MobileNet, and SqueezeNet.
arXiv Detail & Related papers (2024-08-29T18:49:32Z) - ResNet101 and DAE for Enhance Quality and Classification Accuracy in Skin Cancer Imaging [0.0]
We introduce an ensemble approach that combines a deep autoencoder (DAE) with ResNet101.
This method utilizes convolution-based deep neural networks for the detection of skin cancer.
The method achieves 96.03% accuracy, 95.40% precision, 96.05% recall, an F-measure of 0.9576, and an AUC of 0.98.
arXiv Detail & Related papers (2024-03-21T09:07:28Z) - OCT-SelfNet: A Self-Supervised Framework with Multi-Modal Datasets for Generalized and Robust Retinal Disease Detection [2.3349787245442966]
Our research contributes a self-supervised robust machine learning framework, OCT-SelfNet, for detecting eye diseases.
Our method addresses the issue using a two-phase training approach that combines self-supervised pretraining and supervised fine-tuning.
In terms of the AUC-PR metric, our proposed method exceeded 42%, showcasing a substantial increase of at least 10% in performance compared to the baseline.
arXiv Detail & Related papers (2024-01-22T20:17:14Z) - Histopathologic Cancer Detection [0.0]
This work trains a multi-layer perceptron and a convolutional model on the PatchCamelyon benchmark dataset and evaluates performance in terms of precision, recall, F1 score, accuracy, and AUC.
This paper also introduces ResNet50 and InceptionNet models with data augmentation, where ResNet50 is able to beat the state-of-the-art model.
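The metrics listed above all derive from confusion-matrix counts. A quick sketch with made-up counts (not the paper's results):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts.
    The counts here are illustrative, not results from any paper."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical balanced test set: 100 positives, 100 negatives
p, r, f1, acc = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```

AUC, by contrast, is computed from ranked prediction scores rather than a single thresholded confusion matrix, which is why it is reported separately.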
arXiv Detail & Related papers (2023-11-13T19:51:46Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
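The two ingredients named in the title, weight pruning and quantization, can be sketched as magnitude pruning followed by uniform quantization (an illustrative simplification; the paper's actual source-coding storage format is not reproduced here):

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.9, bits=4):
    """Zero out the smallest-magnitude weights, then snap the survivors
    to a uniform grid. Sparsity and bit-width values are hypothetical."""
    thresh = np.quantile(np.abs(w), sparsity)      # magnitude threshold
    pruned = np.where(np.abs(w) >= thresh, w, 0.0)
    levels = 2 ** bits - 1                          # quantization levels
    wmax = np.abs(pruned).max()
    if wmax == 0.0:
        return pruned
    return np.round(pruned / wmax * levels) / levels * wmax

rng = np.random.default_rng(1)
w = rng.standard_normal((256, 128))                 # toy FC weight matrix
wq = prune_and_quantize(w)
sparse_frac = float(np.mean(wq == 0.0))             # fraction of zeros
```

The resulting matrix is mostly zeros drawn from a small codebook of values, which is exactly the structure that source-coding-based storage formats exploit.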
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and classified individually, yielding a patch-level classification.
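The patch pipeline described above can be sketched in a few lines. The patch size and the stand-in patch classifier here are hypothetical; real systems operate on gigapixel slides with tissue masking, overlap, and a trained CNN per patch:

```python
import numpy as np

def extract_patches(img, patch=32, stride=32):
    """Tile an image into non-overlapping patches (toy sizes; whole-slide
    images are orders of magnitude larger)."""
    h, w = img.shape[:2]
    return [img[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]

rng = np.random.default_rng(2)
slide = rng.random((128, 96))                  # toy stand-in for a slide
patches = extract_patches(slide)
# Aggregate patch-level predictions into a slide-level score; the mean of
# the patch intensities stands in for a patch classifier's probabilities.
patch_probs = [p.mean() for p in patches]
slide_score = float(np.mean(patch_probs))
```

Mean aggregation is only the simplest option; the paper's contribution is precisely a learned (Wide & Deep) aggregator that replaces this naive averaging step.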
arXiv Detail & Related papers (2021-05-20T18:13:58Z) - Metastatic Cancer Image Classification Based On Deep Learning Method [7.832709940526033]
We propose a novel method for image classification that combines the DenseNet169 framework with the Rectified Adam optimization algorithm.
Our model achieves superior performance over classical convolutional neural network approaches such as VGG19, ResNet34, and ResNet50.
arXiv Detail & Related papers (2020-11-13T16:04:39Z) - RetiNerveNet: Using Recursive Deep Learning to Estimate Pointwise 24-2 Visual Field Data based on Retinal Structure [109.33721060718392]
Glaucoma is the leading cause of irreversible blindness in the world, affecting over 70 million people.
Due to the Standard Automated Perimetry (SAP) test's innate difficulty and its high test-retest variability, we propose the RetiNerveNet.
arXiv Detail & Related papers (2020-10-15T03:09:08Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best-performing model achieved an accuracy of 0.893 and a recall of 0.897, outperforming the baseline recall by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.