Advanced Hybrid Deep Learning Model for Enhanced Classification of Osteosarcoma Histopathology Images
- URL: http://arxiv.org/abs/2411.00832v1
- Date: Tue, 29 Oct 2024 13:54:08 GMT
- Title: Advanced Hybrid Deep Learning Model for Enhanced Classification of Osteosarcoma Histopathology Images
- Authors: Arezoo Borji, Gernot Kronreif, Bernhard Angermayr, Sepideh Hatamikia
- Abstract summary: This study focuses on osteosarcoma (OS), the most common bone cancer in children and adolescents, which affects the long bones of the arms and legs.
We propose a novel hybrid model that combines convolutional neural networks (CNN) and vision transformers (ViT) to improve diagnostic accuracy for OS.
The model achieved an accuracy of 99.08%, precision of 99.10%, recall of 99.28%, and an F1-score of 99.23%.
- Abstract: Recent advances in machine learning are transforming medical image analysis, particularly in cancer detection and classification. Techniques such as deep learning, especially convolutional neural networks (CNNs) and vision transformers (ViTs), now enable precise analysis of complex histopathological images, automating detection and enhancing classification accuracy across various cancer types. This study focuses on osteosarcoma (OS), the most common bone cancer in children and adolescents, which affects the long bones of the arms and legs. Early and accurate detection of OS is essential for improving patient outcomes and reducing mortality. However, the increasing prevalence of cancer and the demand for personalized treatments create challenges in achieving precise diagnoses and customized therapies. We propose a novel hybrid model that combines convolutional neural networks (CNNs) and vision transformers (ViTs) to improve diagnostic accuracy for OS using hematoxylin and eosin (H&E) stained histopathological images. The CNN extracts local features, while the ViT captures global patterns from histopathological images. These features are combined and classified using a multi-layer perceptron (MLP) into four categories: non-tumor (NT), non-viable tumor (NVT), viable tumor (VT), and non-viable ratio (NVR). Using The Cancer Imaging Archive (TCIA) dataset, the model achieved an accuracy of 99.08%, precision of 99.10%, recall of 99.28%, and an F1-score of 99.23%. This is the first successful four-class classification on this dataset, setting a new benchmark in OS research and offering promising potential for future diagnostic advancements.
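The fusion scheme the abstract describes (a CNN branch for local features, a ViT branch for global context, concatenated and fed to an MLP head) can be sketched as below. This is a minimal illustrative PyTorch sketch, not the authors' architecture: the layer sizes, patch size, embedding dimension, and depth are all assumptions chosen to keep the example small.

```python
# Hypothetical sketch of CNN + ViT feature fusion for 4-class OS classification.
# All hyperparameters here are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    def __init__(self, num_classes=4, img_size=64, patch=8, dim=64):
        super().__init__()
        # CNN branch: captures local texture features
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        # ViT-style branch: self-attention over patch embeddings for global context
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # MLP head over the concatenated (fused) local + global features
        self.head = nn.Sequential(
            nn.Linear(32 + dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        local_feat = self.cnn(x)                                     # (B, 32)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)      # (B, N, dim)
        global_feat = self.encoder(tokens + self.pos).mean(dim=1)    # (B, dim)
        return self.head(torch.cat([local_feat, global_feat], dim=1))

model = HybridCNNViT()
logits = model(torch.randn(2, 3, 64, 64))  # two fake H&E tiles
print(logits.shape)  # torch.Size([2, 4]) -- one logit per class: NT, NVT, VT, NVR
```

Mean-pooling the transformer tokens stands in for a class token; either choice yields a single global feature vector to concatenate with the CNN output before the MLP.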
Related papers
- Enhancing Brain Tumor Classification Using TrAdaBoost and Multi-Classifier Deep Learning Approaches
Brain tumors pose a serious health threat due to their rapid growth and potential for metastasis.
This study aims to improve the efficiency and accuracy of brain tumor classification.
Our approach combines state-of-the-art deep learning algorithms, including the Vision Transformer (ViT), Capsule Neural Network (CapsNet), and convolutional neural networks (CNNs) such as ResNet-152 and VGG16.
arXiv Detail & Related papers (2024-10-31T07:28:06Z)
- A study on deep feature extraction to detect and classify Acute Lymphoblastic Leukemia (ALL)
Acute lymphoblastic leukaemia (ALL) is a blood malignancy that mainly affects adults and children.
This study looks into the use of deep learning, specifically Convolutional Neural Networks (CNNs) for the detection and classification of ALL.
With an 87% accuracy rate, the ResNet101 model produced the best results, closely followed by DenseNet121 and VGG19.
arXiv Detail & Related papers (2024-09-10T17:53:29Z)
- Improving Performance in Colorectal Cancer Histology Decomposition using Deep and Ensemble Machine Learning
Histologic samples stained with hematoxylin and eosin are commonly used in colorectal cancer management.
Recent research highlights the potential of convolutional neural networks (CNNs) in facilitating the extraction of clinically relevant biomarkers from readily available images.
CNN-based biomarkers can predict patient outcomes comparably to golden standards, with the added advantages of speed, automation, and minimal cost.
arXiv Detail & Related papers (2023-10-25T19:46:27Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Brain Tumor Detection and Classification Using a New Evolutionary Convolutional Neural Network
The goal of this study is to employ brain MRI images to distinguish between healthy and unhealthy patients.
Deep learning techniques have recently sparked interest as a means of diagnosing brain tumours more accurately and robustly.
arXiv Detail & Related papers (2022-04-26T13:20:42Z)
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets
We propose an end-to-end MB tumor classification and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages, called patches, are extracted and classified individually, yielding a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- A Deep Learning Study on Osteosarcoma Detection from Histological Images
The most common type of primary malignant bone tumor is osteosarcoma.
CNNs can significantly decrease surgeons' workload and support a better prognosis of patient conditions.
However, CNNs must be trained on large amounts of data to achieve trustworthy performance.
arXiv Detail & Related papers (2020-11-02T18:16:17Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
- Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection
Early detection of head and neck tumors is crucial for patient survival.
Hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors.
We present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI.
arXiv Detail & Related papers (2020-04-21T17:07:18Z)