Subclass Classification of Gliomas Using MRI Fusion Technique
- URL: http://arxiv.org/abs/2502.18775v1
- Date: Wed, 26 Feb 2025 03:10:33 GMT
- Title: Subclass Classification of Gliomas Using MRI Fusion Technique
- Authors: Kiranmayee Janardhan, Christy Bobby Thomas
- Abstract summary: Glioma, the prevalent primary brain tumor, exhibits diverse aggressiveness levels and prognoses. This study aims to develop an algorithm to fuse the MRI images from T1, T2, T1ce, and fluid-attenuated inversion recovery sequences. The proposed method achieved a classification accuracy of 99.25%, precision of 99.30%, recall of 99.10%, F1 score of 99.19%, Intersection over Union of 84.49%, and specificity of 99.76%.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Glioma, the prevalent primary brain tumor, exhibits diverse aggressiveness levels and prognoses. Precise classification of glioma is paramount for treatment planning and predicting prognosis. This study aims to develop an algorithm to fuse the MRI images from T1, T2, T1ce, and fluid-attenuated inversion recovery (FLAIR) sequences to enhance the efficacy of glioma subclass classification as no tumor, necrotic core, peritumoral edema, and enhancing tumor. The MRI images from BraTS datasets were used in this work. The images were pre-processed using max-min normalization to ensure consistency in pixel intensity values across different images. The segmentation of the necrotic core, peritumoral edema, and enhancing tumor was performed on 2D and 3D images separately using the UNET architecture. Further, the segmented regions from multimodal MRI images were fused using the weighted averaging technique. Integrating 2D and 3D segmented outputs enhances classification accuracy by capturing detailed features like tumor shape, boundaries, and intensity distribution in slices, while also providing a comprehensive view of spatial extent, shape, texture, and localization within the brain volume. The fused images were used as input to the pre-trained ResNet50 model for glioma subclass classification. The network is trained on 80% and validated on 20% of the data. The proposed method achieved a classification accuracy of 99.25%, precision of 99.30%, recall of 99.10%, F1 score of 99.19%, Intersection over Union of 84.49%, and specificity of 99.76%, which showed significantly higher performance than existing techniques. These findings emphasize the significance of glioma segmentation and classification in aiding accurate diagnosis.
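Below is a minimal Python sketch of the pipeline outlined in the abstract: max-min normalization, weighted-average fusion of segmented sequences, and a ResNet50 classifier. The equal fusion weights, the 240x240 slice size, and the random arrays standing in for UNET segmentation outputs are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import torch
from torchvision.models import resnet50


def min_max_normalize(img):
    """Max-min normalization: rescale intensities to [0, 1]."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo + 1e-8)


def fuse_modalities(segmented, weights=None):
    """Weighted-average fusion of segmented T1/T1ce/T2/FLAIR images.

    `segmented` maps a modality name to an array holding the UNET-segmented
    tumor region for that sequence. Equal weights are assumed here because
    the paper does not list the values it used.
    """
    if weights is None:
        weights = {m: 1.0 / len(segmented) for m in segmented}
    fused = sum(weights[m] * min_max_normalize(img) for m, img in segmented.items())
    return fused / sum(weights.values())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random stand-ins for four segmented 240x240 BraTS slices.
    slices = {m: rng.random((240, 240)) for m in ("t1", "t1ce", "t2", "flair")}
    fused = fuse_modalities(slices)                     # values stay in [0, 1]

    # Replicate the fused slice to 3 channels so it fits ResNet50's input.
    x = torch.tensor(fused, dtype=torch.float32).repeat(3, 1, 1).unsqueeze(0)
    model = resnet50(num_classes=4)   # 4 subclasses; the paper fine-tunes a pre-trained model
    model.eval()
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)               # torch.Size([1, 4])
```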
Related papers
- Glioma Classification using Multi-sequence MRI and Novel Wavelets-based Feature Fusion [0.0]
Glioma, a prevalent and heterogeneous tumor originating from the glial cells, can be differentiated as Low Grade Glioma (LGG) and High Grade Glioma (HGG) according to WHO norms.
For non-invasive glioma evaluation, Magnetic Resonance Imaging (MRI) offers vital information about the morphology and location of the tumor.
In this work, a novel wavelet-based fusion algorithm was implemented on multi-sequence T1, T1-contrast enhanced (T1CE), T2, and Fluid Attenuated Inversion Recovery (FLAIR) MRI images to compute radiomics features.
arXiv Detail & Related papers (2025-02-28T04:58:41Z)
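For orientation, here is a rough PyWavelets sketch of one way the wavelet-domain fusion named in the entry above could be set up; the db2 wavelet and the average/max-absolute fusion rules are common defaults, not the authors' documented choices.

```python
import numpy as np
import pywt  # PyWavelets


def wavelet_fuse(images, wavelet="db2"):
    """Fuse co-registered MRI sequences in the wavelet domain.

    Approximation coefficients are averaged; detail coefficients are
    combined with a maximum-absolute-value rule. Both rules (and the
    wavelet family) are assumptions, not the paper's stated choices.
    """
    decomps = [pywt.dwt2(img, wavelet) for img in images]     # (cA, (cH, cV, cD))
    fused_approx = np.mean([cA for cA, _ in decomps], axis=0)
    fused_details = []
    for band in range(3):                                     # cH, cV, cD
        stacked = np.stack([details[band] for _, details in decomps])
        idx = np.argmax(np.abs(stacked), axis=0)
        fused_details.append(np.take_along_axis(stacked, idx[None], axis=0)[0])
    return pywt.idwt2((fused_approx, tuple(fused_details)), wavelet)


# Example with random stand-ins for T1, T1CE, T2 and FLAIR slices.
rng = np.random.default_rng(1)
fused = wavelet_fuse([rng.random((128, 128)) for _ in range(4)])
print(fused.shape)  # (128, 128)
```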
- Lumbar Spine Tumor Segmentation and Localization in T2 MRI Images Using AI [2.9746083684997418]
This study introduces a novel data augmentation technique, aimed at automating spine tumor segmentation and localization through AI approaches.
A Convolutional Neural Network (CNN) architecture is employed for tumor classification. 3D vertebral segmentation and labeling techniques are used to help pinpoint the exact location of the tumors in the lumbar spine.
Results indicate a remarkable performance, with 99% accuracy for tumor segmentation, 98% accuracy for tumor classification, and 99% accuracy for tumor localization achieved with the proposed approach.
arXiv Detail & Related papers (2024-05-07T05:55:50Z)
- A Deep Learning Approach for Brain Tumor Classification and Segmentation Using a Multiscale Convolutional Neural Network [0.0]
We present a fully automatic brain tumor segmentation and classification model using a Deep Convolutional Neural Network.
Our method remarkably obtained a tumor classification accuracy of 0.973, higher than the other approaches using the same database.
arXiv Detail & Related papers (2024-02-04T17:47:03Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Neural Gas Network Image Features and Segmentation for Brain Tumor Detection Using Magnetic Resonance Imaging Data [0.0]
This research uses the metaheuristic Firefly Algorithm (FA) for image contrast enhancement as pre-processing.
Also, tumor classification is conducted by Support Vector Machine (SVM) classification algorithms and compared with a deep learning technique.
A classification accuracy of 95.14% and a segmentation accuracy of 0.977 are achieved by the proposed method.
arXiv Detail & Related papers (2023-01-28T12:16:37Z)
- Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on structural similarity index (SSIM) regression is proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
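A toy illustration of the SSIM-regression idea from the entry above: SSIM against a motion-free reference provides the training target, and a regressor then predicts that score from the corrupted image alone. The Gaussian-noise corruption, the Ridge regressor, and the data sizes are hypothetical stand-ins for whatever model and artefacts the original work uses.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

# Toy data: "motion-free" references and artefact-corrupted copies
# (Gaussian noise stands in for real motion artefacts).
clean = rng.random((50, 64, 64))
corrupted = np.clip(clean + rng.normal(scale=0.1, size=clean.shape), 0.0, 1.0)

# Step 1: SSIM against the reference is the regression target.
targets = np.array([ssim(c, d, data_range=1.0) for c, d in zip(clean, corrupted)])

# Step 2: a regressor learns to predict SSIM from the corrupted image alone,
# so quality can later be scored without a reference. Ridge on flattened
# pixels is only a toy stand-in for a learned image-quality model.
features = corrupted.reshape(len(corrupted), -1)
quality_model = Ridge(alpha=1.0).fit(features, targets)
print(quality_model.predict(features[:3]))   # reference-free quality estimates
```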
- Negligible effect of brain MRI data preprocessing for tumor segmentation [36.89606202543839]
We conduct experiments on three publicly available datasets and evaluate the effect of different preprocessing steps in deep neural networks.
Our results demonstrate that most popular standardization steps add no value to the network performance.
We suggest that image intensity normalization approaches do not contribute to model accuracy because of the reduction of signal variance with image standardization.
arXiv Detail & Related papers (2022-04-11T17:29:36Z)
- A Data Augmentation Method for Fully Automatic Brain Tumor Segmentation [2.9364290037516496]
An augmentation method, called Mixup, was proposed and applied to the three-dimensional U-Net architecture for brain tumor segmentation.
The experimental results show that the mean Dice scores are 91.32%, 85.67%, and 82.20% on whole tumor, tumor core, and enhancing tumor segmentation, respectively.
arXiv Detail & Related papers (2022-02-13T15:28:49Z)
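Mixup itself is well documented elsewhere; the sketch below shows how it is typically applied to volumetric image/label pairs with soft labels, as in the entry above. The Beta(0.2, 0.2) coefficient and the toy BraTS-like shapes are assumptions, not the paper's settings.

```python
import numpy as np


def mixup_3d(vol_a, seg_a, vol_b, seg_b, alpha=0.2, rng=None):
    """Mixup for volumetric segmentation: blend two MRI volumes and their
    one-hot label maps with the same Beta-distributed coefficient.

    alpha=0.2 is a common default; the paper's exact value is not given here.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    mixed_vol = lam * vol_a + (1.0 - lam) * vol_b
    mixed_seg = lam * seg_a + (1.0 - lam) * seg_b   # soft labels for the 3D U-Net loss
    return mixed_vol, mixed_seg, lam


# Example: two random 4-channel BraTS-like volumes with 4-class one-hot labels.
rng = np.random.default_rng(3)
vol_a, vol_b = rng.random((2, 4, 32, 32, 32))
labels = rng.integers(0, 4, size=(2, 32, 32, 32))
onehot = np.eye(4)[labels]                           # (2, 32, 32, 32, 4)
seg_a, seg_b = onehot.transpose(0, 4, 1, 2, 3)       # channels-first one-hot labels
mixed_vol, mixed_seg, lam = mixup_3d(vol_a, seg_a, vol_b, seg_b)
print(mixed_vol.shape, mixed_seg.shape, round(lam, 3))
```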
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
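To make the "segmentation task incorporated into a classification network" idea from the entry above concrete, here is a deliberately tiny PyTorch module with a shared encoder and separate classification and segmentation heads trained under a joint loss. It is not the EMT-NET architecture; all layer sizes and the joint loss weighting are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMultitaskNet(nn.Module):
    """Toy shared-encoder network with a classification head and a
    segmentation head -- illustrates the multitask idea only."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )
        self.seg_head = nn.Conv2d(32, 1, 1)   # per-pixel tumor-mask logits

    def forward(self, x):
        feats = self.encoder(x)
        return self.cls_head(feats), self.seg_head(feats)


# Joint loss: the segmentation term pushes the shared encoder toward tumor regions.
model = TinyMultitaskNet()
x = torch.randn(2, 1, 128, 128)                      # mini-batch of ultrasound-like images
cls_target = torch.randint(0, 2, (2,))
seg_target = torch.randint(0, 2, (2, 1, 128, 128)).float()
cls_logits, seg_logits = model(x)
loss = F.cross_entropy(cls_logits, cls_target) + \
       F.binary_cross_entropy_with_logits(seg_logits, seg_target)
loss.backward()
print(float(loss))
```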
- Multi-Scale Input Strategies for Medulloblastoma Tumor Classification using Deep Transfer Learning [59.30734371401316]
Medulloblastoma is the most common malignant brain cancer among children.
CNN has shown promising results for MB subtype classification.
We study the impact of tile size and input strategy.
arXiv Detail & Related papers (2021-09-14T09:42:37Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
- Region of Interest Identification for Brain Tumors in Magnetic Resonance Images [8.75217589103206]
We propose a fast, automated method, with light computational complexity, to find the smallest bounding box around the tumor region.
This region-of-interest can be used as a preprocessing step in training networks for subregion tumor segmentation.
The proposed method is evaluated on the BraTS 2015 dataset, and the average Dice score obtained is 0.73, which is an acceptable result for this application.
arXiv Detail & Related papers (2020-02-26T14:10:40Z)
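As a small illustration of the bounding-box idea in the last entry, the NumPy sketch below computes the tightest box around a rough foreground mask and crops the volume to it. The thresholding used to obtain that mask is a hypothetical stand-in for the paper's actual ROI-detection method; only the cropping step is shown.

```python
import numpy as np


def smallest_bounding_box(mask):
    """Return the tightest (zmin, zmax, ymin, ymax, xmin, xmax) box that
    encloses all nonzero voxels of a 3D binary mask."""
    coords = np.argwhere(mask)
    if coords.size == 0:
        raise ValueError("mask is empty")
    mins = coords.min(axis=0)
    maxs = coords.max(axis=0) + 1           # make the upper bound exclusive
    return tuple(np.stack([mins, maxs], axis=1).ravel())


# Example: crop a BraTS-sized volume to a rough foreground mask obtained by
# simple thresholding (a stand-in for the paper's bounding-box detector).
rng = np.random.default_rng(4)
volume = rng.random((155, 240, 240))
rough_mask = volume > 0.995                 # hypothetical "tumor-like" voxels
zmin, zmax, ymin, ymax, xmin, xmax = smallest_bounding_box(rough_mask)
roi = volume[zmin:zmax, ymin:ymax, xmin:xmax]
print(roi.shape)
```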
This list is automatically generated from the titles and abstracts of the papers on this site.