Attention-Enhanced Hybrid Feature Aggregation Network for 3D Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2403.09942v1
- Date: Fri, 15 Mar 2024 00:52:17 GMT
- Title: Attention-Enhanced Hybrid Feature Aggregation Network for 3D Brain Tumor Segmentation
- Authors: Ziya Ata Yazıcı, İlkay Öksüz, Hazım Kemal Ekenel
- Abstract summary: Glioblastoma is a highly aggressive and malignant brain tumor type that requires early diagnosis and prompt intervention.
To address this challenge, Artificial Intelligence (AI)-driven approaches in healthcare have generated interest in efficiently diagnosing and evaluating brain tumors.
In our approach, we utilize a multi-scale, attention-guided and hybrid U-Net-shaped model -- GLIMS -- to perform 3D brain tumor segmentation.
- Score: 0.9897828700959131
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Glioblastoma is a highly aggressive and malignant brain tumor type that requires early diagnosis and prompt intervention. Due to its heterogeneity in appearance, developing automated detection approaches is challenging. To address this challenge, Artificial Intelligence (AI)-driven approaches in healthcare have generated interest in efficiently diagnosing and evaluating brain tumors. The Brain Tumor Segmentation Challenge (BraTS) is a platform for developing and assessing automated techniques for tumor analysis using high-quality, clinically acquired MRI data. In our approach, we utilized a multi-scale, attention-guided and hybrid U-Net-shaped model -- GLIMS -- to perform 3D brain tumor segmentation in three regions: Enhancing Tumor (ET), Tumor Core (TC), and Whole Tumor (WT). The multi-scale feature extraction provides better contextual feature aggregation in high resolutions and the Swin Transformer blocks improve the global feature extraction at deeper levels of the model. The segmentation mask generation in the decoder branch is guided by the attention-refined features gathered from the encoder branch to enhance the important attributes. Moreover, hierarchical supervision is used to train the model efficiently. Our model's performance on the validation set resulted in 92.19, 87.75, and 83.18 Dice Scores and 89.09, 84.67, and 82.15 Lesion-wise Dice Scores in WT, TC, and ET, respectively. The code is publicly available at https://github.com/yaziciz/GLIMS.
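The abstract attributes part of the segmentation quality to attention-refined encoder features guiding the decoder. The authoritative implementation is the linked GLIMS repository; purely as a hedged illustration of the general idea (module name, channel sizes, and the additive gating formulation below are assumptions, not the authors' code), an attention-gated 3D skip connection can be sketched in PyTorch as follows:

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Illustrative additive attention gate for a 3D skip connection.

    The decoder (gating) signal re-weights encoder features so that the
    skip connection emphasizes tumor-relevant regions. Generic sketch,
    not the GLIMS implementation.
    """

    def __init__(self, enc_channels: int, dec_channels: int, inter_channels: int):
        super().__init__()
        self.w_enc = nn.Conv3d(enc_channels, inter_channels, kernel_size=1)
        self.w_dec = nn.Conv3d(dec_channels, inter_channels, kernel_size=1)
        self.psi = nn.Sequential(
            nn.Conv3d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat and dec_feat are assumed to share spatial size (D, H, W);
        # upsample dec_feat beforehand if they do not.
        attn = self.psi(torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
        return enc_feat * attn  # attention-refined skip features


if __name__ == "__main__":
    gate = AttentionGate3D(enc_channels=32, dec_channels=64, inter_channels=16)
    enc = torch.randn(1, 32, 16, 16, 16)
    dec = torch.randn(1, 64, 16, 16, 16)
    print(gate(enc, dec).shape)  # torch.Size([1, 32, 16, 16, 16])
```

In a U-Net-shaped decoder, the gated output would typically be concatenated with the upsampled decoder features before the next convolution stage.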
Related papers
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
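As a rough sketch of what a CNN-plus-transformer hybrid layer can look like (the arrangement, channel count, and head count are assumptions for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    """Toy CNN-plus-transformer hybrid: local features from convolutions,
    global context from self-attention over the flattened feature map."""

    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)                        # (B, C, H, W) local features
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C) tokens
        tokens = self.attn(tokens)              # global self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = ConvTransformerBlock()
    print(block(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```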
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- Hybrid Multihead Attentive Unet-3D for Brain Tumor Segmentation [0.0]
Brain tumor segmentation is a critical task in medical image analysis, aiding in the diagnosis and treatment planning of brain tumor patients.
Various deep learning-based techniques have made significant progress in this field; however, they still face limitations in accuracy due to the complex and variable nature of brain tumor morphology.
We propose a novel Hybrid Multihead Attentive U-Net architecture to address the challenges in accurate brain tumor segmentation.
arXiv Detail & Related papers (2024-05-22T02:46:26Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- CTVR-EHO TDA-IPH Topological Optimized Convolutional Visual Recurrent Network for Brain Tumor Segmentation and Classification [1.2499537119440245]
We develop a Topological Data Analysis-based Improved Persistent Homology approach together with Convolutional Transfer learning and Visual Recurrent learning models for brain tumor segmentation and classification.
When compared to other existing brain tumor segmentation and classification models, the proposed CTVR-EHO and TDA-IPH approaches show high accuracy (99.8%), high recall (99.23%), high precision (99.67%), and high F score (99.59%).
arXiv Detail & Related papers (2022-06-06T07:04:05Z)
- Federated Learning Enables Big Data for Rare Cancer Boundary Detection [98.5549882883963]
We present findings from the largest Federated ML study to date, involving data from 71 healthcare institutions across 6 continents.
We generate an automatic tumor boundary detector for the rare disease of glioblastoma.
We demonstrate a 33% improvement over a publicly trained model in delineating the surgically targetable tumor, and a 23% improvement for the tumor's entire extent.
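The study's actual aggregation protocol is defined in the cited paper; as a minimal sketch of the federated idea it builds on (FedAvg-style weighted parameter averaging, with names and weighting chosen here purely for illustration), local models can be combined without sharing patient data:

```python
from typing import Dict, List
import torch

def fedavg(state_dicts: List[Dict[str, torch.Tensor]],
           sample_counts: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of client model parameters (FedAvg-style sketch).

    Each participating site trains locally and only shares parameters,
    so raw patient data never leaves the institution.
    """
    total = float(sum(sample_counts))
    averaged: Dict[str, torch.Tensor] = {}
    for key in state_dicts[0]:
        averaged[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(state_dicts, sample_counts)
        )
    return averaged


if __name__ == "__main__":
    site_a = {"w": torch.tensor([1.0, 1.0])}
    site_b = {"w": torch.tensor([3.0, 3.0])}
    print(fedavg([site_a, site_b], sample_counts=[1, 3]))  # w -> tensor([2.5000, 2.5000])
```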
arXiv Detail & Related papers (2022-04-22T17:27:00Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
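As a minimal sketch of the multitask idea described above, where a segmentation term pushes a shared backbone toward tumor regions (the loss form and weighting are assumptions, not EMT-NET's exact objective):

```python
import torch
import torch.nn.functional as F

def multitask_loss(cls_logits: torch.Tensor,
                   cls_target: torch.Tensor,
                   seg_logits: torch.Tensor,
                   seg_target: torch.Tensor,
                   seg_weight: float = 1.0) -> torch.Tensor:
    """Joint objective: tumor classification plus pixel-wise segmentation.

    The segmentation term encourages the shared backbone to focus on
    tumor regions, which is the intuition stated in the entry above.
    """
    cls_loss = F.cross_entropy(cls_logits, cls_target)
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    return cls_loss + seg_weight * seg_loss


if __name__ == "__main__":
    cls_logits = torch.randn(4, 2)                 # e.g. benign vs. malignant
    cls_target = torch.randint(0, 2, (4,))
    seg_logits = torch.randn(4, 1, 64, 64)         # tumor mask logits
    seg_target = torch.randint(0, 2, (4, 1, 64, 64)).float()
    print(multitask_loss(cls_logits, cls_target, seg_logits, seg_target))
```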
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Feature-enhanced Generation and Multi-modality Fusion based Deep Neural Network for Brain Tumor Segmentation with Missing MR Modalities [2.867517731896504]
The main problem is that not all MR modalities are always available in clinical exams.
We propose a novel brain tumor segmentation network in the case of missing one or more modalities.
The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block and a segmentation network.
arXiv Detail & Related papers (2021-11-08T10:59:40Z)
- Triplet Contrastive Learning for Brain Tumor Classification [99.07846518148494]
We present a novel approach of directly learning deep embeddings for brain tumor types, which can be used for downstream tasks such as classification.
We evaluate our method on an extensive brain tumor dataset which consists of 27 different tumor classes, out of which 13 are defined as rare.
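Triplet contrastive learning itself is standard; a minimal sketch of a triplet margin objective over learned tumor-type embeddings (the toy encoder and margin value are placeholders, not the paper's settings) looks like this:

```python
import torch
import torch.nn as nn

# Anchor and positive share a tumor type; the negative comes from a different type.
# The embedding network is a placeholder; any encoder producing a fixed-length
# vector per scan would fit here.
triplet = nn.TripletMarginLoss(margin=1.0, p=2)

embed = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128))  # toy encoder
anchor = embed(torch.randn(8, 1, 32, 32))
positive = embed(torch.randn(8, 1, 32, 32))
negative = embed(torch.randn(8, 1, 32, 32))

loss = triplet(anchor, positive, negative)  # pulls same-class embeddings together
print(loss.item())
```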
arXiv Detail & Related papers (2021-08-08T11:26:34Z)
- 3D AGSE-VNet: An Automatic Brain Tumor MRI Data Segmentation Framework [3.0261170901794308]
Glioma is the most common malignant brain tumor, with high morbidity and a mortality rate of more than three percent.
The main clinical method for imaging brain tumors is MRI, and tumor regions are segmented from multi-modal MRI scans.
We propose an automatic brain tumor MRI data segmentation framework which is called AGSE-VNet.
arXiv Detail & Related papers (2021-07-26T09:04:59Z)
- HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation [17.756591105686]
This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorization of 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BRATS 2020 testing set show that our proposed approach achieves Dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.
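As a hedged illustration of factorizing a 3D convolution (here, a 3x3x3 kernel replaced by a 3x3x1 followed by a 1x1x3 convolution; whether this matches HI-Net's exact factorization is not stated in the summary above):

```python
import torch
import torch.nn as nn

class Factorized3DConv(nn.Module):
    """Illustrative factorization of a 3x3x3 convolution into two cheaper
    convolutions (3x3x1 followed by 1x1x3). Generic sketch of convolution
    factorization, not necessarily HI-Net's exact block."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 1), padding=(1, 1, 0)),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, 3), padding=(0, 0, 1)),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


if __name__ == "__main__":
    m = Factorized3DConv(4, 8)
    print(m(torch.randn(1, 4, 16, 16, 16)).shape)  # torch.Size([1, 8, 16, 16, 16])
```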
arXiv Detail & Related papers (2020-12-12T09:09:04Z)
- STAN: Small Tumor-Aware Network for Breast Ultrasound Image Segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.