Self-calibrated convolution towards glioma segmentation
- URL: http://arxiv.org/abs/2402.05218v1
- Date: Wed, 7 Feb 2024 19:51:13 GMT
- Title: Self-calibrated convolution towards glioma segmentation
- Authors: Felipe C. R. Salvagnini and Gerson O. Barbosa and Alexandre X. Falcao
and Cid A. N. Santos
- Abstract summary: We evaluate self-calibrated convolutions in different parts of the nnU-Net network to demonstrate that self-calibrated modules in skip connections can significantly improve the enhanced-tumor and tumor-core segmentation accuracy.
- Score: 45.74830585715129
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate brain tumor segmentation in the early stages of the disease is
crucial for the treatment's effectiveness and spares a qualified specialist the
exhaustive visual inspection of 3D MR brain images acquired with multiple
protocols (e.g., T1, T2, T2-FLAIR, T1-Gd). Several networks exist for glioma
segmentation, nnU-Net being one of the best. In this work, we evaluate
self-calibrated convolutions in different parts of the nnU-Net network to
demonstrate that self-calibrated modules in skip connections can significantly
improve the enhanced-tumor and tumor-core segmentation accuracy while
preserving the whole-tumor segmentation accuracy.
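The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of a self-calibrated convolution block in the spirit of Liu et al. (CVPR 2020), written for 3D volumes as it might sit on a U-Net skip connection; the channel split, kernel sizes, pooling rate, and placement are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfCalibratedConv3d(nn.Module):
    """Minimal 3D self-calibrated convolution (after Liu et al., CVPR 2020).

    Half the channels follow a plain convolution path; the other half are
    gated by a sigmoid "calibration" map computed from a pooled, wider-context
    view of the input. All hyperparameters here are illustrative assumptions.
    """

    def __init__(self, channels: int, pooling_rate: int = 4):
        super().__init__()
        assert channels % 2 == 0, "channels must be even to split into two paths"
        half = channels // 2
        # Plain path.
        self.conv_plain = nn.Conv3d(half, half, kernel_size=3, padding=1)
        # Calibration branch: pooled view, full-resolution response, fusion conv.
        self.pool = nn.AvgPool3d(kernel_size=pooling_rate, stride=pooling_rate)
        self.conv_pooled = nn.Conv3d(half, half, kernel_size=3, padding=1)
        self.conv_response = nn.Conv3d(half, half, kernel_size=3, padding=1)
        self.conv_fuse = nn.Conv3d(half, half, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)

        # Self-calibrated path: gate the full-resolution response with a map
        # derived from the down-sampled (larger receptive field) view.
        context = F.interpolate(self.conv_pooled(self.pool(x1)),
                                size=x1.shape[2:], mode="trilinear",
                                align_corners=False)
        gate = torch.sigmoid(x1 + context)
        calibrated = self.conv_fuse(self.conv_response(x1) * gate)

        # Plain path keeps the original spatial detail.
        plain = self.conv_plain(x2)
        return torch.cat([calibrated, plain], dim=1)


# Hypothetical placement on a skip connection: calibrate encoder features
# before they are concatenated with the decoder features.
if __name__ == "__main__":
    skip = torch.randn(1, 32, 32, 64, 64)           # (B, C, D, H, W) encoder features
    refined = SelfCalibratedConv3d(channels=32)(skip)
    print(refined.shape)                             # torch.Size([1, 32, 32, 64, 64])
```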
Related papers
- MBDRes-U-Net: Multi-Scale Lightweight Brain Tumor Segmentation Network [0.0]
This study proposes MBDRes-U-Net, a model built on the three-dimensional (3D) U-Net framework that integrates multibranch residual blocks and fused attention.
The branch strategy reduces the model's computational burden while effectively exploiting the rich local features of multimodal images.
arXiv Detail & Related papers (2024-11-04T09:03:43Z) - Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and transformer layers.
Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - 3D PETCT Tumor Lesion Segmentation via GCN Refinement [4.929126432666667]
We propose a post-processing method based on a graph convolutional neural network (GCN) to refine inaccurate segmentation parts.
We perform tumor segmentation experiments on the PET/CT dataset from the MICCAI 2022 autoPET challenge.
The experimental results show that the false positive rate is effectively reduced with nnUNet-GCN refinement.
arXiv Detail & Related papers (2023-02-24T10:52:08Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - Learning from partially labeled data for multi-organ and tumor
segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z) - Moving from 2D to 3D: volumetric medical image classification for rectal
cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z) - Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate that the proposed network achieves state-of-the-art accuracy on small-organ segmentation and outperforms the previous best method.
arXiv Detail & Related papers (2022-04-16T18:00:29Z) - Feature-enhanced Generation and Multi-modality Fusion based Deep Neural
Network for Brain Tumor Segmentation with Missing MR Modalities [2.867517731896504]
The main problem is that not all MRI modalities are always available in clinical exams.
We propose a novel brain tumor segmentation network for cases in which one or more modalities are missing.
The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block and a segmentation network.
arXiv Detail & Related papers (2021-11-08T10:59:40Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Brain tumour segmentation using cascaded 3D densely-connected U-net [10.667165962654996]
We propose a deep-learning based method to segment a brain tumour into its subregions.
The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture.
Experimental results on the BraTS20 validation dataset demonstrate that the proposed model achieved average Dice scores of 0.90, 0.82, and 0.78 for whole tumour, tumour core, and enhancing tumour, respectively.
arXiv Detail & Related papers (2020-09-16T09:14:59Z)
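For reference, the tumor-subregion Dice scores quoted above (whole tumor, tumor core, enhancing tumor) are typically computed from binary masks of each region. Below is a minimal NumPy sketch; the label-to-region mapping follows the common BraTS labeling convention and is an assumption, not taken from any of the papers listed here.

```python
import numpy as np

# Common BraTS label convention (an assumption; check the dataset release used):
# 1 = necrotic / non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor.
REGIONS = {
    "whole_tumor": (1, 2, 4),
    "tumor_core": (1, 4),
    "enhancing_tumor": (4,),
}


def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A & B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def brats_region_dice(pred_labels: np.ndarray, gt_labels: np.ndarray) -> dict:
    """Per-region Dice from integer label maps (e.g., 3D segmentation volumes)."""
    return {
        name: dice(np.isin(pred_labels, labels), np.isin(gt_labels, labels))
        for name, labels in REGIONS.items()
    }


# Toy usage with random label volumes (real inputs would be full MR label maps).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.choice([0, 1, 2, 4], size=(8, 8, 8))
    gt = rng.choice([0, 1, 2, 4], size=(8, 8, 8))
    print(brats_region_dice(pred, gt))
```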