Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans
- URL: http://arxiv.org/abs/2010.10612v1
- Date: Tue, 20 Oct 2020 20:42:52 GMT
- Title: Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans
- Authors: Mohammad Hamghalam, Baiying Lei, and Tianfu Wang
- Abstract summary: We devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model.
In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict the class label of the central voxel in a given patch.
- Score: 22.60715394470069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structural magnetic resonance imaging (MRI) has been widely utilized for the
analysis and diagnosis of brain diseases. Automatic segmentation of brain tumors is a
challenging task for computer-aided diagnosis due to low tissue contrast in the tumor
subregions. To overcome this, we devise a novel pixel-wise segmentation framework
through a convolutional 3D to 2D MR patch conversion model that predicts the class
label of the central pixel in each input sliding patch. Precisely, we first extract 3D
patches from each modality and calibrate their slices through a squeeze-and-excitation
(SE) block. The output of the SE block is then fed directly into subsequent bottleneck
layers to reduce the number of channels. Finally, the calibrated 2D slices are
concatenated to obtain multimodal features, and a 2D convolutional neural network
(CNN) predicts the label of the central pixel. In our architecture, both local
inter-slice and global intra-slice features are jointly exploited by the 2D CNN
classifier to predict the class label of the central voxel in a given patch. All
modalities are applied implicitly, with trainable parameters assigning weights to the
contribution of each sequence to the segmentation. Experimental results on the
segmentation of brain tumors in multimodal MRI scans (BraTS'19) demonstrate that our
proposed method can efficiently segment the tumor regions.
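A minimal sketch of this pipeline is given below, assuming PyTorch; the patch shape (five 33x33 slices per modality), the layer widths, and names such as `PatchConversion2D` and `CentralPixelCNN` are illustrative assumptions rather than the authors' released implementation:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation over the slice dimension of a 3D patch."""
    def __init__(self, n_slices, reduction=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze each slice to a scalar
        self.fc = nn.Sequential(
            nn.Linear(n_slices, n_slices // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_slices // reduction, n_slices),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, n_slices, H, W)
        w = self.pool(x).flatten(1)                  # (B, n_slices)
        w = self.fc(w).view(x.size(0), -1, 1, 1)     # per-slice excitation weights
        return x * w                                 # recalibrated slices

class PatchConversion2D(nn.Module):
    """3D patch -> calibrated 2D feature map, one branch per MR modality."""
    def __init__(self, n_slices=5, out_channels=1):
        super().__init__()
        self.se = SEBlock(n_slices)
        # bottleneck: 1x1 conv collapses the slice channels into a single 2D map
        self.bottleneck = nn.Conv2d(n_slices, out_channels, kernel_size=1)

    def forward(self, x):
        return self.bottleneck(self.se(x))

class CentralPixelCNN(nn.Module):
    """2D CNN classifying the central pixel from concatenated modality maps."""
    def __init__(self, n_modalities=4, n_classes=4):
        super().__init__()
        self.branches = nn.ModuleList(
            [PatchConversion2D() for _ in range(n_modalities)]
        )
        self.cnn = nn.Sequential(
            nn.Conv2d(n_modalities, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, patches):  # list of (B, n_slices, H, W), one per modality
        fused = torch.cat([b(p) for b, p in zip(self.branches, patches)], dim=1)
        return self.cnn(fused)   # class logits for the central voxel

# usage: four modalities (e.g., T1, T1ce, T2, FLAIR), 5-slice 33x33 patches
model = CentralPixelCNN()
patches = [torch.randn(2, 5, 33, 33) for _ in range(4)]
print(model(patches).shape)      # torch.Size([2, 4])
```

Global average pooling here stands in for whatever aggregation precedes the authors' final classifier; the essential structure is that each modality's 3D patch is recalibrated slice-wise (SE block), collapsed to a 2D map (bottleneck), and the per-modality maps are concatenated before the 2D CNN predicts the central voxel's label.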
Related papers
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the Segment Anything Model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms state-of-the-art domain-specific medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - 3D Brainformer: 3D Fusion Transformer for Brain Tumor Segmentation [6.127298607534532]
Deep learning has recently emerged to improve brain tumor segmentation.
Transformers have been leveraged to address the limitations of convolutional networks.
We propose a 3D Transformer-based segmentation approach.
arXiv Detail & Related papers (2023-04-28T02:11:29Z) - View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view by view (see the sketch after this list).
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z) - Med-DANet: Dynamic Architecture Network for Efficient Medical Volumetric Segmentation [13.158995287578316]
We propose a dynamic architecture network named Med-DANet to achieve an effective trade-off between accuracy and efficiency.
For each slice of the input 3D MRI volume, our proposed method learns a slice-specific decision by the Decision Network.
Our proposed method achieves comparable or better results than previous state-of-the-art methods for 3D MRI brain tumor segmentation.
arXiv Detail & Related papers (2022-06-14T03:25:58Z) - CORPS: Cost-free Rigorous Pseudo-labeling based on Similarity-ranking for Brain MRI Segmentation [3.1657395760137406]
We propose a semi-supervised segmentation framework built upon a novel atlas-based pseudo-labeling method and a 3D deep convolutional neural network (DCNN) for 3D brain MRI segmentation.
The experimental results demonstrate the superiority of the proposed framework over the baseline method both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-05-19T14:42:49Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images [7.334185314342017]
We propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR).
The model extracts features at five different resolutions by utilizing shifted windows for computing self-attention.
We have participated in BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase.
arXiv Detail & Related papers (2022-01-04T18:01:34Z) - Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - Brain tumour segmentation using cascaded 3D densely-connected U-net [10.667165962654996]
We propose a deep-learning based method to segment a brain tumour into its subregions.
The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture.
Experimental results on the BraTS20 validation dataset demonstrate that the proposed model achieved average Dice Scores of 0.90, 0.82, and 0.78 for whole tumour, tumour core and enhancing tumour respectively.
arXiv Detail & Related papers (2020-09-16T09:14:59Z)
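As a side note on the multi-view idea referenced above (View-Disentangled Transformer), the following is a minimal sketch of treating one 3D scan as three stacks of 2D views, assuming PyTorch; the shared encoder and all shapes are illustrative assumptions, not that paper's architecture:

```python
import torch
import torch.nn as nn

def volume_to_views(vol):
    """Split a 3D scan (B, 1, D, H, W) into axial/coronal/sagittal 2D slice stacks."""
    b, c, d, h, w = vol.shape
    axial = vol.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)     # D slices of (H, W)
    coronal = vol.permute(0, 3, 1, 2, 4).reshape(b * h, c, d, w)   # H slices of (D, W)
    sagittal = vol.permute(0, 4, 1, 2, 3).reshape(b * w, c, d, h)  # W slices of (D, H)
    return axial, coronal, sagittal

# a shared 2D encoder enhances each view's slice features independently
encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())

vol = torch.randn(2, 1, 16, 32, 32)
for view in volume_to_views(vol):
    print(encoder(view).shape)  # per-view slice features, enhanced view by view
```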
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.