3D Brainformer: 3D Fusion Transformer for Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2304.14508v1
- Date: Fri, 28 Apr 2023 02:11:29 GMT
- Title: 3D Brainformer: 3D Fusion Transformer for Brain Tumor Segmentation
- Authors: Rui Nian, Guoyao Zhang, Yao Sui, Yuqi Qian, Qiuying Li, Mingzhang
Zhao, Jianhui Li, Ali Gholipour, and Simon K. Warfield
- Abstract summary: Deep learning has recently emerged to improve brain tumor segmentation.
Transformers have been leveraged to address the limitations of convolutional networks.
We propose a 3D Transformer-based segmentation approach.
- Score: 6.127298607534532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic resonance imaging (MRI) is critically important for brain mapping in
both scientific research and clinical studies. Precise segmentation of brain
tumors facilitates clinical diagnosis, evaluations, and surgical planning. Deep
learning has recently emerged to improve brain tumor segmentation and achieved
impressive results. Convolutional architectures are widely used to implement
those neural networks. Owing to their limited receptive fields, however, those architectures struggle to represent long-range spatial dependencies among the voxel intensities in MRI images. Transformers have recently been leveraged to address this limitation of convolutional networks.
Unfortunately, the majority of current Transformers-based methods in
segmentation operate on 2D MRI slices rather than 3D volumes. Moreover, it is difficult to incorporate structural information shared across attention heads, because each head is computed independently in the Multi-Head Self-Attention (MHSA) mechanism.
In this work, we propose a 3D Transformer-based segmentation approach. We develop a Fusion-Head Self-Attention mechanism (FHSA) that combines the attention heads through attention logic and weight mapping, to explore long-range spatial dependencies in 3D MRI images. We implement a plug-and-play self-attention module, named the Infinite Deformable Fusion Transformer Module (IDFTM), to extract features from any deformable feature maps. We applied our approach to the task of brain tumor segmentation and assessed it on the public BraTS datasets. The experimental results demonstrate that our proposed approach achieves superior performance compared with several state-of-the-art segmentation methods.
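The abstract contrasts standard multi-head self-attention, where each head is computed independently, with a fusion of heads through weight mapping. As a rough illustration (not the paper's actual FHSA, whose attention logic and weight mapping are more elaborate), the following NumPy sketch computes independent attention heads and then mixes them with a head-mixing weight matrix; all function names, shapes, and the random projections are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, n_heads, rng):
    """Standard MHSA: each head attends over the sequence independently."""
    n, d = x.shape
    dh = d // n_heads
    # Random projections stand in for learned query/key/value weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        attn = softmax(q[:, sl] @ k[:, sl].T / np.sqrt(dh))
        heads.append(attn @ v[:, sl])  # computed independently of other heads
    return heads

def fuse_heads(heads, fusion_weights):
    """Hypothetical fusion step: a weight mapping mixes information across
    heads before concatenation, so every output head sees all input heads."""
    stacked = np.stack(heads)                     # (H, n, dh)
    w = softmax(fusion_weights)                   # normalize the mixing weights
    fused = np.einsum('gh,hnd->gnd', w, stacked)  # mix across the head axis
    H, n, dh = fused.shape
    return fused.transpose(1, 0, 2).reshape(n, H * dh)

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 8))  # 6 tokens (e.g. flattened voxels), dim 8
heads = multi_head_self_attention(x, n_heads=2, rng=rng)
out = fuse_heads(heads, rng.standard_normal((2, 2)))
print(out.shape)  # (6, 8)
```

The key contrast is in `fuse_heads`: plain MHSA would simply concatenate the head outputs, whereas the fusion step lets information flow between heads before the outputs are combined.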
Related papers
- Exploration of Multi-Scale Image Fusion Systems in Intelligent Medical Image Analysis [3.881664394416534]
It is necessary to perform automatic segmentation of brain tumors on MRI images.
This project intends to build an MRI algorithm based on U-Net.
arXiv Detail & Related papers (2024-05-23T04:33:12Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view-by-view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
The current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images [7.334185314342017]
We propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR).
The model extracts features at five different resolutions by utilizing shifted windows for computing self-attention.
We have participated in BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase.
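The shifted-window idea mentioned above can be sketched in isolation: window attention partitions the feature map into non-overlapping patches, and a cyclic shift makes the next layer's windows straddle the previous layer's window boundaries. This is a generic Swin-style illustration in 2D NumPy, not Swin UNETR's actual implementation; `window_partition` and `shifted_windows` are hypothetical helper names:

```python
import numpy as np

def window_partition(x, window):
    """Split a square feature map into non-overlapping window x window patches."""
    H, W = x.shape
    return (x.reshape(H // window, window, W // window, window)
             .transpose(0, 2, 1, 3)
             .reshape(-1, window, window))

def shifted_windows(x, window, shift):
    """Cyclically shift the map so that the next attention layer's windows
    straddle the previous layer's window boundaries (Swin-style shifting)."""
    return window_partition(np.roll(x, (-shift, -shift), axis=(0, 1)), window)

x = np.arange(16).reshape(4, 4)
plain = window_partition(x, window=2)            # 4 windows of shape (2, 2)
shifted = shifted_windows(x, window=2, shift=1)  # windows cross old boundaries
print(plain.shape, shifted.shape)  # (4, 2, 2) (4, 2, 2)
```

In a full model, self-attention is computed within each window, and alternating between the plain and shifted partitions lets information propagate across window boundaries at successive resolutions.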
arXiv Detail & Related papers (2022-01-04T18:01:34Z)
- Neural Architecture Search for Gliomas Segmentation on Multimodal Magnetic Resonance Imaging [2.66512000865131]
We propose a neural architecture search (NAS) based solution to brain tumor segmentation tasks on multimodal MRI scans.
The developed solution also integrates normalization and patching strategies tailored for brain MRI processing.
arXiv Detail & Related papers (2020-05-13T14:32:00Z)
- Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs [2.4736005621421686]
Multimodal brain tumor segmentation challenge (BraTS) brings together researchers to improve automated methods for 3D MRI brain tumor segmentation.
We evaluate the method on BraTS 2019 challenge.
arXiv Detail & Related papers (2020-01-06T07:47:42Z)
- Transfer Learning for Brain Tumor Segmentation [0.6408773096179187]
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery.
Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks.
In this work, we construct FCNs with pretrained convolutional encoders. We show that this stabilizes the training process and yields improvements in Dice scores and Hausdorff distances.
arXiv Detail & Related papers (2019-12-28T12:45:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.