TransBTS: Multimodal Brain Tumor Segmentation Using Transformer
- URL: http://arxiv.org/abs/2103.04430v1
- Date: Sun, 7 Mar 2021 19:12:14 GMT
- Title: TransBTS: Multimodal Brain Tumor Segmentation Using Transformer
- Authors: Wenxuan Wang, Chen Chen, Meng Ding, Jiangyun Li, Hong Yu, Sen Zha
- Abstract summary: We propose a novel network named TransBTS based on the encoder-decoder structure.
To capture the local 3D context information, the encoder first utilizes 3D CNN to extract the volumetric feature maps.
The feature maps are then carefully reshaped into tokens that are fed into the Transformer for global feature modeling.
- Score: 9.296315610803985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformer, which can benefit from global (long-range) information modeling
using self-attention mechanisms, has been successful in natural language
processing and 2D image classification recently. However, both local and global
features are crucial for dense prediction tasks, especially for 3D medical
image segmentation. In this paper, we exploit the Transformer within a 3D CNN for
MRI brain tumor segmentation for the first time and propose a novel network named
TransBTS based on the encoder-decoder structure. To capture local 3D
context information, the encoder first utilizes a 3D CNN to extract
volumetric spatial feature maps. These feature maps are then carefully
reshaped into tokens that are fed into the Transformer for global feature
modeling. The decoder leverages the features embedded by the Transformer and
performs progressive upsampling to predict the detailed segmentation map.
Experimental results on the BraTS 2019 dataset show that TransBTS outperforms
state-of-the-art methods for brain tumor segmentation on 3D MRI scans. Code is
available at https://github.com/Wenxuan-1119/TransBTS
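The abstract outlines the pipeline at a high level: a 3D CNN encoder extracts local volumetric features, the resulting feature maps are reshaped into tokens for a Transformer that models global context, and a progressive-upsampling decoder predicts the segmentation map. Below is a minimal PyTorch sketch of that hybrid CNN-Transformer pattern, not the authors' implementation: the block names (ConvBlock3d, HybridCnnTransformerSeg3d), channel widths, stage counts, and Transformer depth are illustrative assumptions, and positional embeddings plus encoder-decoder skip connections are omitted for brevity; the official code at the repository linked above is authoritative.

```python
# Minimal sketch of a hybrid 3D-CNN + Transformer segmentation network in the
# spirit of the abstract. All layer names, sizes and stage counts are assumptions.
import torch
import torch.nn as nn


class ConvBlock3d(nn.Module):
    """3x3x3 conv -> instance norm -> ReLU, optionally downsampling (assumed block)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class HybridCnnTransformerSeg3d(nn.Module):
    def __init__(self, in_ch=4, num_classes=4, embed_dim=384, depth=4, heads=8):
        super().__init__()
        # 3D CNN encoder: captures local volumetric context, downsamples by 8x.
        self.encoder = nn.Sequential(
            ConvBlock3d(in_ch, 32, stride=1),
            ConvBlock3d(32, 64, stride=2),
            ConvBlock3d(64, 128, stride=2),
            ConvBlock3d(128, embed_dim, stride=2),
        )
        # Transformer over flattened voxel tokens: global (long-range) modeling.
        # Positional embeddings are omitted in this sketch.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        # Progressive upsampling decoder back to full resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(embed_dim, 128, kernel_size=2, stride=2),
            ConvBlock3d(128, 128),
            nn.ConvTranspose3d(128, 64, kernel_size=2, stride=2),
            ConvBlock3d(64, 64),
            nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2),
            ConvBlock3d(32, 32),
            nn.Conv3d(32, num_classes, kernel_size=1),
        )

    def forward(self, x):                          # x: (B, C, D, H, W)
        feat = self.encoder(x)                     # (B, E, D/8, H/8, W/8)
        b, e, d, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)   # (B, D/8*H/8*W/8, E) voxel tokens
        tokens = self.transformer(tokens)          # global feature modeling
        feat = tokens.transpose(1, 2).reshape(b, e, d, h, w)
        return self.decoder(feat)                  # (B, num_classes, D, H, W)


if __name__ == "__main__":
    # Toy BraTS-like input: 4 MRI modalities, small crop for a quick shape check.
    volume = torch.randn(1, 4, 32, 32, 32)
    logits = HybridCnnTransformerSeg3d()(volume)
    print(logits.shape)  # torch.Size([1, 4, 32, 32, 32])
```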
Related papers
- Focused Decoding Enables 3D Anatomical Detection by Transformers [64.36530874341666]
We propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder.
Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view.
We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results and thus alleviates the need for a vast amount of annotated data but also exhibits exceptional and highly intuitive explainability of results via attention weights.
arXiv Detail & Related papers (2022-07-21T22:17:21Z) - MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z) - Dynamic Linear Transformer for 3D Biomedical Image Segmentation [2.440109381823186]
Transformer-based neural networks have shown promising performance on many biomedical image segmentation tasks.
The main challenge for 3D transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism.
We propose a novel transformer architecture for 3D medical image segmentation using an encoder-decoder style architecture with linear complexity.
arXiv Detail & Related papers (2022-06-01T21:15:01Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - TransBTSV2: Wider Instead of Deeper Transformer for Medical Image
Segmentation [12.85662034471981]
We exploit Transformer in 3D CNN for 3D medical image segmentation.
We propose a novel network named TransBTSV2 based on the encoder-decoder structure.
As a hybrid CNN-Transformer architecture, TransBTSV2 can achieve accurate segmentation of medical images without any pre-training.
arXiv Detail & Related papers (2022-01-30T11:00:34Z) - ViTBIS: Vision Transformer for Biomedical Image Segmentation [0.0]
We propose a novel network named Vision Transformer for Biomedical Image Segmentation (ViTBIS).
Our network splits the input feature maps into three parts with $1\times 1$, $3\times 3$ and $5\times 5$ convolutions in both encoder and decoder.
arXiv Detail & Related papers (2022-01-15T20:44:45Z) - Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is an Unet-like pure Transformer for medical image segmentation.
The tokenized image patches are fed into the Transformer-based U-shaped Encoder-Decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - TransUNet: Transformers Make Strong Encoders for Medical Image
Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.