nnFormer: Interleaved Transformer for Volumetric Segmentation
- URL: http://arxiv.org/abs/2109.03201v3
- Date: Thu, 9 Sep 2021 05:09:27 GMT
- Title: nnFormer: Interleaved Transformer for Volumetric Segmentation
- Authors: Hong-Yu Zhou, Jiansen Guo, Yinghao Zhang, Lequan Yu, Liansheng Wang,
Yizhou Yu
- Abstract summary: We introduce nnFormer, a powerful segmentation model with an interleaved architecture based on an empirical combination of self-attention and convolution.
nnFormer achieves substantial improvements over previous transformer-based methods on two commonly used datasets, Synapse and ACDC.
- Score: 50.10441845967601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformers, the default model of choice in natural language
processing, have drawn scant attention from the medical imaging community.
Given their ability to exploit long-term dependencies, transformers are
promising to help typical convolutional neural networks (convnets) overcome
their inherent shortcomings of spatial inductive bias. However, most recently
proposed transformer-based segmentation approaches simply treat transformers
as auxiliary modules that encode global context into convolutional
representations, without investigating how to optimally combine self-attention
(i.e., the core of transformers) with convolution. To address this issue, we
introduce nnFormer (i.e., Not-aNother transFormer), a powerful segmentation
model with an interleaved architecture based on an empirical combination of
self-attention and convolution. In practice, nnFormer learns volumetric
representations from 3D local volumes. Compared to a naive voxel-level
self-attention implementation, such volume-based operations reduce the
computational complexity by approximately 98% and 99.5% on the Synapse and
ACDC datasets, respectively. Compared with prior network configurations,
nnFormer achieves substantial improvements over previous transformer-based
methods on the two commonly used datasets, Synapse and ACDC. For instance,
nnFormer outperforms Swin-UNet by over 7 percent on Synapse. Even compared to
nnUNet, currently the best-performing fully convolutional medical segmentation
network, nnFormer still provides slightly better performance on Synapse and
ACDC.
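For intuition, the complexity reduction above comes from restricting self-attention to non-overlapping local windows of voxels. The sketch below is a minimal, hypothetical PyTorch implementation of self-attention over 3D local volumes; the class name, window shape, single-head design, and (B, D, H, W, C) tensor layout are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class LocalVolumeAttention(nn.Module):
    """Self-attention restricted to non-overlapping 3D local volumes.

    A sketch only (assumed design, not the paper's): single head, no
    relative position bias, input dims divisible by the window shape.
    """

    def __init__(self, dim, window=(4, 4, 4)):
        super().__init__()
        self.window = window
        self.scale = dim ** -0.5
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, D, H, W, C) feature volume.
        B, D, H, W, C = x.shape
        wd, wh, ww = self.window
        # Partition the volume into non-overlapping windows of V voxels.
        x = x.view(B, D // wd, wd, H // wh, wh, W // ww, ww, C)
        x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, wd * wh * ww, C)
        # Attention is computed only within each window, so the cost is
        # O(N * V * C) rather than O(N^2 * C) for full voxel-level
        # attention over all N = D*H*W voxels.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)
        out = self.proj(attn @ v)
        # Undo the window partition back to (B, D, H, W, C).
        out = out.view(B, D // wd, H // wh, W // ww, wd, wh, ww, C)
        return out.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, D, H, W, C)


# Hypothetical example: a 64x128x128 volume has N = 1,048,576 voxels; with
# V = 4*4*4 = 64, each voxel attends to 64 neighbors instead of all N,
# cutting the attention cost by a factor of roughly N / V ~ 16,000x.
x = torch.randn(1, 8, 16, 16, 32)          # small toy volume (B, D, H, W, C)
print(LocalVolumeAttention(32)(x).shape)   # torch.Size([1, 8, 16, 16, 32])
```

With full attention every voxel attends to all N others, while windowed attention limits each voxel to the V voxels in its window, shrinking the attention cost by roughly a factor of V/N, which is consistent in spirit with the large reductions reported above.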
Related papers
- ParaTransCNN: Parallelized TransCNN Encoder for Medical Image Segmentation [7.955518153976858]
We propose an advanced 2D feature extraction method that combines convolutional neural network and Transformer architectures.
Our method achieves better segmentation accuracy, especially on small organs.
arXiv Detail & Related papers (2024-01-27T05:58:36Z)
- MS-Twins: Multi-Scale Deep Self-Attention Networks for Medical Image Segmentation [6.6467547151592505]
This paper proposes MS-Twins (Multi-Scale Twins), a powerful segmentation model built on the combination of self-attention and convolution.
Compared with existing network structures, MS-Twins improves on previous transformer-based methods on two commonly used datasets, Synapse and ACDC.
arXiv Detail & Related papers (2023-12-12T10:04:11Z)
- ConvFormer: Plug-and-Play CNN-Style Transformers for Improving Medical Image Segmentation [10.727162449071155]
We build CNN-style Transformers (ConvFormer) to promote better attention convergence and thus better segmentation performance.
Instead of positional embedding and tokenization, ConvFormer adopts 2D convolution and max-pooling to preserve position information and reduce feature size.
arXiv Detail & Related papers (2023-09-09T02:18:17Z)
- HiFormer: Hierarchical Multi-scale Representations Using Transformers for Medical Image Segmentation [3.478921293603811]
HiFormer is a novel method that efficiently bridges a CNN and a transformer for medical image segmentation.
To secure a fine fusion of global and local features, we propose a Double-Level Fusion (DLF) module in the skip connection of the encoder-decoder structure.
arXiv Detail & Related papers (2022-07-18T11:30:06Z)
- MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z)
- Adaptive Split-Fusion Transformer [90.04885335911729]
We propose an Adaptive Split-Fusion Transformer (ASF-former) to treat convolutional and attention branches differently with adaptive weights.
Experiments on standard benchmarks, such as ImageNet-1K, show that ASF-former outperforms its CNN and transformer counterparts, as well as hybrid baselines, in terms of accuracy.
arXiv Detail & Related papers (2022-04-26T10:00:28Z)
- Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is a Unet-like pure Transformer for medical image segmentation.
Tokenized image patches are fed into a Transformer-based U-shaped encoder-decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z)
- Finetuning Pretrained Transformers into RNNs [81.72974646901136]
Transformers have outperformed recurrent neural networks (RNNs) in natural language generation.
A linear-complexity recurrent variant has proven well suited for autoregressive generation.
This work aims to convert a pretrained transformer into its efficient recurrent counterpart.
arXiv Detail & Related papers (2021-03-24T10:50:43Z)
- CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z)