MIST: Medical Image Segmentation Transformer with Convolutional
Attention Mixing (CAM) Decoder
- URL: http://arxiv.org/abs/2310.19898v1
- Date: Mon, 30 Oct 2023 18:07:57 GMT
- Title: MIST: Medical Image Segmentation Transformer with Convolutional
Attention Mixing (CAM) Decoder
- Authors: Md Motiur Rahman, Shiva Shokouhmand, Smriti Bhatt, and Miad Faezipour
- Abstract summary: We propose a Medical Image Segmentation Transformer (MIST) incorporating a novel Convolutional Attention Mixing (CAM) decoder.
MIST has two parts: a pre-trained multi-axis vision transformer (MaxViT) is used as an encoder, and the encoded feature representation is passed through the CAM decoder for segmenting the images.
To enhance spatial information gain, deep and shallow convolutions are used for feature extraction and receptive field expansion, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the common and promising deep learning approaches used for medical
image segmentation is transformers, as they can capture long-range dependencies
among the pixels by utilizing self-attention. Despite being successful in
medical image segmentation, transformers face limitations in capturing local
contexts of pixels in multimodal dimensions. We propose a Medical Image
Segmentation Transformer (MIST) incorporating a novel Convolutional Attention
Mixing (CAM) decoder to address this issue. MIST has two parts: a pre-trained
multi-axis vision transformer (MaxViT) is used as an encoder, and the encoded
feature representation is passed through the CAM decoder for segmenting the
images. In the CAM decoder, an attention-mixer combining multi-head
self-attention, spatial attention, and squeeze and excitation attention modules
is introduced to capture long-range dependencies in all spatial dimensions.
Moreover, to enhance spatial information gain, deep and shallow convolutions
are used for feature extraction and receptive field expansion, respectively.
The integration of low-level and high-level features from different network
stages is enabled by skip connections, allowing MIST to suppress unnecessary
information. The experiments show that our MIST transformer with CAM decoder
outperforms the state-of-the-art models specifically designed for medical image
segmentation on the ACDC and Synapse datasets. Our results also demonstrate
that adding the CAM decoder with a hierarchical transformer improves
segmentation performance significantly. Our model with data and code is
publicly available on GitHub.
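To make the CAM decoder's description more concrete, here is a minimal PyTorch sketch of an attention-mixer combining the three branches the abstract names (multi-head self-attention, spatial attention, and squeeze-and-excitation attention), followed by a deep (depth-wise) and a shallow (point-wise) convolution. The module structure, the branch fusion by elementwise sum, and all sizes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze (global average pool), then excite (gated MLP)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze to (B, C), then gate
        return x * w[:, :, None, None]             # reweight channels

class SpatialAttention(nn.Module):
    """Spatial attention: weight each location using pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)         # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class AttentionMixer(nn.Module):
    """Illustrative CAM-style block: MHSA + spatial + squeeze-and-excitation
    branches fused by summation, then deep/shallow convolutions."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.mhsa = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.spatial = SpatialAttention()
        self.se = SqueezeExcitation(channels)
        # Deep (3x3 depth-wise) and shallow (1x1 point-wise) convolutions,
        # loosely mirroring the feature-extraction / receptive-field roles
        # the abstract assigns to deep and shallow convolutions.
        self.deep = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.shallow = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C) for MHSA
        sa, _ = self.mhsa(tokens, tokens, tokens)
        sa = sa.transpose(1, 2).reshape(b, c, h, w)
        mixed = sa + self.spatial(x) + self.se(x)  # fuse the three branches
        return self.shallow(self.deep(mixed))

feats = torch.randn(2, 64, 28, 28)                 # e.g. one encoder-stage feature map
print(AttentionMixer(64)(feats).shape)             # torch.Size([2, 64, 28, 28])
```

In a full decoder, one such block would presumably be applied per stage, with skip connections carrying the matching-resolution MaxViT encoder features into each stage, per the abstract's description.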
Related papers
- ASSNet: Adaptive Semantic Segmentation Network for Microtumors and Multi-Organ Segmentation [32.74195208408193]
Medical image segmentation is a crucial task in computer vision, supporting clinicians in diagnosis, treatment planning, and disease monitoring.
We propose the Adaptive Semantic Segmentation Network (ASSNet), a transformer architecture that effectively integrates local and global features for precise medical image segmentation.
Tests on diverse medical image segmentation tasks, including multi-organ, liver tumor, and bladder tumor segmentation, demonstrate that ASSNet achieves state-of-the-art results.
arXiv Detail & Related papers (2024-09-12T06:25:44Z) - ParaTransCNN: Parallelized TransCNN Encoder for Medical Image
Segmentation [7.955518153976858]
We propose an advanced 2D feature extraction method that combines convolutional neural network and Transformer architectures.
Our method achieves better segmentation accuracy, especially on small organs.
arXiv Detail & Related papers (2024-01-27T05:58:36Z) - ConvTransSeg: A Multi-resolution Convolution-Transformer Network for
Medical Image Segmentation [14.485482467748113]
We propose a hybrid encoder-decoder segmentation model (ConvTransSeg).
It consists of a multi-layer CNN as the encoder for feature learning and the corresponding multi-level Transformer as the decoder for segmentation prediction.
Our method achieves the best performance in terms of Dice coefficient and average symmetric surface distance measures with low model complexity and memory consumption.
arXiv Detail & Related papers (2022-10-13T14:59:23Z) - Focused Decoding Enables 3D Anatomical Detection by Transformers [64.36530874341666]
We propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder.
Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view (a masked-attention sketch of this idea appears after this list).
We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results, alleviating the need for vast amounts of annotated data, but also exhibits highly intuitive explainability via its attention weights.
arXiv Detail & Related papers (2022-07-21T22:17:21Z) - MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z) - XCiT: Cross-Covariance Image Transformers [73.33400159139708]
We propose a "transposed" version of self-attention that operates across feature channels rather than tokens.
The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens and allows efficient processing of high-resolution images (a sketch of XCA also appears after this list).
arXiv Detail & Related papers (2021-06-17T17:33:35Z) - Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is a U-Net-like pure Transformer for medical image segmentation.
Tokenized image patches are fed into the Transformer-based U-shaped Encoder-Decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z) - Multiscale Vision Transformers [79.76412415996892]
We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models.
We evaluate this fundamental architectural prior for modeling the dense nature of visual signals for a variety of video recognition tasks.
arXiv Detail & Related papers (2021-04-22T17:59:45Z) - UNETR: Transformers for 3D Medical Image Segmentation [8.59571749685388]
We introduce a novel architecture, dubbed UNEt TRansformers (UNETR), that utilizes a pure transformer as the encoder to learn sequence representations of the input volume.
We have extensively validated the performance of our proposed model across different imaging modalities.
arXiv Detail & Related papers (2021-03-18T20:17:15Z) - TransUNet: Transformers Make Strong Encoders for Medical Image
Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the U-shaped architecture, also known as U-Net, has become the de facto standard.
We propose TransUNet, which combines the merits of Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)