ViTBIS: Vision Transformer for Biomedical Image Segmentation
- URL: http://arxiv.org/abs/2201.05920v1
- Date: Sat, 15 Jan 2022 20:44:45 GMT
- Title: ViTBIS: Vision Transformer for Biomedical Image Segmentation
- Authors: Abhinav Sagar
- Abstract summary: We propose a novel network named Vision Transformer for Biomedical Image Segmentation (ViTBIS).
Our network splits the input feature maps into three parts with $1\times 1$, $3\times 3$ and $5\times 5$ convolutions in both encoder and decoder.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel network named Vision Transformer for
Biomedical Image Segmentation (ViTBIS). Our network splits the input feature
maps into three parts with $1\times 1$, $3\times 3$ and $5\times 5$
convolutions in both the encoder and decoder. A concat operator is used to merge
the features before they are fed to three consecutive transformer blocks with an
attention mechanism embedded inside them. Skip connections are used to connect
the encoder and decoder transformer blocks. Similarly, transformer blocks and a
multi-scale architecture are used in the decoder before the features are linearly projected to
produce the output segmentation map. We test the performance of our network
using the Synapse multi-organ segmentation dataset, the Automated Cardiac
Diagnosis Challenge dataset, the Brain Tumour MRI segmentation dataset and the
Spleen CT segmentation dataset. Without bells and whistles, our network
outperforms most previous state-of-the-art CNN- and transformer-based models,
using the Dice score and the Hausdorff distance as evaluation metrics.
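The following is a minimal PyTorch sketch of the multi-scale split-and-concat stage described in the abstract: three parallel $1\times 1$, $3\times 3$ and $5\times 5$ convolutions whose outputs are concatenated and then passed through three consecutive transformer blocks. The channel widths, number of attention heads and the way the feature maps are tokenised are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the multi-scale split + concat + transformer stage described
# in the ViTBIS abstract. Channel counts, embedding size, and the tokenisation of
# the feature maps are assumptions, not details from the paper.
import torch
import torch.nn as nn


class MultiScaleTransformerStage(nn.Module):
    def __init__(self, in_ch=64, branch_ch=32, num_heads=4, num_layers=3):
        super().__init__()
        # Three parallel convolutions with different receptive fields; padding
        # keeps the spatial resolution identical so the outputs can be concatenated.
        self.conv1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.conv3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        embed_dim = 3 * branch_ch
        # "Three consecutive transformer blocks with an attention mechanism embedded inside"
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):                          # x: (B, C, H, W)
        feats = torch.cat([self.conv1(x), self.conv3(x), self.conv5(x)], dim=1)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    stage = MultiScaleTransformerStage()
    out = stage(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 96, 32, 32])
```

Padding on the $3\times 3$ and $5\times 5$ branches keeps all three outputs at the same spatial resolution, which is what makes the channel-wise concat well defined before the transformer blocks.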
Related papers
- Rethinking Attention Gated with Hybrid Dual Pyramid Transformer-CNN for Generalized Segmentation in Medical Imaging [17.07490339960335]
We introduce a novel hybrid CNN-Transformer segmentation architecture (PAG-TransYnet) designed for efficiently building a strong CNN-Transformer encoder.
Our approach exploits attention gates within a Dual Pyramid hybrid encoder.
arXiv Detail & Related papers (2024-04-28T14:37:10Z) - CATS v2: Hybrid encoders for robust medical segmentation [12.194439938007672]
Convolutional Neural Networks (CNNs) have exhibited strong performance in medical image segmentation tasks.
However, due to the limited field of view of the convolution kernel, it is hard for CNNs to fully capture global information.
We propose CATS v2 with hybrid encoders, which better leverage both local and global information.
arXiv Detail & Related papers (2023-08-11T20:21:54Z) - Cats: Complementary CNN and Transformer Encoders for Segmentation [13.288195115791758]
We propose a model with double encoders for 3D biomedical image segmentation.
We fuse the information from the convolutional encoder and the transformer, and pass it to the decoder to obtain the results.
Compared to the state-of-the-art models with and without transformers on each task, our proposed method obtains higher Dice scores across the board.
arXiv Detail & Related papers (2022-08-24T14:25:11Z) - Focused Decoding Enables 3D Anatomical Detection by Transformers [64.36530874341666]
We propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder.
Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view.
We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results and thus alleviates the need for a vast amount of annotated data but also exhibits exceptional and highly intuitive explainability of results via attention weights.
arXiv Detail & Related papers (2022-07-21T22:17:21Z) - MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Transformer-Unet: Raw Image Processing with Unet [4.7944896477309555]
We propose Transformer-Unet by applying transformer modules to raw images instead of to feature maps in Unet.
We form an end-to-end network and achieve better segmentation results than many previous Unet-based algorithms in our experiments.
arXiv Detail & Related papers (2021-09-17T09:03:10Z) - Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is an Unet-like pure Transformer for medical image segmentation.
Tokenized image patches are fed into the Transformer-based U-shaped Encoder-Decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z) - TransBTS: Multimodal Brain Tumor Segmentation Using Transformer [9.296315610803985]
We propose a novel network named TransBTS based on the encoder-decoder structure.
To capture local 3D context information, the encoder first utilizes a 3D CNN to extract volumetric feature maps.
The feature maps are then reshaped into tokens that are fed into the Transformer for global feature modeling.
arXiv Detail & Related papers (2021-03-07T19:12:14Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional Neural Network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
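Several of the entries above (TransBTS, CoTr, TransUNet) share the same hybrid pattern: a CNN extracts local feature maps, the maps are flattened into tokens for a Transformer encoder that models global context, and the result is reshaped back and upsampled into a segmentation map. Below is a minimal sketch of that pattern; the channel sizes, depths, number of classes and the plain upsampling decoder are illustrative assumptions rather than the architecture of any specific paper.

```python
# Minimal sketch of the recurring hybrid pattern in the entries above
# (TransBTS, CoTr, TransUNet): a CNN extracts feature maps, the maps are
# flattened into tokens for a Transformer encoder, and the result is reshaped
# back for an upsampling decoder. All sizes and depths below are assumptions.
import torch
import torch.nn as nn


class HybridSegmenter(nn.Module):
    def __init__(self, in_ch=1, feat_ch=64, num_heads=4, num_layers=2, num_classes=2):
        super().__init__()
        # CNN stem: captures local context and downsamples the input by 4x.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=feat_ch, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Simple upsampling head standing in for a U-Net style decoder.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(feat_ch, num_classes, kernel_size=1),
        )

    def forward(self, x):                            # x: (B, in_ch, H, W)
        feats = self.cnn(x)                          # local features at H/4 x W/4
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)    # (B, N, C) for global attention
        tokens = self.transformer(tokens)
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(feats)                   # segmentation logits at input size


if __name__ == "__main__":
    print(HybridSegmenter()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```

In the papers themselves the decoder is a full U-Net-style path with skip connections from the CNN stages; the sketch collapses it into a single upsampling head to keep the tokenize-then-decode pattern visible.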
This list is automatically generated from the titles and abstracts of the papers in this site.