SeUNet-Trans: A Simple yet Effective UNet-Transformer Model for Medical
Image Segmentation
- URL: http://arxiv.org/abs/2310.09998v3
- Date: Fri, 10 Nov 2023 15:01:01 GMT
- Title: SeUNet-Trans: A Simple yet Effective UNet-Transformer Model for Medical
Image Segmentation
- Authors: Tan-Hanh Pham, Xianqi Li, Kim-Doang Nguyen
- Abstract summary: We propose a simple yet effective UNet-Transformer (seUNet-Trans) model for medical image segmentation.
In our approach, the UNet model is designed as a feature extractor to generate multiple feature maps from the input images.
By leveraging the UNet architecture and the self-attention mechanism, our model not only preserves both local and global context information but also captures long-range dependencies between input elements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated medical image segmentation is becoming increasingly crucial to
modern clinical practice, driven by the growing demand for precise diagnosis,
the push towards personalized treatment plans, and the advancements in machine
learning algorithms, especially the incorporation of deep learning methods.
While convolutional neural networks (CNNs) have been prevalent among these
methods, the remarkable potential of Transformer-based models for computer
vision tasks is gaining increasing recognition. To harness the advantages of both
CNN-based and Transformer-based models, we propose a simple yet effective
UNet-Transformer (seUNet-Trans) model for medical image segmentation. In our
approach, the UNet model is designed as a feature extractor to generate
multiple feature maps from the input images; the maps are then propagated into
a bridge layer, which is introduced to sequentially connect the UNet and the
Transformer. In this stage, we adopt a pixel-level embedding technique
without position embedding vectors, aiming to make the model more efficient.
Moreover, we apply spatial-reduction attention in the Transformer to reduce the
computational/memory overhead. By leveraging the UNet architecture and the
self-attention mechanism, our model not only preserves both local and
global context information but also captures
long-range dependencies between input elements. The proposed model is
extensively evaluated on seven medical image segmentation datasets, including
polyp segmentation, to demonstrate its efficacy. Comparison with several
state-of-the-art segmentation models on these datasets shows the superior
performance of our proposed seUNet-Trans network.
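The spatial-reduction attention mentioned in the abstract shrinks the key/value sequence before the attention product, which is where the computational/memory savings come from. A minimal single-head NumPy sketch of the idea, assuming average pooling with reduction ratio `R` stands in for the learned reduction layer (the paper's actual layer choice may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_reduction_attention(x, Wq, Wk, Wv, R):
    """Single-head attention where keys/values are computed on a
    spatially reduced token sequence (here: average pooling over
    groups of R tokens), shrinking the attention matrix from
    N x N to N x (N/R)."""
    N, d = x.shape
    x_red = x.reshape(N // R, R, d).mean(axis=1)  # (N/R, d) reduced tokens
    q = x @ Wq                                    # (N, d) queries stay full-resolution
    k = x_red @ Wk                                # (N/R, d)
    v = x_red @ Wv                                # (N/R, d)
    attn = softmax(q @ k.T / np.sqrt(d))          # (N, N/R) instead of (N, N)
    return attn @ v                               # (N, d)

# toy usage: 64 tokens, dim 16, reduction ratio 4
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 16))
W = [rng.standard_normal((16, 16)) * 0.1 for _ in range(3)]
out = spatial_reduction_attention(x, *W, R=4)
print(out.shape)  # (64, 16)
```

The attention matrix cost drops by a factor of R relative to full self-attention, at the price of a coarser key/value grid.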
Related papers
- TransResNet: Integrating the Strengths of ViTs and CNNs for High Resolution Medical Image Segmentation via Feature Grafting [6.987177704136503]
High-resolution images are preferable in the medical imaging domain, as they significantly improve the diagnostic capability of the underlying method.
Most existing deep learning-based techniques for medical image segmentation are optimized for inputs with small spatial dimensions and perform poorly on high-resolution images.
We propose a parallel-in-branch architecture called TransResNet, which incorporates Transformer and CNN in a parallel manner to extract features from multi-resolution images independently.
arXiv Detail & Related papers (2024-10-01T18:22:34Z) - TransUKAN: Computing-Efficient Hybrid KAN-Transformer for Enhanced Medical Image Segmentation [5.280523424712006]
U-Net is currently the most widely used architecture for medical image segmentation.
We have improved the KAN to reduce memory usage and computational load.
This approach enhances the model's capability to capture nonlinear relationships.
arXiv Detail & Related papers (2024-09-23T02:52:49Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
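The affine warps the AAT module learns can be sketched with inverse mapping and nearest-neighbor sampling; `affine_warp`, `A`, and `t` below are illustrative names for a generic 2x2 linear map plus translation, not the paper's API:

```python
import numpy as np

def affine_warp(img, A, t):
    """Warp a grayscale image by the affine map p -> A @ p + t,
    using inverse mapping and nearest-neighbor sampling (a
    hypothetical simplification of the learned transformations)."""
    H, W = img.shape
    out = np.zeros_like(img)
    Ainv = np.linalg.inv(A)
    for y in range(H):
        for x in range(W):
            # inverse-map each output pixel back into the source image
            sy, sx = Ainv @ (np.array([y, x]) - t)
            sy, sx = int(round(sy)), int(round(sx))
            if 0 <= sy < H and 0 <= sx < W:
                out[y, x] = img[sy, sx]
    return out

img = np.arange(16.0).reshape(4, 4)
# identity linear part with a one-pixel shift down
shifted = affine_warp(img, np.eye(2), np.array([1.0, 0.0]))
print(shifted[1:, :])  # equals img[:3, :]
```

In the paper's setting the parameters of such a map would be predicted by a network rather than fixed by hand.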
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - DA-TransUNet: Integrating Spatial and Channel Dual Attention with
Transformer U-Net for Medical Image Segmentation [5.5582646801199225]
This study proposes a novel deep medical image segmentation framework, called DA-TransUNet.
It aims to integrate the Transformer and a dual attention block (DA-Block) into the traditional U-shaped architecture.
Unlike earlier Transformer-based U-Net models, DA-TransUNet utilizes Transformers and the DA-Block to integrate not only global and local features, but also image-specific positional and channel features.
arXiv Detail & Related papers (2023-10-19T08:25:03Z) - MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z) - Class-Aware Generative Adversarial Transformers for Medical Image
Segmentation [39.14169989603906]
We present CA-GANformer, a novel type of generative adversarial transformers, for medical image segmentation.
First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations.
We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures.
arXiv Detail & Related papers (2022-01-26T03:50:02Z) - Less is More: Pay Less Attention in Vision Transformers [61.05787583247392]
The Less attention vIsion Transformer (LIT) builds on the observation that convolutions, fully-connected layers, and self-attention have almost equivalent mathematical expressions for processing image patch sequences.
The proposed LIT achieves promising performance on image recognition tasks, including image classification, object detection and instance segmentation.
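The simplest case of the equivalence LIT builds on is easy to verify: a 1x1 convolution slid across a sequence of patch tokens computes exactly the same result as one fully-connected layer shared across all tokens. A small NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
tokens = rng.standard_normal((16, 8))  # 16 patch tokens, embedding dim 8
W = rng.standard_normal((8, 8))        # shared weight matrix

# fully-connected layer applied independently to every token
fc = tokens @ W

# 1x1 convolution over the token sequence: the same kernel W
# is applied at every spatial position
conv1x1 = np.stack([W.T @ tokens[i] for i in range(16)])

print(np.allclose(fc, conv1x1))  # True
```

Self-attention fits the same template once the per-position mixing weights are treated as input-dependent rather than fixed, which is the observation the paper exploits.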
arXiv Detail & Related papers (2021-05-29T05:26:07Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - Medical Transformer: Gated Axial-Attention for Medical Image
Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
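Gated axial-attention builds on axial attention, which factorizes 2-D self-attention into a height-axis pass followed by a width-axis pass, cutting the cost from (HW)^2 to HW(H+W) score terms. A simplified NumPy sketch, where `gate` is reduced to a single multiplier standing in for the paper's learned control on the attention terms (the relative-position encodings it actually gates are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x, gate=1.0):
    """Self-attention factorized along image axes: each pixel attends
    first within its column (height axis), then within its row (width
    axis). `gate` is an illustrative scalar control, not the paper's
    exact gating of positional terms."""
    H, W, d = x.shape
    # height-axis pass: each column is a sequence of H tokens
    a = softmax(gate * np.einsum('hwd,gwd->whg', x, x) / np.sqrt(d))
    x = np.einsum('whg,gwd->hwd', a, x)
    # width-axis pass: each row is a sequence of W tokens
    a = softmax(gate * np.einsum('hwd,hgd->hwg', x, x) / np.sqrt(d))
    return np.einsum('hwg,hgd->hwd', a, x)

out = axial_attention(np.random.default_rng(2).standard_normal((8, 8, 4)))
print(out.shape)  # (8, 8, 4)
```

With `gate` made learnable, the network can suppress the attention contribution early in training, which is the intuition behind the control mechanism described above.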
arXiv Detail & Related papers (2021-02-21T18:35:14Z) - TransUNet: Transformers Make Strong Encoders for Medical Image
Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.