UGformer for Robust Left Atrium and Scar Segmentation Across Scanners
- URL: http://arxiv.org/abs/2210.05151v1
- Date: Tue, 11 Oct 2022 05:11:11 GMT
- Title: UGformer for Robust Left Atrium and Scar Segmentation Across Scanners
- Authors: Tianyi Liu, Size Hou, Jiayuan Zhu, Zilong Zhao and Haochuan Jiang
- Abstract summary: We present a novel framework for medical image segmentation, namely, UGformer.
It unifies novel transformer blocks, GCN bridges, and convolution decoders originating from U-Net to predict the left atrium (LA) and LA scars.
The proposed UGformer model exhibits outstanding ability to segment the left atrium and scar on the LAScarQS 2022 dataset.
- Score: 12.848774186557403
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Thanks to their capacity for long-range dependencies and robustness
to irregular shapes, vision transformers and deformable convolutions are
emerging as powerful techniques for segmentation. Meanwhile, Graph Convolution
Networks (GCN) optimize local features based on global topological relationship
modeling. In particular, they have proven effective in addressing issues in
medical image segmentation tasks, including multi-domain generalization for
low-quality images. In this paper, we present a novel, effective, and robust
framework for medical image segmentation, namely, UGformer. It unifies novel
transformer blocks, GCN bridges, and convolution decoders originating from
U-Net to predict the left atrium (LA) and LA scars. We have identified two
appealing findings of the proposed UGformer: (1) an enhanced transformer module
with deformable convolutions improves the blending of transformer and
convolutional information and helps predict irregular LA and scar shapes;
(2) a bridge incorporating GCN further overcomes the difficulty of capturing
condition inconsistency across Magnetic Resonance Imaging (MRI) scanners with
varying domain information. The proposed UGformer exhibits an outstanding
ability to segment the left atrium and scar on the LAScarQS 2022 dataset,
outperforming several recent state-of-the-art methods.
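The two findings above name concrete mechanisms, so a minimal PyTorch sketch may help fix ideas: a transformer block whose attention output is blended with a deformable-convolution branch, and a GCN bridge that treats bottleneck feature locations as graph nodes. Every class name, layer size, and the dense similarity graph below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableEnhancedBlock(nn.Module):
    """Transformer block whose attention output is blended with a
    deformable-convolution branch (illustrative, not the authors' code)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # A 3x3 deformable kernel needs 2 offset coordinates per sample point.
        self.offset = nn.Conv2d(dim, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = attn_out.transpose(1, 2).reshape(b, c, h, w)
        deform_out = self.deform(x, self.offset(x))
        return x + attn_out + deform_out  # blend both branches residually


class GCNBridge(nn.Module):
    """Bridge that treats each spatial location as a graph node and applies
    one graph-convolution step A_hat @ X @ W over a similarity graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        nodes = x.flatten(2).transpose(1, 2)  # (B, N, C) with N = H*W
        # Row-normalized dense affinity graph (an illustrative choice).
        adj = torch.softmax(nodes @ nodes.transpose(1, 2) / c ** 0.5, dim=-1)
        out = torch.relu(adj @ self.weight(nodes))
        return out.transpose(1, 2).reshape(b, c, h, w)


class MiniUGformer(nn.Module):
    """Encoder -> GCN bridge -> convolutional decoder, U-Net style
    (skip connections and multi-scale stages omitted for brevity)."""

    def __init__(self, in_ch: int = 1, dim: int = 32, classes: int = 2):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dim, kernel_size=3, padding=1)
        self.encoder = DeformableEnhancedBlock(dim)
        self.bridge = GCNBridge(dim)
        self.decoder = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.bridge(self.encoder(self.stem(x))))


if __name__ == "__main__":
    lge_mri_slice = torch.randn(1, 1, 64, 64)  # one single-channel 2D slice
    print(MiniUGformer()(lge_mri_slice).shape)  # torch.Size([1, 2, 64, 64])
```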
Related papers
- Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection [106.39544368711427]
We study the problem of generalizable synthetic image detection, aiming to detect forgery images from diverse generative methods.
We present a novel forgery-aware adaptive transformer approach, namely FatFormer.
Our approach, tuned on 4-class ProGAN data, attains an average of 98% accuracy on unseen GANs and, surprisingly, generalizes to unseen diffusion models with 95% accuracy.
arXiv Detail & Related papers (2023-12-27T17:36:32Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Towards Optimal Patch Size in Vision Transformers for Tumor Segmentation [2.4540404783565433]
Detection of tumors in metastatic colorectal cancer (mCRC) plays an essential role in the early diagnosis and treatment of liver cancer.
Deep learning models backboned by fully convolutional neural networks (FCNNs) have become the dominant approach for segmenting 3D computed tomography (CT) scans.
Vision transformers have been introduced to overcome the limited receptive fields of FCNNs.
This paper proposes a technique to select the vision transformer's optimal input multi-resolution image patch size based on the average volume size of metastasis lesions.
arXiv Detail & Related papers (2023-08-31T09:57:27Z)
- AlignTransformer: Hierarchical Alignment of Visual Regions and Disease Tags for Medical Report Generation [50.21065317817769]
We propose an AlignTransformer framework, which includes the Align Hierarchical Attention (AHA) and the Multi-Grained Transformer (MGT) modules.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that the AlignTransformer can achieve results competitive with state-of-the-art methods on the two datasets.
arXiv Detail & Related papers (2022-03-18T13:43:53Z)
- Class-Aware Generative Adversarial Transformers for Medical Image Segmentation [39.14169989603906]
We present CA-GANformer, a novel type of generative adversarial transformers, for medical image segmentation.
First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations.
We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures.
arXiv Detail & Related papers (2022-01-26T03:50:02Z)
- Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images [7.334185314342017]
We propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR).
The model extracts features at five different resolutions by utilizing shifted windows for computing self-attention.
We participated in the BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase (see the usage sketch after this list).
arXiv Detail & Related papers (2022-01-04T18:01:34Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- TransMorph: Transformer for unsupervised medical image registration [5.911344579346077]
We present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration.
The proposed models are extensively validated against a variety of existing registration methods and Transformer architectures.
arXiv Detail & Related papers (2021-11-19T23:37:39Z)
- MISSFormer: An Effective Medical Image Segmentation Transformer [3.441872541209065]
CNN-based methods have achieved impressive results in medical image segmentation.
Transformer-based methods have recently become popular in vision tasks because of their capacity to model long-range dependencies.
We present MISSFormer, an effective and powerful Medical Image Segmentation tranSFormer.
arXiv Detail & Related papers (2021-09-15T08:56:00Z)
- Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
arXiv Detail & Related papers (2021-02-21T18:35:14Z)
- TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the U-shaped architecture, also known as U-Net, has become the de facto standard.
We propose TransUNet, which combines the merits of Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
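Several entries above (e.g., Swin UNETR, TransUNet) describe hybrid Transformer-CNN segmenters. As referenced in the Swin UNETR entry, a minimal usage sketch follows; it assumes MONAI (which ships a SwinUNETR implementation) is installed, and the channel counts mirror the BraTS setting rather than any exact training configuration.

```python
# Minimal usage sketch for Swin UNETR via MONAI (assumed installed; the
# img_size argument follows the MONAI 1.x constructor and is deprecated in
# later releases). BraTS-style channels: four MRI modalities in, three
# tumor subregions out.
import torch
from monai.networks.nets import SwinUNETR

model = SwinUNETR(
    img_size=(128, 128, 128),  # volumetric input patch size
    in_channels=4,             # T1, T1ce, T2, FLAIR
    out_channels=3,            # whole tumor, tumor core, enhancing tumor
    feature_size=48,           # base embedding width
)
logits = model(torch.randn(1, 4, 128, 128, 128))
print(logits.shape)  # torch.Size([1, 3, 128, 128, 128])
```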