TransMorph: Transformer for unsupervised medical image registration
- URL: http://arxiv.org/abs/2111.10480v2
- Date: Tue, 23 Nov 2021 02:56:26 GMT
- Title: TransMorph: Transformer for unsupervised medical image registration
- Authors: Junyu Chen, Yong Du, Yufan He, William P. Segars, Ye Li, Eric C. Frey
- Abstract summary: We present TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image registration.
The proposed models are extensively validated against a variety of existing registration methods and Transformer architectures.
- Score: 5.911344579346077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last decade, convolutional neural networks (ConvNets) have dominated
the field of medical image analysis. However, the performance of ConvNets may
still be limited by their inability to model long-range spatial
relations between voxels in an image. Numerous vision Transformers have been
proposed recently to address the shortcomings of ConvNets, demonstrating
state-of-the-art performances in many medical imaging applications.
Transformers may be a strong candidate for image registration because their
self-attention mechanism enables a more precise comprehension of the spatial
correspondence between moving and fixed images. In this paper, we present
TransMorph, a hybrid Transformer-ConvNet model for volumetric medical image
registration. We also introduce three variants of TransMorph, with two
diffeomorphic variants ensuring topology-preserving deformations and a
Bayesian variant producing a well-calibrated registration uncertainty estimate.
The proposed models are extensively validated against a variety of existing
registration methods and Transformer architectures using volumetric medical
images from two applications: inter-patient brain MRI registration and
phantom-to-CT registration. Qualitative and quantitative results demonstrate
that TransMorph and its variants lead to a substantial performance improvement
over the baseline methods, demonstrating the effectiveness of Transformers for
medical image registration.
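The unsupervised setup the abstract describes — a network predicts a deformation field that warps the moving image toward the fixed image, trained with an image-similarity term plus a smoothness penalty on the field — can be sketched as follows. This is a minimal NumPy illustration of the generic registration loss, not TransMorph itself; the nearest-neighbour warp and MSE similarity are simplifying assumptions (volumetric methods such as TransMorph typically use trilinear interpolation and similarity measures like normalized cross-correlation).

```python
import numpy as np

def warp_2d(moving, flow):
    """Warp a 2D image with a dense displacement field.

    Nearest-neighbour resampling for brevity; real registration
    networks use (bi/tri)linear interpolation so the warp is
    differentiable.
    """
    H, W = moving.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # sample locations = identity grid + displacement, clipped to the image
    sy = np.clip(np.round(ys + flow[0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xs + flow[1]).astype(int), 0, W - 1)
    return moving[sy, sx]

def registration_loss(fixed, moving, flow, lam=0.01):
    """Unsupervised loss: similarity of warped/fixed images + flow smoothness."""
    warped = warp_2d(moving, flow)
    sim = np.mean((warped - fixed) ** 2)  # MSE similarity term
    # first-order finite differences penalise non-smooth deformations
    reg = sum(np.mean(np.diff(f, axis=a) ** 2) for f in flow for a in (0, 1))
    return sim + lam * reg
```

As a sanity check, for a moving image that is the fixed image shifted by one pixel, a constant unit displacement field recovers the fixed image and scores a lower loss than the zero (identity) field.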
Related papers
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- MOREA: a GPU-accelerated Evolutionary Algorithm for Multi-Objective Deformable Registration of 3D Medical Images [0.7734726150561088]
We present MOREA: the first evolutionary algorithm-based approach to deformable registration of 3D images capable of tackling large deformations.
MOREA includes a 3D biomechanical mesh model for physical plausibility and is fully GPU-accelerated.
We compare MOREA to two state-of-the-art approaches on abdominal CT scans of 4 cervical cancer patients, with the latter two approaches configured for the best results per patient.
arXiv Detail & Related papers (2023-03-08T20:26:55Z)
- MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335]
We propose a novel Transformer-based Diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
arXiv Detail & Related papers (2023-01-19T03:42:36Z)
- UGformer for Robust Left Atrium and Scar Segmentation Across Scanners [12.848774186557403]
We present a novel framework for medical image segmentation, namely, UGformer.
It unifies novel transformer blocks, GCN bridges, and convolution decoders originating from U-Net to predict left atriums (LAs) and LA scars.
The proposed UGformer model exhibits outstanding ability to segment the left atrium and scar on the LAScarQS 2022 dataset.
arXiv Detail & Related papers (2022-10-11T05:11:11Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view by view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
- Class-Aware Generative Adversarial Transformers for Medical Image Segmentation [39.14169989603906]
We present CA-GANformer, a novel type of generative adversarial transformers, for medical image segmentation.
First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations.
We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures.
arXiv Detail & Related papers (2022-01-26T03:50:02Z)
- Pyramid Medical Transformer for Medical Image Segmentation [8.157373686645318]
We develop a novel method to integrate multi-scale attention and CNN feature extraction using a pyramidal network architecture, namely the Pyramid Medical Transformer (PMTrans).
Experimental results on two medical image datasets, gland segmentation and MoNuSeg datasets, showed that PMTrans outperformed the latest CNN-based and transformer-based models for medical image segmentation.
arXiv Detail & Related papers (2021-04-29T23:57:20Z)
- TransMed: Transformers Advance Multi-modal Medical Image Classification [4.500880052705654]
Convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks.
Transformers have been applied to computer vision and achieved remarkable success in large-scale datasets.
TransMed combines the advantages of CNN and transformer to efficiently extract low-level features of images.
arXiv Detail & Related papers (2021-03-10T08:57:53Z)
- Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
arXiv Detail & Related papers (2021-02-21T18:35:14Z)
- TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
- Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.