TransMed: Transformers Advance Multi-modal Medical Image Classification
- URL: http://arxiv.org/abs/2103.05940v1
- Date: Wed, 10 Mar 2021 08:57:53 GMT
- Title: TransMed: Transformers Advance Multi-modal Medical Image Classification
- Authors: Yin Dai and Yifan Gao
- Abstract summary: Convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks.
Transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets.
TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities.
- Score: 4.500880052705654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past decade, convolutional neural networks (CNNs) have shown
very competitive performance in medical image analysis tasks, such as disease
classification, tumor segmentation, and lesion detection. CNNs have great
advantages in extracting local image features. However, due to the locality of
the convolution operation, they cannot model long-range relationships well.
Recently, transformers have been applied to computer vision and have achieved
remarkable success on large-scale datasets. Compared with natural images,
multi-modal medical images have explicit and important long-range dependencies,
and effective multi-modal fusion strategies can greatly improve the performance
of deep models. This prompts us to study transformer-based structures and apply
them to multi-modal medical images. Existing transformer-based network
architectures require large-scale datasets to achieve better performance.
However, medical imaging datasets are relatively small, which makes it
difficult to apply pure transformers to medical image analysis. Therefore, we
propose TransMed for multi-modal medical image classification. TransMed
combines the advantages of CNNs and transformers to efficiently extract
low-level image features and establish long-range dependencies between modalities. We
evaluated our model for the challenging problem of preoperative diagnosis of
parotid gland tumors, and the experimental results show the advantages of our
proposed method. We argue that the combination of CNNs and transformers has
tremendous potential in a large number of medical image analysis tasks. To the
best of our knowledge, this is the first work to apply transformers to medical
image classification.
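
To make the hybrid design concrete, here is a minimal sketch in the spirit of the description above: a shared CNN backbone embeds each modality into patch tokens, a transformer encoder models long-range dependencies across the combined token sequence, and a classification head reads out a class token. The ResNet-18 backbone, token layout, and all hyper-parameters are illustrative assumptions, not the authors' actual TransMed configuration.

```python
# Minimal sketch of a CNN + transformer hybrid for multi-modal image
# classification, in the spirit of TransMed. Backbone choice, token layout,
# and hyper-parameters are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torchvision.models as models

class HybridMultiModalClassifier(nn.Module):
    def __init__(self, num_modalities=2, num_classes=2, embed_dim=256,
                 depth=4, num_heads=8):
        super().__init__()
        # Shared CNN backbone extracts low-level features from each modality.
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 512, H/32, W/32)
        self.proj = nn.Conv2d(512, embed_dim, kernel_size=1)       # project to token dim
        # Learnable class token and per-modality embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.modality_embed = nn.Parameter(torch.zeros(num_modalities, 1, embed_dim))
        # Transformer encoder models long-range dependencies across all tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, modalities):
        # modalities: list of tensors, each (B, 3, H, W), one per imaging modality.
        tokens = []
        for m, x in enumerate(modalities):
            feat = self.proj(self.cnn(x))                 # (B, D, h, w)
            feat = feat.flatten(2).transpose(1, 2)        # (B, h*w, D) patch tokens
            tokens.append(feat + self.modality_embed[m])  # tag tokens with their modality
        tokens = torch.cat(tokens, dim=1)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(out[:, 0])                       # classify from the class token

# Example: two MRI sequences of a (hypothetical) parotid gland study, random tensors here.
model = HybridMultiModalClassifier(num_modalities=2, num_classes=2)
t1 = torch.randn(1, 3, 224, 224)
t2 = torch.randn(1, 3, 224, 224)
logits = model([t1, t2])
print(logits.shape)  # torch.Size([1, 2])
```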
Related papers
- MGI: Multimodal Contrastive pre-training of Genomic and Medical Imaging [16.325123491357203]
We propose a multimodal pre-training framework that jointly incorporates genomics and medical images for downstream tasks.
We align medical images and genes using a self-supervised contrastive learning approach which combines the Mamba as a genetic encoder and the Vision Transformer (ViT) as a medical image encoder.
arXiv Detail & Related papers (2024-06-02T06:20:45Z)
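
The MGI entry above hinges on contrastively aligning image and gene embeddings. The sketch below shows only a generic two-tower, InfoNCE-style alignment step, with placeholder MLP projections standing in for the paper's ViT and Mamba encoders; all names, dimensions, and loss details are assumptions rather than the authors' implementation.

```python
# Hedged sketch of two-tower contrastive alignment (InfoNCE/CLIP-style),
# as a stand-in for the image-gene alignment the MGI summary describes.
# The simple MLP "encoders" replace the paper's ViT and Mamba encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerAligner(nn.Module):
    def __init__(self, image_dim=768, gene_dim=1024, embed_dim=256):
        super().__init__()
        self.image_proj = nn.Sequential(nn.Linear(image_dim, embed_dim), nn.GELU(),
                                        nn.Linear(embed_dim, embed_dim))
        self.gene_proj = nn.Sequential(nn.Linear(gene_dim, embed_dim), nn.GELU(),
                                       nn.Linear(embed_dim, embed_dim))
        self.logit_scale = nn.Parameter(torch.tensor(2.0))  # learnable temperature

    def forward(self, image_feats, gene_feats):
        # Project both modalities into a shared space and L2-normalize.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        gen = F.normalize(self.gene_proj(gene_feats), dim=-1)
        logits = self.logit_scale.exp() * img @ gen.t()      # (B, B) similarity matrix
        targets = torch.arange(img.size(0), device=img.device)
        # Symmetric cross-entropy: matched image-gene pairs lie on the diagonal.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

# Usage with pre-extracted (hypothetical) image and genomic features:
loss = TwoTowerAligner()(torch.randn(8, 768), torch.randn(8, 1024))
```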
- Transformer-CNN Fused Architecture for Enhanced Skin Lesion Segmentation [0.0]
Convolutional neural networks (CNNs) have greatly advanced medical image segmentation.
However, CNNs have been found to struggle with learning long-range dependencies and capturing global context.
We propose a hybrid architecture that combines the ability of transformers to capture global dependencies with the ability of CNNs to capture low-level spatial details.
arXiv Detail & Related papers (2024-01-10T18:36:14Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to handle diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
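
The M$^{2}$SNet entry above revolves around subtraction between feature levels. As a loose, hedged illustration of that idea only, the sketch below takes the element-wise absolute difference of two adjacent-level feature maps (after resizing them to a common grid) and refines it with a small convolutional block; the paper's actual unit design, multi-scale arrangement, and losses are not reproduced, and every layer choice here is an assumption.

```python
# Hedged sketch of a "subtraction unit": highlight complementary information
# between two feature levels via an element-wise absolute difference.
# Layer sizes and the upsampling choice are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubtractionUnit(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_high, feat_low):
        # Bring the lower-resolution feature up to the higher-resolution grid.
        feat_low = F.interpolate(feat_low, size=feat_high.shape[-2:],
                                 mode="bilinear", align_corners=False)
        diff = torch.abs(feat_high - feat_low)  # element-wise subtraction
        return self.refine(diff)

# Example: two feature maps from adjacent encoder stages (random tensors).
su = SubtractionUnit(channels=64)
out = su(torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```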
- MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335]
We propose a novel Transformer-based diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
arXiv Detail & Related papers (2023-01-19T03:42:36Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and, in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance of ViTs and CNNs is on par, with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
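
The chest-radiograph entry above contrasts patch-based self-attention with the local-connectivity prior of convolutions. The sketch below shows just that mechanism: the image is cut into non-overlapping patches, each patch is linearly embedded, and a transformer encoder attends over the resulting tokens. Patch size, depth, and dimensions are arbitrary assumptions, not the DeiT configuration evaluated in the paper.

```python
# Minimal patch-embedding + self-attention classifier (ViT-style), to
# illustrate "patch-based self-attention" with no convolutional locality prior.
# All hyper-parameters are illustrative, not the paper's DeiT settings.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, in_chans=1,
                 embed_dim=192, depth=4, num_heads=3, num_classes=14):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        self.patch_size = patch_size
        self.patch_embed = nn.Linear(patch_size * patch_size * in_chans, embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)  # multi-label logits

    def forward(self, x):
        # Split the radiograph into non-overlapping patches and flatten each one.
        b, c, h, w = x.shape
        p = self.patch_size
        x = x.unfold(2, p, p).unfold(3, p, p)             # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        tokens = self.patch_embed(x)
        cls = self.cls_token.expand(b, -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        out = self.encoder(tokens)
        return self.head(out[:, 0])

logits = TinyViT()(torch.randn(2, 1, 224, 224))  # e.g. 14 hypothetical chest X-ray findings
print(logits.shape)  # torch.Size([2, 14])
```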
- Transformer-Unet: Raw Image Processing with Unet [4.7944896477309555]
We propose Transformer-Unet, which adds transformer modules that operate on raw images rather than on feature maps in Unet.
We form an end-to-end network and, in our experiments, obtain better segmentation results than many previous Unet-based algorithms.
arXiv Detail & Related papers (2021-09-17T09:03:10Z)
- Pyramid Medical Transformer for Medical Image Segmentation [8.157373686645318]
We develop a novel method to integrate multi-scale attention and CNN feature extraction using a pyramidal network architecture, namely the Pyramid Medical Transformer (PMTrans).
Experimental results on two medical image datasets (gland segmentation and MoNuSeg) showed that PMTrans outperformed the latest CNN-based and transformer-based models for medical image segmentation.
arXiv Detail & Related papers (2021-04-29T23:57:20Z)
- CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z)
- Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
arXiv Detail & Related papers (2021-02-21T18:35:14Z)
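
The Medical Transformer entry above is built on gated axial attention. The sketch below illustrates only the axial factorization, that is, self-attention applied along the width axis and then the height axis instead of over all pixel pairs at once; the paper's learned gates, which modulate positional terms inside the attention, and its Local-Global (LoGo) training strategy are not reproduced, and all sizes are assumptions.

```python
# Hedged sketch of axial attention: full 2D self-attention is factorized into
# attention along the width axis, then along the height axis. The learned
# gates of Gated Axial-Attention (which modulate positional terms) are NOT
# reproduced here; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class AxialAttentionBlock(nn.Module):
    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        # Attention along the width axis: each row attends within itself.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Attention along the height axis: each column attends within itself.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)

out = AxialAttentionBlock()(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```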
- TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.