Investigation of Network Architecture for Multimodal Head-and-Neck Tumor
Segmentation
- URL: http://arxiv.org/abs/2212.10724v1
- Date: Wed, 21 Dec 2022 02:35:46 GMT
- Title: Investigation of Network Architecture for Multimodal Head-and-Neck Tumor
Segmentation
- Authors: Ye Li, Junyu Chen, Se-in Jang, Kuang Gong, Quanzheng Li
- Abstract summary: In this study, we analyze two recently published Transformer-based network architectures for the task of multimodal head-and-neck tumor segmentation.
Our results showed that modeling long-range dependencies may be helpful in cases where large structures are present and/or a large field of view is needed.
For small structures such as head-and-neck tumors, the convolution-based U-Net architecture seemed to perform well, especially when the training dataset is small and computational resources are limited.
- Score: 9.441769048218955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by the recent success of Transformers for Natural Language
Processing and the Vision Transformer for Computer Vision, many researchers in
the medical imaging community have flocked to Transformer-based networks for
various mainstream medical tasks such as classification, segmentation, and
estimation. In this study, we analyze two recently published Transformer-based
network architectures for the task of multimodal head-and-neck tumor
segmentation and compare their performance to the de facto standard 3D
segmentation network - the nnU-Net. Our results showed that modeling long-range
dependencies may be helpful in cases where large structures are present and/or
a large field of view is needed. However, for small structures such as
head-and-neck tumors, the convolution-based U-Net architecture seemed to
perform well, especially when the training dataset is small and computational
resources are limited.
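To make the contrast concrete, below is a minimal, illustrative PyTorch sketch (not the code of this paper, of nnU-Net, or of the compared Transformer networks): a local 3D convolutional block of the kind used in U-Net-style encoders, next to a global self-attention block over flattened voxel tokens. Channel counts, normalization choices, and the toy input size are assumptions for illustration; the point is that self-attention mixes information across the whole volume at a cost quadratic in the number of tokens, whereas convolution stays local and cheap.

# Illustrative sketch only; shapes and layer choices are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

class ConvBlock3D(nn.Module):
    """Two 3x3x3 convolutions: each output voxel sees only a local neighborhood."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):  # x: (B, C, D, H, W)
        return self.block(x)

class SelfAttentionBlock(nn.Module):
    """Multi-head self-attention over flattened tokens: every token attends to all others."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):  # tokens: (B, N, dim)
        x = self.norm(tokens)
        out, _ = self.attn(x, x, x, need_weights=False)
        return tokens + out  # residual connection

# Toy forward pass: two input channels standing in for two modalities (e.g. PET and CT).
x = torch.randn(1, 2, 16, 16, 16)
feat = ConvBlock3D(2, 16)(x)               # (1, 16, 16, 16, 16), local features only
tokens = feat.flatten(2).transpose(1, 2)   # (1, 4096, 16) voxel tokens
mixed = SelfAttentionBlock(16)(tokens)     # global mixing, O(N^2) in the 4096 tokens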
Related papers
- 3D TransUNet: Advancing Medical Image Segmentation through Vision
Transformers [40.21263511313524]
Medical image segmentation plays a crucial role in advancing healthcare systems for disease diagnosis and treatment planning.
The u-shaped architecture, popularly known as U-Net, has proven highly successful for various medical image segmentation tasks.
To address the limitations of convolution in modeling long-range dependencies, researchers have turned to Transformers, renowned for their global self-attention mechanisms.
arXiv Detail & Related papers (2023-10-11T18:07:19Z) - General-Purpose Multimodal Transformer meets Remote Sensing Semantic
Segmentation [35.100738362291416]
Multimodal AI seeks to exploit complementary data sources, particularly for complex tasks like semantic segmentation.
Recent trends in general-purpose multimodal networks have shown great potential to achieve state-of-the-art performance.
We propose a UNet-inspired module that employs 3D convolution to encode vital local information and learn cross-modal features simultaneously.
arXiv Detail & Related papers (2023-07-07T04:58:34Z) - Learning from partially labeled data for multi-organ and tumor
segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z) - TransDeepLab: Convolution-Free Transformer-based DeepLab v3+ for Medical
Image Segmentation [11.190117191084175]
This paper proposes TransDeepLab, a novel DeepLab-like pure Transformer for medical image segmentation.
We exploit hierarchical Swin-Transformer with shifted windows to extend the DeepLabv3 and model the Atrous Spatial Pyramid Pooling (ASPP) module.
Our approach performs better than or on par with most contemporary works that combine Vision Transformer and CNN-based methods.
arXiv Detail & Related papers (2022-08-01T09:53:53Z) - MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z) - Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors
in MRI Images [7.334185314342017]
We propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR).
The model extracts features at five different resolutions by utilizing shifted windows for computing self-attention.
We have participated in BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase.
arXiv Detail & Related papers (2022-01-04T18:01:34Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have long been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - Medical Transformer: Gated Axial-Attention for Medical Image
Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
arXiv Detail & Related papers (2021-02-21T18:35:14Z) - TransUNet: Transformers Make Strong Encoders for Medical Image
Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z) - DoDNet: Learning to segment multi-organ and tumors from multiple
partially labeled datasets [102.55303521877933]
We propose a dynamic on-demand network (DoDNet) that learns to segment multiple organs and tumors on partially labelled datasets.
DoDNet consists of a shared encoder-decoder architecture, a task encoding module, a controller for generating dynamic convolution filters, and a single but dynamic segmentation head.
arXiv Detail & Related papers (2020-11-20T04:56:39Z)
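The DoDNet and TransDoDNet entries above rest on the same mechanism: a controller maps a task code to the weights of a small segmentation head, so one shared backbone can serve several partially labeled datasets. Below is a hedged, single-layer PyTorch sketch of that dynamic-filter idea; the one-hot task encoding, channel counts, and the one-layer head are simplifying assumptions (the published head stacks several dynamic convolutions), not the authors' implementation.

# Hedged sketch of a dynamic segmentation head; all shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicHead(nn.Module):
    def __init__(self, feat_ch=8, num_tasks=7, num_classes=2):
        super().__init__()
        self.feat_ch, self.num_classes = feat_ch, num_classes
        # Controller: task one-hot -> flattened 1x1x1 convolution weights and biases.
        n_params = num_classes * feat_ch + num_classes
        self.controller = nn.Linear(num_tasks, n_params)

    def forward(self, feats, task_onehot):        # feats: (B, C, D, H, W)
        b = feats.size(0)
        params = self.controller(task_onehot)     # (B, n_params)
        logits = []
        for i in range(b):                        # per-sample dynamic filters
            w = params[i, :self.num_classes * self.feat_ch]
            bias = params[i, self.num_classes * self.feat_ch:]
            w = w.view(self.num_classes, self.feat_ch, 1, 1, 1)
            logits.append(F.conv3d(feats[i:i + 1], w, bias))
        return torch.cat(logits, dim=0)           # (B, num_classes, D, H, W)

# Toy usage: the same backbone features, segmented with task-specific dynamic filters.
feats = torch.randn(2, 8, 16, 16, 16)
task = F.one_hot(torch.tensor([0, 3]), num_classes=7).float()
out = DynamicHead()(feats, task)                  # (2, 2, 16, 16, 16)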