CMUNeXt: An Efficient Medical Image Segmentation Network based on Large
Kernel and Skip Fusion
- URL: http://arxiv.org/abs/2308.01239v2
- Date: Thu, 3 Aug 2023 02:05:44 GMT
- Authors: Fenghe Tang, Jianrui Ding, Lingtao Wang, Chunping Ning, S. Kevin Zhou
- Abstract summary: CMUNeXt is an efficient fully convolutional lightweight medical image segmentation network.
It enables fast and accurate auxiliary diagnosis in real-world scenarios.
- Score: 11.434576556863934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The U-shaped architecture has emerged as a crucial paradigm in the design of
medical image segmentation networks. However, due to the inherent local
limitations of convolution, a fully convolutional segmentation network with
U-shaped architecture struggles to effectively extract global context
information, which is vital for the precise localization of lesions. While
hybrid architectures combining CNNs and Transformers can address these issues,
their application in real medical scenarios is limited due to the computational
resource constraints imposed by the environment and edge devices. In addition,
the convolutional inductive bias of lightweight networks fits scarce medical
data well, an advantage that Transformer-based networks lack. In
order to extract global context information while taking advantage of the
inductive bias, we propose CMUNeXt, an efficient fully convolutional
lightweight medical image segmentation network that enables fast and accurate
auxiliary diagnosis in real-world scenarios. CMUNeXt leverages a large-kernel
and inverted-bottleneck design to thoroughly mix distant spatial and positional
information, efficiently extracting global context. We also
introduce the Skip-Fusion block, designed to enable smooth skip-connections and
ensure ample feature fusion. Experimental results on multiple medical image
datasets demonstrate that CMUNeXt outperforms existing heavyweight and
lightweight medical image segmentation networks in terms of segmentation
performance, while offering faster inference, fewer parameters, and
reduced computational cost. The code is available at
https://github.com/FengheTan9/CMUNeXt.
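The core CMUNeXt block pairs a large-kernel depthwise convolution with an inverted bottleneck. The sketch below is a minimal NumPy illustration of that mixing pattern, not the authors' implementation: the function names are ours, normalization layers and the surrounding U-shaped encoder-decoder (including the Skip-Fusion block) are omitted, and a 7x7 kernel stands in for the paper's large kernels.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU nonlinearity
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def depthwise_conv(x, kernels):
    # x: (C, H, W); kernels: (C, k, k) -- one spatial filter per channel,
    # "same" zero padding so the output keeps the input resolution.
    C, H, W = x.shape
    k = kernels.shape[1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            out += kernels[:, i, j][:, None, None] * xp[:, i:i + H, j:j + W]
    return out

def pointwise_conv(x, w, b):
    # 1x1 convolution: mixes channels at every spatial location.
    return np.einsum('oc,chw->ohw', w, x) + b[:, None, None]

def cmunext_block(x, dw_kernels, w_expand, b_expand, w_project, b_project):
    # 1) large-kernel depthwise conv mixes distant spatial positions per channel
    x = x + depthwise_conv(x, dw_kernels)
    # 2) inverted bottleneck: expand channels 4x, nonlinearity, project back
    h = gelu(pointwise_conv(x, w_expand, b_expand))
    return x + pointwise_conv(h, w_project, b_project)

# Example: 8 channels, a 16x16 feature map, and a 7x7 "large" kernel
rng = np.random.default_rng(0)
C, H, W, k = 8, 16, 16, 7
x = rng.standard_normal((C, H, W))
out = cmunext_block(
    x,
    0.01 * rng.standard_normal((C, k, k)),
    0.01 * rng.standard_normal((4 * C, C)), np.zeros(4 * C),
    0.01 * rng.standard_normal((C, 4 * C)), np.zeros(C),
)
print(out.shape)  # (8, 16, 16)
```

Because the depthwise kernel spans 7x7 positions and the pointwise layers then mix all channels, each output location aggregates information well beyond a standard 3x3 neighborhood, which is how this design approximates global context at convolutional cost.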
Related papers
- EViT-Unet: U-Net Like Efficient Vision Transformer for Medical Image Segmentation on Mobile and Edge Devices [5.307205032859535]
We propose EViT-UNet, an efficient ViT-based segmentation network that reduces computational complexity while maintaining accuracy.
EViT-UNet is built on a U-shaped architecture, comprising an encoder, decoder, bottleneck layer, and skip connections.
Experimental results demonstrate that EViT-UNet achieves high accuracy in medical image segmentation while significantly reducing computational complexity.
arXiv Detail & Related papers (2024-10-19T08:42:53Z) - TransUKAN: Computing-Efficient Hybrid KAN-Transformer for Enhanced Medical Image Segmentation [5.280523424712006]
U-Net is currently the most widely used architecture for medical image segmentation.
We have improved the KAN to reduce memory usage and computational load.
This approach enhances the model's capability to capture nonlinear relationships.
arXiv Detail & Related papers (2024-09-23T02:52:49Z) - BEFUnet: A Hybrid CNN-Transformer Architecture for Precise Medical Image
Segmentation [0.0]
This paper proposes an innovative U-shaped network called BEFUnet, which enhances the fusion of body and edge information for precise medical image segmentation.
The BEFUnet comprises three main modules: a novel Local Cross-Attention Feature (LCAF) fusion module, a novel Double-Level Fusion (DLF) module, and a dual-branch encoder.
The LCAF module efficiently fuses edge and body features by selectively performing local cross-attention on features that are spatially close between the two modalities.
arXiv Detail & Related papers (2024-02-13T21:03:36Z) - Leveraging Frequency Domain Learning in 3D Vessel Segmentation [50.54833091336862]
In this study, we leverage Fourier domain learning as a substitute for multi-scale convolutional kernels in 3D hierarchical segmentation models.
We show that our novel network achieves remarkable dice performance (84.37% on ASACA500 and 80.32% on ImageCAS) in tubular vessel segmentation tasks.
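The substitution described in this entry can be illustrated with a toy spectral filter. This is a hedged sketch of the general Fourier-domain-learning idea, not the paper's network: a single learnable complex weight map multiplies the 2D spectrum, which corresponds to a global circular convolution in the spatial domain.

```python
import numpy as np

def spectral_filter(x, weights):
    # x: (H, W) feature map; weights: (H, W) complex-valued learnable filter.
    # Elementwise multiplication in the Fourier domain equals a global
    # circular convolution in the spatial domain, so every output pixel
    # has a receptive field covering the whole map -- no large spatial
    # kernel is needed.
    freq = np.fft.fft2(x)
    return np.real(np.fft.ifft2(freq * weights))

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))
w = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
y = spectral_filter(x, w)
print(y.shape)  # (32, 32)
```

With an all-ones weight map the filter is the identity, which makes the FFT round trip easy to sanity-check.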
arXiv Detail & Related papers (2024-01-11T19:07:58Z) - MobileUtr: Revisiting the relationship between light-weight CNN and
Transformer for efficient medical image segmentation [25.056401513163493]
This work revisits the relationship between CNNs and Transformers in lightweight universal networks for medical image segmentation.
In order to leverage the inductive bias inherent in CNNs, we abstract a Transformer-like lightweight CNNs block (ConvUtr) as the patch embeddings of ViTs.
We build an efficient medical image segmentation model (MobileUtr) based on CNN and Transformer.
arXiv Detail & Related papers (2023-12-04T09:04:05Z) - Cross-receptive Focused Inference Network for Lightweight Image
Super-Resolution [64.25751738088015]
Transformer-based methods have shown impressive performance in single image super-resolution (SISR) tasks.
However, Transformers' need to incorporate contextual information to extract features dynamically is often neglected.
We propose a lightweight Cross-receptive Focused Inference Network (CFIN) that consists of a cascade of CT Blocks mixed with CNN and Transformer.
arXiv Detail & Related papers (2022-07-06T16:32:29Z) - MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z) - Efficient Medical Image Segmentation Based on Knowledge Distillation [30.857487609003197]
We propose an efficient architecture by distilling knowledge from well-trained medical image segmentation networks to train another lightweight network.
We also devise a novel distillation module tailored for medical image segmentation to transfer semantic region information from teacher to student network.
We demonstrate that a lightweight network distilled by our method has non-negligible value in the scenario which requires relatively high operating speed and low storage usage.
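The teacher-student setup in this entry can be sketched with a generic pixelwise soft-label distillation loss. Note this is the standard temperature-scaled KL formulation, not the paper's tailored region-information module; the shapes and names below are illustrative.

```python
import numpy as np

def softmax(z, axis=0):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Pixelwise KL divergence between temperature-softened class
    # distributions, averaged over all spatial locations.
    # logits: (num_classes, H, W); T softens both distributions.
    p = softmax(teacher_logits / T, axis=0)        # teacher soft targets
    log_q = np.log(softmax(student_logits / T, axis=0))
    log_p = np.log(p)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    return float((p * (log_p - log_q)).sum(axis=0).mean()) * T * T

rng = np.random.default_rng(0)
teacher = rng.standard_normal((3, 8, 8))   # 3-class segmentation logits
student = rng.standard_normal((3, 8, 8))
loss = distill_loss(student, teacher)
print(loss >= 0)  # True: KL divergence is non-negative
```

A student that matches the teacher exactly drives this loss to zero, which is a quick way to verify the implementation.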
arXiv Detail & Related papers (2021-08-23T07:41:10Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - Medical Transformer: Gated Axial-Attention for Medical Image
Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
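Axial attention factorizes 2D self-attention into per-axis 1D attention, and the gating adds a learnable scalar that controls how much the attention output contributes. The NumPy sketch below illustrates one gated pass along the height axis only; it is a deliberate simplification (single head, one axis, a single gate on the output path) rather than the Medical Transformer implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gated_axial_attention(x, wq, wk, wv, gate):
    # x: (H, W, C). Attention is applied independently along the height
    # axis: each column attends over its H positions, so the cost is
    # O(H^2 * W) instead of O((H * W)^2) for full 2D attention. `gate` is a
    # learnable scalar scaling the attention output; here a single gate on
    # the output path stands in for the paper's per-component gates.
    q, k, v = x @ wq, x @ wk, x @ wv                       # (H, W, C) each
    C = q.shape[-1]
    # scores[w, h, g]: similarity of row h to row g within column w
    scores = np.einsum('hwc,gwc->whg', q, k) / np.sqrt(C)  # (W, H, H)
    attn = softmax(scores, axis=-1)
    out = np.einsum('whg,gwc->hwc', attn, v)               # (H, W, C)
    return x + gate * out                                  # gated residual

rng = np.random.default_rng(0)
H, W, C = 8, 8, 4
x = rng.standard_normal((H, W, C))
wq, wk, wv = (0.1 * rng.standard_normal((C, C)) for _ in range(3))
y = gated_axial_attention(x, wq, wk, wv, gate=0.5)
print(y.shape)  # (8, 8, 4)
```

Setting the gate to zero reduces the block to the identity, mirroring how gating lets the network suppress unreliable attention early in training.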
arXiv Detail & Related papers (2021-02-21T18:35:14Z) - TransUNet: Transformers Make Strong Encoders for Medical Image
Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the U-shaped architecture, also known as U-Net, has become the de facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences.