MALUNet: A Multi-Attention and Light-weight UNet for Skin Lesion
Segmentation
- URL: http://arxiv.org/abs/2211.01784v1
- Date: Thu, 3 Nov 2022 13:19:22 GMT
- Authors: Jiacheng Ruan, Suncheng Xiang, Mingye Xie, Ting Liu and Yuzhuo Fu
- Abstract summary: We propose a light-weight model to achieve competitive performances for skin lesion segmentation at the lowest cost of parameters and computational complexity.
We combine the four modules with our U-shape architecture to obtain a light-weight medical image segmentation model dubbed MALUNet.
Compared with UNet, our model improves the mIoU and DSC metrics by 2.39% and 1.49%, respectively, with a 44x and 166x reduction in the number of parameters and computational complexity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, some pioneering works have preferred applying more complex modules to improve segmentation performance. However, such models are ill-suited to actual clinical environments, where computing resources are limited. To address this
challenge, we propose a light-weight model to achieve competitive performances
for skin lesion segmentation at the lowest cost of parameters and computational
complexity so far. Briefly, we propose four modules: (1) DGA consists of
dilated convolution and gated attention mechanisms to extract global and local
feature information; (2) IEA, which is based on external attention to
characterize the overall datasets and enhance the connection between samples;
(3) CAB is composed of 1D convolution and fully connected layers to perform
global and local fusion of multi-stage features and generate attention maps
along the channel axis; (4) SAB, which operates on multi-stage features with a
shared 2D convolution to generate attention maps along the spatial axis. We
combine the four modules with our U-shape architecture to obtain a light-weight
medical image segmentation model dubbed MALUNet. Compared with UNet, our model improves
the mIoU and DSC metrics by 2.39% and 1.49%, respectively, with a 44x and 166x
reduction in the number of parameters and computational complexity. In
addition, we conduct comparison experiments on two skin lesion segmentation
datasets (ISIC2017 and ISIC2018). Experimental results show that our model
achieves state-of-the-art in balancing the number of parameters, computational
complexity and segmentation performances. Code is available at
https://github.com/JCruan519/MALUNet.
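The SAB idea described above, pooling multi-stage features along the channel axis and passing the result through a shared 2D convolution to produce a spatial attention map, can be sketched in plain NumPy. This is a simplified illustration, not the paper's implementation: the average/max pooling choice and the uniform placeholder kernel are assumptions standing in for trained weights.

```python
import numpy as np

def spatial_attention(x, kernel=None):
    """SAB-style sketch: pool along the channel axis, convolve the pooled
    maps with a shared 2D kernel, and apply a sigmoid to obtain a per-pixel
    attention map. x has shape (C, H, W)."""
    # Channel-wise average and max pooling -> two (H, W) descriptor maps.
    avg_map = x.mean(axis=0)
    max_map = x.max(axis=0)
    pooled = np.stack([avg_map, max_map])          # (2, H, W)
    if kernel is None:
        # Placeholder shared weights; a trained model would learn these.
        kernel = np.full((2, 3, 3), 1.0 / 18.0)
    H, W = avg_map.shape
    padded = np.pad(pooled, ((0, 0), (1, 1), (1, 1)), mode="edge")
    att = np.empty((H, W))
    for i in range(H):                             # naive 3x3 convolution
        for j in range(W):
            att[i, j] = np.sum(padded[:, i:i + 3, j:j + 3] * kernel)
    att = 1.0 / (1.0 + np.exp(-att))               # sigmoid -> (0, 1)
    return x * att                                 # broadcast over channels
```

Because the same kernel is shared across all stages, the parameter cost of this attention branch stays constant no matter how many encoder stages feed into it, which is one way such designs keep the overall model light-weight.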
Related papers
- EM-Net: Efficient Channel and Frequency Learning with Mamba for 3D Medical Image Segmentation [3.6813810514531085]
Inspired by the success of Mamba, we introduce a novel Mamba-based 3D medical image segmentation model called EM-Net.
Comprehensive experiments on two challenging multi-organ datasets with other state-of-the-art (SOTA) algorithms show that our method exhibits better segmentation accuracy while requiring nearly half the parameter size of SOTA models and 2x faster training speed.
arXiv Detail & Related papers (2024-09-26T09:34:33Z) - GroupMamba: Parameter-Efficient and Accurate Group Visual State Space Model [66.35608254724566]
State-space models (SSMs) have showcased effective performance in modeling long-range dependencies with subquadratic complexity.
However, pure SSM-based models still face challenges related to stability and achieving optimal performance on computer vision tasks.
Our paper addresses the challenges of scaling SSM-based models for computer vision, particularly the instability and inefficiency of large model sizes.
arXiv Detail & Related papers (2024-07-18T17:59:58Z) - ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z) - GCtx-UNet: Efficient Network for Medical Image Segmentation [0.2353157426758003]
GCtx-UNet is a lightweight segmentation architecture that can capture global and local image features with accuracy better than state-of-the-art approaches.
GCtx-UNet is evaluated on the Synapse multi-organ abdominal CT dataset, the ACDC cardiac MRI dataset, and several polyp segmentation datasets.
arXiv Detail & Related papers (2024-06-09T19:17:14Z) - PMFSNet: Polarized Multi-scale Feature Self-attention Network For
Lightweight Medical Image Segmentation [6.134314911212846]
Current state-of-the-art medical image segmentation methods prioritize accuracy but often at the expense of increased computational demands and larger model sizes.
We propose PMFSNet, a novel medical image segmentation model that balances global and local feature processing while avoiding computational redundancy.
It incorporates a plug-and-play PMFS block, a multi-scale feature enhancement module based on attention mechanisms, to capture long-term dependencies.
arXiv Detail & Related papers (2024-01-15T10:26:47Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - Lesion-aware Dynamic Kernel for Polyp Segmentation [49.63274623103663]
We propose a lesion-aware dynamic network (LDNet) for polyp segmentation.
It adopts a traditional U-shaped encoder-decoder structure, incorporating a dynamic kernel generation and updating scheme.
This simple but effective scheme endows our model with powerful segmentation performance and generalization capability.
arXiv Detail & Related papers (2023-01-12T09:53:57Z) - UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation [93.88170217725805]
We propose a 3D medical image segmentation approach, named UNETR++, that offers both high-quality segmentation masks as well as efficiency in terms of parameters, compute cost, and inference speed.
The core of our design is the introduction of a novel efficient paired attention (EPA) block that efficiently learns spatial and channel-wise discriminative features.
Our evaluations on five benchmarks, Synapse, BTCV, ACDC, BRaTs, and Decathlon-Lung, reveal the effectiveness of our contributions in terms of both efficiency and accuracy.
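UNETR++'s EPA block learns spatial and channel-wise discriminative features; its exact design is not reproduced here, but the channel-wise half of such a scheme can be sketched as a squeeze-and-excitation-style branch. The bottleneck sizes and random placeholder weights below are illustrative assumptions, not the published architecture.

```python
import numpy as np

def channel_attention(x, w1=None, w2=None, reduction=2):
    """Channel-attention sketch: global average pooling squeezes each
    channel to a scalar, a two-layer bottleneck scores the channels, and
    a sigmoid yields per-channel weights. x has shape (C, H, W)."""
    C = x.shape[0]
    hidden = max(C // reduction, 1)
    rng = np.random.default_rng(0)
    if w1 is None:
        w1 = rng.standard_normal((hidden, C)) * 0.1   # placeholder weights
    if w2 is None:
        w2 = rng.standard_normal((C, hidden)) * 0.1
    z = x.mean(axis=(1, 2))                 # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)             # bottleneck FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))     # FC + sigmoid -> channel weights
    return x * s[:, None, None]             # excite: rescale each channel
```

The reduction ratio trades accuracy for parameter count: a larger `reduction` shrinks the bottleneck, which is the usual lever for keeping attention branches cheap in efficiency-focused models.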
arXiv Detail & Related papers (2022-12-08T18:59:57Z) - UNet#: A UNet-like Redesigning Skip Connections for Medical Image
Segmentation [13.767615201220138]
We propose a novel network structure combining dense skip connections and full-scale skip connections, named UNet-sharp (UNet#) after its similarity in shape to the # symbol.
The proposed UNet# can aggregate feature maps of different scales in the decoder sub-network and capture fine-grained details and coarse-grained semantics from the full scale.
arXiv Detail & Related papers (2022-05-24T03:40:48Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - PAENet: A Progressive Attention-Enhanced Network for 3D to 2D Retinal
Vessel Segmentation [0.0]
3D to 2D retinal vessel segmentation is a challenging problem in Optical Coherence Tomography Angiography (OCTA) images.
We propose a Progressive Attention-Enhanced Network (PAENet) based on attention mechanisms to extract rich feature representation.
Our proposed algorithm achieves state-of-the-art performance compared with previous methods.
arXiv Detail & Related papers (2021-08-26T10:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.