LSSF-Net: Lightweight Segmentation with Self-Awareness, Spatial Attention, and Focal Modulation
- URL: http://arxiv.org/abs/2409.01572v1
- Date: Tue, 3 Sep 2024 03:06:32 GMT
- Title: LSSF-Net: Lightweight Segmentation with Self-Awareness, Spatial Attention, and Focal Modulation
- Authors: Hamza Farooq, Zuhair Zafar, Ahsan Saadat, Tariq M Khan, Shahzaib Iqbal, Imran Razzak
- Abstract summary: We propose a novel lightweight network specifically designed for skin lesion segmentation on mobile devices.
Our network comprises an encoder-decoder architecture that incorporates conformer-based focal modulation attention, self-aware local and global spatial attention, and split channel-shuffle.
Empirical findings substantiate its state-of-the-art performance, notably reflected in a high Jaccard index.
- Score: 8.566930077350184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate segmentation of skin lesions within dermoscopic images plays a crucial role in the timely identification of skin cancer for computer-aided diagnosis on mobile platforms. However, varying shapes of the lesions, lack of defined edges, and the presence of obstructions such as hair strands and marker colors make this challenge more complex. Additionally, skin lesions often exhibit subtle variations in texture and color that are difficult to differentiate from surrounding healthy skin, necessitating models that can capture both fine-grained details and broader contextual information. Currently, melanoma segmentation models are commonly based on fully connected networks and U-Nets. However, these models often struggle to capture the complex and varied characteristics of skin lesions, such as indistinct boundaries and diverse lesion appearances, which can lead to suboptimal segmentation performance. To address these challenges, we propose a novel lightweight network specifically designed for skin lesion segmentation on mobile devices, featuring a minimal number of learnable parameters (only 0.8 million). This network comprises an encoder-decoder architecture that incorporates conformer-based focal modulation attention, self-aware local and global spatial attention, and split channel-shuffle. The efficacy of our model has been evaluated on four well-established benchmark datasets for skin lesion segmentation: ISIC 2016, ISIC 2017, ISIC 2018, and PH2. Empirical findings substantiate its state-of-the-art performance, notably reflected in a high Jaccard index.
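For readers unfamiliar with focal modulation, the sketch below illustrates the general idea in PyTorch: context is aggregated hierarchically with depthwise convolutions, gated per pixel, and then used to modulate a query. The class name, number of focal levels, and all hyperparameters are assumptions chosen for illustration; this is not the conformer-based block used in LSSF-Net.

```python
import torch
import torch.nn as nn

class FocalModulationSketch(nn.Module):
    """Simplified focal-modulation-style block (illustrative only, not LSSF-Net's module)."""
    def __init__(self, dim, focal_levels=2, focal_window=3):
        super().__init__()
        self.focal_levels = focal_levels
        # one projection produces the query, the context, and per-level gates
        self.f = nn.Linear(dim, 2 * dim + focal_levels + 1)
        self.context_layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size=focal_window + 2 * k,
                          padding=(focal_window + 2 * k) // 2, groups=dim),
                nn.GELU(),
            )
            for k in range(focal_levels)
        ])
        self.h = nn.Conv2d(dim, dim, kernel_size=1)  # modulator projection
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                              # x: (B, H, W, C)
        B, H, W, C = x.shape
        q, ctx, gates = torch.split(self.f(x), [C, C, self.focal_levels + 1], dim=-1)
        ctx = ctx.permute(0, 3, 1, 2)                  # (B, C, H, W)
        gates = gates.permute(0, 3, 1, 2)              # (B, levels + 1, H, W)
        ctx_all = torch.zeros_like(ctx)
        for k, layer in enumerate(self.context_layers):
            ctx = layer(ctx)                           # growing receptive field per level
            ctx_all = ctx_all + ctx * gates[:, k:k + 1]
        # global average context as the final focal level
        ctx_all = ctx_all + ctx.mean(dim=(2, 3), keepdim=True) * gates[:, -1:]
        modulator = self.h(ctx_all).permute(0, 2, 3, 1)  # back to (B, H, W, C)
        return self.proj(q * modulator)
```

LSSF-Net combines this style of attention with self-aware local and global spatial attention and split channel-shuffle, none of which is reproduced here.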
Related papers
- TESL-Net: A Transformer-Enhanced CNN for Accurate Skin Lesion Segmentation [9.077654650104057]
Early detection of skin cancer relies on precise segmentation of dermoscopic images of skin lesions.
Recent methods for melanoma segmentation include U-Nets and fully connected networks (FCNs).
We introduce a novel network named TESL-Net for the segmentation of skin lesions.
arXiv Detail & Related papers (2024-08-19T03:49:48Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- IARS SegNet: Interpretable Attention Residual Skip connection SegNet for melanoma segmentation [0.0]
IARS SegNet is an advanced segmentation framework built upon the SegNet baseline model.
Skip connections, residual convolutions, and a global attention mechanism play a pivotal role in accentuating the significance of clinically relevant regions.
This enhancement highlights critical regions, fosters better understanding, and leads to more accurate skin lesion segmentation for melanoma diagnosis.
arXiv Detail & Related papers (2023-10-31T09:04:09Z)
- Unsupervised Skin Lesion Segmentation via Structural Entropy Minimization on Multi-Scale Superpixel Graphs [59.19218582436495]
We propose an unsupervised Skin Lesion sEgmentation framework based on structural entropy and isolation forest outlier Detection, namely SLED.
Skin lesions are segmented by minimizing the structural entropy of a superpixel graph constructed from the dermoscopic image.
We characterize the consistency of healthy skin features and devise a novel multi-scale segmentation mechanism by outlier detection, which enhances the segmentation accuracy by leveraging the superpixel features from multiple scales.
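As a rough illustration of the superpixel-graph setup mentioned above, the sketch below builds a single-scale superpixel graph with SLIC and mean-colour node features. The function name, parameters, and 4-connectivity rule are assumptions for illustration; SLED's structural-entropy minimization and isolation-forest outlier detection are not reproduced here.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_graph(image, n_segments=200):
    """Segment an RGB image (H, W, 3) into superpixels and return mean-colour
    node features plus an edge set built from 4-connected label adjacency.
    Illustrative sketch only."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n = labels.max() + 1
    feats = np.zeros((n, 3))
    for k in range(n):
        feats[k] = image[labels == k].mean(axis=0)   # mean RGB per superpixel
    edges = set()
    # horizontally and vertically adjacent superpixels share an edge
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    return feats, edges
```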
arXiv Detail & Related papers (2023-09-05T02:15:51Z)
- Scale-aware Super-resolution Network with Dual Affinity Learning for Lesion Segmentation from Medical Images [50.76668288066681]
We present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images.
Our proposed network achieved consistent improvements compared to other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-30T14:25:55Z)
- Generative Adversarial Networks based Skin Lesion Segmentation [7.9234173309439715]
We propose a novel adversarial learning-based framework called Efficient-GAN that uses an unsupervised generative network to generate accurate lesion masks.
It outperforms current state-of-the-art skin lesion segmentation approaches with a Dice coefficient, Jaccard similarity, and accuracy of 90.1%, 83.6%, and 94.5%, respectively.
We also design a lightweight segmentation framework (MGAN) that achieves performance comparable to EGAN with an order of magnitude fewer training parameters.
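For reference, the Dice coefficient, Jaccard similarity, and accuracy quoted above can be computed from binary masks as in the minimal sketch below; the function name and smoothing epsilon are illustrative choices, not taken from the paper.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """pred, target: arrays of 0/1 (or bool) with the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)   # Dice coefficient
    jaccard = inter / (union + eps)                          # Jaccard similarity (IoU)
    accuracy = (pred == target).mean()                       # pixel accuracy
    return dice, jaccard, accuracy
```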
arXiv Detail & Related papers (2023-05-29T15:51:31Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Salient Skin Lesion Segmentation via Dilated Scale-Wise Feature Fusion Network [28.709314434820953]
Current skin lesion segmentation approaches show poor performance in challenging circumstances.
We propose a dilated scale-wise feature fusion network based on convolution factorization.
Our proposed model consistently showcases state-of-the-art results.
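As a generic sketch of the kind of building block this summary describes (dilated, factorized convolutions whose outputs are fused across scales), the PyTorch module below combines depthwise dilated convolutions at several rates with a pointwise fusion layer. The class name, dilation rates, and fusion choice are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DilatedFusionSketch(nn.Module):
    """Illustrative dilated scale-wise fusion block (not the paper's architecture)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # factorized: depthwise dilated conv followed by pointwise conv
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, groups=channels),
                nn.Conv2d(channels, channels, 1),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        # concatenate multi-scale responses and fuse them back to `channels`
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```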
arXiv Detail & Related papers (2022-05-20T16:08:37Z)
- Leveraging Adaptive Color Augmentation in Convolutional Neural Networks for Deep Skin Lesion Segmentation [0.0]
We propose an adaptive color augmentation technique to amplify data expression and model performance.
We qualitatively identify and verify the semantic structural features learned by the network for discriminating skin lesions against normal skin tissue.
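For context on color augmentation in general, the sketch below applies a plain HSV jitter to an RGB image. It is not the adaptive scheme proposed in the paper; the jitter ranges and function name are assumptions for illustration.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def random_color_jitter(image, rng=None):
    """image: float RGB array in [0, 1]; returns a colour-jittered copy."""
    rng = rng or np.random.default_rng()
    hsv = rgb2hsv(image)
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(-0.03, 0.03)) % 1.0          # hue shift
    hsv[..., 1] = np.clip(hsv[..., 1] * rng.uniform(0.8, 1.2), 0.0, 1.0)  # saturation scale
    hsv[..., 2] = np.clip(hsv[..., 2] * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness scale
    return hsv2rgb(hsv)
```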
arXiv Detail & Related papers (2020-10-31T00:16:23Z)
- DONet: Dual Objective Networks for Skin Lesion Segmentation [77.9806410198298]
We propose a simple yet effective framework, named Dual Objective Networks (DONet), to improve the skin lesion segmentation.
Our DONet adopts two symmetric decoders to produce different predictions for approaching different objectives.
To address the challenge of the large variety of lesion scales and shapes in dermoscopic images, we additionally propose a recurrent context encoding module (RCEM).
arXiv Detail & Related papers (2020-08-19T06:02:46Z)