SkinSAM: Empowering Skin Cancer Segmentation with Segment Anything Model
- URL: http://arxiv.org/abs/2304.13973v1
- Date: Thu, 27 Apr 2023 06:42:59 GMT
- Title: SkinSAM: Empowering Skin Cancer Segmentation with Segment Anything Model
- Authors: Mingzhe Hu, Yuheng Li, Xiaofeng Yang
- Abstract summary: SkinSAM is a fine-tuned model based on the Segment Anything Model that shows outstanding segmentation performance.
The models are validated on the HAM10000 dataset, which includes 10,015 dermatoscopic images.
- Score: 2.752682633344525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Skin cancer is a prevalent and potentially fatal disease that requires
accurate and efficient diagnosis and treatment. Although manual tracing is the
current standard in clinics, automated tools are desired to reduce human labor
and improve accuracy. However, developing such tools is challenging due to the
highly variable appearance of skin cancers and complex objects in the
background. In this paper, we present SkinSAM, a fine-tuned model based on the
Segment Anything Model that showed outstanding segmentation performance. The
models are validated on the HAM10000 dataset, which includes 10,015 dermatoscopic
images. While the larger models (ViT_L, ViT_H) performed better than the smaller
one (ViT_b), the fine-tuned model (ViT_b_finetuned) exhibited the greatest
improvement, with a mean pixel accuracy of 0.945, a mean Dice score of 0.8879,
and a mean IoU score of 0.7843. Among the lesion types, vascular lesions showed
the best segmentation results. Our research demonstrates the great potential of
adapting SAM to medical image segmentation tasks.
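The three metrics reported in the abstract (mean pixel accuracy, mean Dice score, mean IoU) can be sketched for binary lesion masks as below. This is a minimal illustration of the standard definitions, not the paper's evaluation code; the toy masks are hypothetical.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == gt).mean())

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient for binary masks: 2|A & B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

def iou_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union (Jaccard index) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float((inter + eps) / (union + eps))

# Toy 4x4 masks that overlap on one column.
pred = np.array([[1, 1, 0, 0]] * 4)
gt = np.array([[0, 1, 1, 0]] * 4)
print(pixel_accuracy(pred, gt))  # 0.5
print(dice_score(pred, gt))      # ~0.5
print(iou_score(pred, gt))       # ~0.3333
```

The dataset-level "mean" figures in the abstract would then be averages of these per-image scores over the test set.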
Related papers
- Enhancing Skin Lesion Diagnosis with Ensemble Learning [15.569484546674776]
This study examines the implementation of deep learning methods to assist in the diagnosis of skin lesions using the HAM10000 dataset.
To further enhance classification accuracy, we developed ensemble models employing max voting, average voting, and stacking, resulting in accuracies of 0.803, 0.82, and 0.83, respectively.
Building on the best-performing ensemble learning model, stacking, we developed our proposed model, SkinNet, which incorporates a customized architecture and fine-tuning, achieving an accuracy of 0.867 and an AUC of 0.96.
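The max-voting and average-voting ensemble strategies mentioned above can be sketched as follows. This is a generic illustration with made-up model outputs, not the SkinNet implementation; stacking (training a meta-learner on model outputs) is omitted for brevity.

```python
import numpy as np

def max_voting(probs_list):
    """Hard voting: each model casts one vote for its argmax class;
    the majority class per sample wins."""
    votes = np.stack([p.argmax(axis=1) for p in probs_list])  # (models, samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

def average_voting(probs_list):
    """Soft voting: average the predicted class probabilities, then argmax."""
    return np.mean(probs_list, axis=0).argmax(axis=1)

# Three hypothetical models, two samples, two classes.
m1 = np.array([[0.9, 0.1], [0.4, 0.6]])
m2 = np.array([[0.6, 0.4], [0.7, 0.3]])
m3 = np.array([[0.2, 0.8], [0.9, 0.1]])
print(max_voting([m1, m2, m3]))      # [0 0]
print(average_voting([m1, m2, m3]))  # [0 0]
```

Soft voting uses the models' confidence, so it can disagree with hard voting when one model is highly confident and the others are borderline.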
arXiv Detail & Related papers (2024-09-06T16:19:01Z) - SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images [40.4422523499489]
Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks.
We propose SAM-UNet, a new foundation model that incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions.
We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images.
arXiv Detail & Related papers (2024-08-19T11:01:00Z) - TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p < 0.001, and 0.762 versus 0.542, p < 0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team had a lesion-wise median Dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - Skin Cancer Segmentation and Classification Using Vision Transformer for
Automatic Analysis in Dermatoscopy-based Non-invasive Digital System [0.0]
This study introduces a groundbreaking approach to skin cancer classification, employing the Vision Transformer.
The Vision Transformer is a state-of-the-art deep learning architecture renowned for its success in diverse image analysis tasks.
The Segment Anything Model aids in precise segmentation of cancerous areas, attaining high IoU and Dice coefficient scores.
arXiv Detail & Related papers (2024-01-09T11:22:54Z) - Certification of Deep Learning Models for Medical Image Segmentation [44.177565298565966]
We present for the first time a certified segmentation baseline for medical imaging based on randomized smoothing and diffusion models.
Our results show that leveraging the power of denoising diffusion probabilistic models helps us overcome the limits of randomized smoothing.
arXiv Detail & Related papers (2023-10-05T16:40:33Z) - Multivariate Analysis on Performance Gaps of Artificial Intelligence
Models in Screening Mammography [4.123006816939975]
Deep learning models for abnormality classification can perform well in screening mammography.
The demographic, imaging, and clinical characteristics associated with increased risk of model failure remain unclear.
We assessed model performance by subgroups defined by age, race, pathologic outcome, tissue density, and imaging characteristics.
arXiv Detail & Related papers (2023-05-08T02:28:45Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT
images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - Automatic Segmentation of Head and Neck Tumor: How Powerful Transformers
Are? [0.0]
We develop a vision transformers-based method to automatically delineate H&N tumor.
We compare its results to leading convolutional neural network (CNN)-based models.
We show that the selected transformer-based model can achieve results on a par with CNN-based ones.
arXiv Detail & Related papers (2022-01-17T07:31:52Z) - DenseNet approach to segmentation and classification of dermatoscopic
skin lesions images [0.0]
This paper proposes an improved method for segmentation and classification for skin lesions using two architectures.
The combination of U-Net and DenseNet121 provides acceptable results in dermatoscopic image analysis.
Cancerous and non-cancerous samples were detected by the DenseNet121 network with 79.49% and 93.11% accuracy, respectively.
arXiv Detail & Related papers (2021-10-09T19:12:23Z) - An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
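The global-then-local framework described in the last entry (a cheap network flags informative regions, a stronger network inspects them, and a fusion step combines both) can be sketched in a stripped-down form. Everything here is a hypothetical stand-in: the saliency map, patch scores, and fusion weight are placeholders, not the paper's networks.

```python
import numpy as np

def top_k_patches(saliency: np.ndarray, k: int):
    """Return (rows, cols) of the k highest-saliency patches, as a
    low-capacity global network might select them."""
    idx = np.argsort(saliency.ravel())[::-1][:k]
    return np.unravel_index(idx, saliency.shape)

def fuse(global_logit: float, local_logits: np.ndarray, w: float = 0.5):
    """Weighted fusion of the global prediction with the mean local
    prediction from the high-capacity network."""
    return w * global_logit + (1 - w) * float(local_logits.mean())

# Hypothetical 4x4 patch-saliency map; higher means more informative.
saliency = np.arange(16, dtype=float).reshape(4, 4)
rows, cols = top_k_patches(saliency, k=3)
print(list(zip(rows.tolist(), cols.tolist())))  # [(3, 3), (3, 2), (3, 1)]

# Stub scores the high-capacity network would produce for those patches.
local_logits = np.array([0.8, 0.6, 0.7])
print(fuse(0.4, local_logits))  # global 0.4 pulled up by local evidence
```

The design point is memory: only the small network ever sees the full high-resolution image, while the expensive network runs on a handful of crops.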
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.