Zero-shot performance of the Segment Anything Model (SAM) in 2D medical
imaging: A comprehensive evaluation and practical guidelines
- URL: http://arxiv.org/abs/2305.00109v2
- Date: Fri, 5 May 2023 19:02:03 GMT
- Title: Zero-shot performance of the Segment Anything Model (SAM) in 2D medical
imaging: A comprehensive evaluation and practical guidelines
- Authors: Christian Mattjie and Luis Vinicius de Moura and Rafaela Cappelari
Ravazio and Lucas Silveira Kupssinskü and Otávio Parraga and Marcelo
Mussi Delucis and Rodrigo Coelho Barros
- Abstract summary: Segment Anything Model (SAM) harnesses a massive training dataset to segment nearly any object.
Our findings reveal that SAM's zero-shot performance is not only comparable to, but in certain cases surpasses, the current state of the art.
We propose practical guidelines that require minimal interaction while consistently yielding robust outcomes.
- Score: 0.13854111346209866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation in medical imaging is a critical component for the diagnosis,
monitoring, and treatment of various diseases and medical conditions.
Presently, the medical segmentation landscape is dominated by numerous
specialized deep learning models, each fine-tuned for specific segmentation
tasks and image modalities. The recently introduced Segment Anything Model
(SAM) employs the ViT neural architecture and harnesses a massive training
dataset to segment nearly any object; however, its suitability to the medical
domain has not yet been investigated. In this study, we explore the zero-shot
performance of SAM in medical imaging by implementing eight distinct prompt
strategies across six datasets from four imaging modalities, including X-ray,
ultrasound, dermatoscopy, and colonoscopy. Our findings reveal that SAM's
zero-shot performance is not only comparable to, but in certain cases
surpasses, the current state of the art. Based on these results, we propose
practical guidelines that require minimal interaction while consistently
yielding robust outcomes across all assessed contexts. The source code, along
with a demonstration of the recommended guidelines, can be accessed at
https://github.com/Malta-Lab/SAM-zero-shot-in-Medical-Imaging.
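The paper's actual prompt strategies live in the linked repository. Purely as an illustration of one common strategy of this kind, here is a minimal NumPy sketch that derives a tight bounding-box prompt (in the XYXY format SAM's predictor expects) from a binary reference mask; the function name and the padding value are assumptions made for this sketch, not details taken from the paper:

```python
import numpy as np

def box_prompt_from_mask(mask: np.ndarray, pad: int = 5) -> np.ndarray:
    """Derive a padded XYXY bounding-box prompt from a binary mask.

    Illustrative only: the padding value is an arbitrary choice, and the
    paper's evaluated prompt strategies are in its linked repository.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("mask is empty; no box prompt can be derived")
    h, w = mask.shape
    x0 = max(int(xs.min()) - pad, 0)
    y0 = max(int(ys.min()) - pad, 0)
    x1 = min(int(xs.max()) + pad, w - 1)
    y1 = min(int(ys.max()) + pad, h - 1)
    return np.array([x0, y0, x1, y1])

# Example: a 3x4 foreground block inside a 100x100 mask.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:13, 20:24] = 1
print(box_prompt_from_mask(mask))  # [15  5 28 17]
```

A box produced this way can be passed as the `box` argument of the `segment-anything` library's `SamPredictor.predict`; clamping to the image bounds keeps the prompt valid near the borders.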
Related papers
- MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans.
Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss.
We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
arXiv Detail & Related papers (2024-09-28T23:10:37Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
In extensive tests across three diverse segmentation tasks and medical image modalities, the proposed framework demonstrates excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z)
- Segment anything model for head and neck tumor segmentation with CT, PET and MRI multi-modality images [0.04924932828166548]
This study investigates the Segment Anything Model (SAM), recognized for requiring minimal human prompting.
We specifically examine MedSAM, a version of SAM fine-tuned with large-scale public medical images.
Our study demonstrates that fine-tuning SAM significantly enhances its segmentation accuracy, building upon the already effective zero-shot results.
arXiv Detail & Related papers (2024-02-27T12:26:45Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation [5.547422331445511]
We report quantitative and qualitative zero-shot segmentation results on nine medical image segmentation benchmarks.
Our study indicates the versatility of generalist vision foundation models on medical imaging.
arXiv Detail & Related papers (2023-04-25T08:07:59Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by ranges of $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
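Several of the entries above report segmentation quality as Dice-Sorensen coefficients. For reference, a minimal NumPy sketch of that metric for binary masks (a generic textbook implementation, not code from any of the listed papers; the epsilon guard against empty masks is an assumption of this sketch):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice-Sorensen coefficient between two binary masks.

    Generic reference implementation: 2|A∩B| / (|A| + |B|), with a small
    epsilon so two empty masks do not divide by zero.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

a = np.zeros((4, 4), dtype=np.uint8); a[:2, :2] = 1  # 4 foreground pixels
b = np.zeros((4, 4), dtype=np.uint8); b[:2, :4] = 1  # 8 foreground pixels
print(round(dice_coefficient(a, b), 4))  # 0.6667
```

The metric ranges from 0 (no overlap) to 1 (identical masks), which is why the CHASe entry can quote improvements as percentage-point ranges.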
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.