Segment Anything Model for Medical Images?
- URL: http://arxiv.org/abs/2304.14660v7
- Date: Wed, 17 Jan 2024 14:42:40 GMT
- Title: Segment Anything Model for Medical Images?
- Authors: Yuhao Huang, Xin Yang, Lian Liu, Han Zhou, Ao Chang, Xinrui Zhou, Rusi
Chen, Junxuan Yu, Jiongquan Chen, Chaoyu Chen, Sijing Liu, Haozhe Chi, Xindi
Hu, Kejuan Yue, Lei Li, Vicente Grau, Deng-Ping Fan, Fajin Dong, Dong Ni
- Abstract summary: The Segment Anything Model (SAM) is the first foundation model for general image segmentation.
We built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks.
SAM showed remarkable performance in some specific objects but was unstable, imperfect, or even totally failed in other situations.
- Score: 38.44750512574108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Segment Anything Model (SAM) is the first foundation model for general
image segmentation. It has achieved impressive results on various natural image
segmentation tasks. However, medical image segmentation (MIS) is more
challenging because of the complex modalities, fine anatomical structures,
uncertain and complex object boundaries, and wide-range object scales. To fully
validate SAM's performance on medical data, we collected and sorted 53
open-source datasets and built a large medical segmentation dataset with 18
modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images,
and 6033K masks. We comprehensively analyzed different models and strategies on
the so-called COSMOS 1050K dataset. Our main findings include the following:
1) SAM showed remarkable performance on some specific objects but was unstable,
imperfect, or even failed entirely in other situations. 2) SAM with the large
ViT-H backbone showed better overall performance than with the small ViT-B.
3) SAM performed better with manual prompts, especially boxes, than in
Everything mode. 4) SAM could assist human annotation, yielding high labeling
quality in less time. 5) SAM was sensitive to randomness in the center-point
and tight-box prompts, and could suffer a serious performance drop. 6) SAM
performed better than interactive methods given one or a few points, but was
outpaced as the number of points increased. 7) SAM's performance correlated
with several factors, including boundary complexity and intensity differences.
8) Fine-tuning SAM on specific medical tasks improved its average Dice score
by 4.39% and 6.68% for ViT-B and ViT-H, respectively. We hope that this
comprehensive report can help researchers explore the potential of SAM
applications in MIS and guide its appropriate use and development.
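The fine-tuning gains in finding 8 are reported in terms of the Dice score. For reference, the metric itself can be sketched as follows; this is a minimal NumPy illustration with toy masks, not data or code from the COSMOS 1050K study:

```python
# Minimal sketch of the Dice similarity coefficient for binary masks:
# Dice = 2|P ∩ G| / (|P| + |G|). Mask shapes and values are illustrative.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Compute the Dice overlap between a predicted and a ground-truth mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy 4x4 masks: the prediction covers 2 of 3 ground-truth pixels.
gt = np.zeros((4, 4), dtype=np.uint8)
gt[1, 1:4] = 1          # 3 ground-truth foreground pixels
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1, 1:3] = 1        # 2 predicted pixels, both correct
print(round(dice_coefficient(pred, gt), 3))  # 2*2/(2+3) = 0.8
```

A relative improvement of 4.39% or 6.68% in this score, averaged over targets, is what the fine-tuning experiments in the paper measure.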
Related papers
- Segment Anything without Supervision [65.93211374889196]
We present Unsupervised SAM (UnSAM) for promptable and automatic whole-image segmentation.
UnSAM utilizes a divide-and-conquer strategy to "discover" the hierarchical structure of visual scenes.
We show that supervised SAM can also benefit from our self-supervised labels.
arXiv Detail & Related papers (2024-06-28T17:47:32Z)
- MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method enables the extraction of richer marine information, from global contextual cues down to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z)
- Stable Segment Anything Model [79.9005670886038]
The Segment Anything Model (SAM) achieves remarkable promptable segmentation given high-quality prompts.
This paper presents the first comprehensive analysis on SAM's segmentation stability across a diverse spectrum of prompt qualities.
Our solution, termed Stable-SAM, offers several advantages: 1) improved segmentation stability across a wide range of prompt qualities, while 2) retaining SAM's powerful promptable segmentation efficiency and generality.
arXiv Detail & Related papers (2023-11-27T12:51:42Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- SAM-Med2D [34.82072231983896]
We introduce SAM-Med2D, the most comprehensive study to date on applying SAM to medical 2D images.
We first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets.
We fine-tune the encoder and decoder of the original SAM to obtain a well-performed SAM-Med2D.
arXiv Detail & Related papers (2023-08-30T17:59:02Z)
- Customized Segment Anything Model for Medical Image Segmentation [10.933449793055313]
We build upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation.
SAMed applies the low-rank adaptation (LoRA) fine-tuning strategy to the SAM image encoder and fine-tunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets.
Our trained SAMed model achieves semantic segmentation performance on medical images on par with state-of-the-art methods.
arXiv Detail & Related papers (2023-04-26T19:05:34Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)
- Computer-Vision Benchmark Segment-Anything Model (SAM) in Medical Images: Accuracy in 12 Datasets [1.6624933615451842]
The segment-anything model (SAM) shows promise as a benchmark model and a universal solution to segment various natural images.
SAM was tested on 12 public medical image segmentation datasets involving 7,451 subjects.
Dice overlaps from SAM were significantly lower than those of the five medical-image-based algorithms in all 12 medical image segmentation datasets.
arXiv Detail & Related papers (2023-04-18T22:16:49Z)
- SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More [13.047310918166762]
We propose SAM-Adapter, which incorporates domain-specific information or visual prompts into the segmentation network by using simple yet effective adapters.
We can even outperform task-specific network models and achieve state-of-the-art performance in the task we tested: camouflaged object detection.
arXiv Detail & Related papers (2023-04-18T17:38:54Z)
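Several of the adaptation works above (e.g., SAMed) fine-tune the SAM image encoder with low-rank adaptation (LoRA). As a rough illustration of the general LoRA idea, not SAMed's actual code, and with all names and dimensions illustrative, the adapted layer keeps the pretrained weight frozen and learns only a rank-r update:

```python
# Hedged sketch of low-rank adaptation (LoRA): freeze the pretrained
# weight W and learn only a rank-r update B @ A, so the adapted layer
# computes (W + B @ A) @ x. Toy dimensions; not SAMed's implementation.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2                    # toy sizes; in practice r << d

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection,
                                            # zero-init so training starts
                                            # exactly from the frozen W

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # Equivalent to (W + B @ A) @ x, keeping the cheap low-rank path explicit.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer reproduces the frozen one.
assert np.allclose(adapted_forward(x), W @ x)
print("trainable params:", A.size + B.size, "vs full:", W.size)  # 32 vs 64
```

Only A and B receive gradients during fine-tuning, which is why LoRA-style approaches can adapt a large encoder like SAM's ViT with a small fraction of its parameters.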
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.