SQA-SAM: Segmentation Quality Assessment for Medical Images Utilizing
the Segment Anything Model
- URL: http://arxiv.org/abs/2312.09899v1
- Date: Fri, 15 Dec 2023 15:49:53 GMT
- Authors: Yizhe Zhang, Shuo Wang, Tao Zhou, Qi Dou, and Danny Z. Chen
- Abstract summary: We propose a novel SQA method, called SQA-SAM, to enhance the accuracy of quality assessment for medical image segmentation.
When a medical image segmentation model (MedSeg) produces predictions for a test image, we generate visual prompts based on the predictions, and SAM is utilized to generate segmentation maps corresponding to the visual prompts.
The degree to which MedSeg's segmentation agrees with SAM's indicates how well it conforms to the general perception of objectness and image-region partition.
- Score: 35.569906173295834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation quality assessment (SQA) plays a critical role in the
deployment of medical-image-based AI systems. Users need to be informed or
alerted whenever an AI system generates unreliable or incorrect predictions.
With the introduction
of the Segment Anything Model (SAM), a general foundation segmentation model,
new research opportunities emerged in how one can utilize SAM for medical image
segmentation. In this paper, we propose a novel SQA method, called SQA-SAM,
which exploits SAM to enhance the accuracy of quality assessment for medical
image segmentation. When a medical image segmentation model (MedSeg) produces
predictions for a test image, we generate visual prompts based on the
predictions, and SAM is utilized to generate segmentation maps corresponding to
the visual prompts. The degree to which MedSeg's segmentation agrees with
SAM's indicates how well it conforms to the general perception of objectness
and image-region partition. We develop a score measure
for such alignment. In experiments, we find that the generated scores exhibit
moderate to strong positive correlation (in Pearson correlation and Spearman
correlation) with Dice coefficient scores reflecting the true segmentation
quality.
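The scoring idea above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' released code): it stands in a synthetic mask for SAM's output, derives a box prompt from MedSeg's prediction in the spirit of SQA-SAM's visual prompts, and uses the Dice overlap between the two masks as the alignment score. The names `box_prompt` and `alignment_score` are illustrative, not from the paper.

```python
# Hypothetical sketch of the SQA-SAM scoring idea (assumptions: SAM's output
# for the prompt derived from MedSeg's prediction is available as a binary
# mask; here a synthetic mask stands in for it).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def box_prompt(mask: np.ndarray):
    """Tight bounding box (x_min, y_min, x_max, y_max) around a binary mask --
    the kind of visual prompt SQA-SAM derives from MedSeg's prediction."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def alignment_score(medseg_mask: np.ndarray, sam_mask: np.ndarray) -> float:
    """Alignment between MedSeg's prediction and SAM's prompted segmentation,
    sketched here as their Dice overlap; serves as a quality proxy."""
    return dice(medseg_mask, sam_mask)

# Toy demo: a square ground truth, a slightly shifted MedSeg prediction,
# and a stand-in for SAM's output (here, the ground truth itself).
gt = np.zeros((64, 64), dtype=bool)
gt[16:48, 16:48] = True
pred = np.zeros_like(gt)
pred[18:50, 18:50] = True          # MedSeg's (imperfect) prediction
sam_out = gt                       # stand-in for SAM's mask

score = alignment_score(pred, sam_out)   # proxy quality score (no ground truth needed)
true_dice = dice(pred, gt)               # true quality (needs ground truth)
```

In the paper's experiments, such proxy scores are reported to correlate moderately to strongly (Pearson and Spearman) with the true Dice against ground truth across a test set; `true_dice` above is computed only to make that comparison concrete.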
Related papers
- Boosting Medical Image Classification with Segmentation Foundation Model [19.41887842350247]
The Segment Anything Model (SAM) exhibits impressive capabilities in zero-shot segmentation for natural images.
No studies have shown how to harness the power of SAM for medical image classification.
We introduce SAMAug-C, an innovative augmentation method based on SAM for augmenting classification datasets.
arXiv Detail & Related papers (2024-06-16T17:54:49Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
By extensively testing three diverse segmentation tasks and medical image modalities, our proposed framework has demonstrated excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy, updating only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder [101.28268762305916]
In this work, we replace the Segment Anything Model's prompt with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical images and video benchmarks.
To inspect the knowledge within it, and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network.
arXiv Detail & Related papers (2023-06-10T07:27:00Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Input Augmentation with SAM: Boosting Medical Image Segmentation with Segmentation Foundation Model [36.015065439244495]
The Segment Anything Model (SAM) is a recently developed large model for general-purpose segmentation for computer vision tasks.
SAM was trained using 11 million images with over 1 billion masks and can produce segmentation results for a wide range of objects in natural scene images.
This paper shows that although SAM does not immediately give high-quality segmentation for medical image data, its generated masks, features, and stability scores are useful for building and training better medical image segmentation models.
arXiv Detail & Related papers (2023-04-22T07:11:53Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)
- SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model [1.1221592576472588]
We evaluate the zero-shot capabilities of the Segment Anything Model for medical image segmentation.
We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools.
arXiv Detail & Related papers (2023-04-10T18:20:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.