SemiSAM: Exploring SAM for Enhancing Semi-Supervised Medical Image
Segmentation with Extremely Limited Annotations
- URL: http://arxiv.org/abs/2312.06316v1
- Date: Mon, 11 Dec 2023 12:03:30 GMT
- Authors: Yichi Zhang, Yuan Cheng, Yuan Qi
- Abstract summary: Semi-supervised learning has attracted much attention because it depends less on acquiring abundant annotations from experts.
We propose a strategy that explores the use of the Segment Anything Model (SAM) to enhance semi-supervised medical image segmentation.
- Score: 24.557401001299723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning has attracted much attention because it
depends less on acquiring abundant expert annotations than fully supervised
methods do. This is especially important for medical image segmentation, which
typically requires intensive pixel/voxel-wise labeling by domain experts.
Although semi-supervised methods can improve performance by utilizing unlabeled
data, a gap remains relative to fully supervised methods under extremely
limited annotation scenarios. In this paper, we propose a simple yet efficient
strategy that explores the use of the Segment Anything Model (SAM) to enhance
semi-supervised medical image segmentation. Concretely, a segmentation model
trained with domain knowledge provides localization information and generates
input prompts for SAM. The pseudo-labels produced by SAM are then used as
additional supervision to assist the learning procedure of the semi-supervised
framework. Experimental results demonstrate that SAM's assistance significantly
enhances the performance of existing semi-supervised frameworks, especially
when only one or a few labeled images are available.
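The prompt-generation step of the pipeline described above (a coarse prediction from the domain model is turned into point and box prompts for SAM) might look as follows. This is a minimal illustrative sketch, not the authors' code: `prompts_from_coarse_mask` is a hypothetical helper, and it assumes the coarse prediction is available as a binary NumPy mask.

```python
import numpy as np

def prompts_from_coarse_mask(mask: np.ndarray):
    """Derive SAM-style prompts from a coarse binary mask: a point prompt at
    the foreground centroid and a box prompt at the tight bounding box, both
    in (x, y) pixel coordinates."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None, None  # nothing localized; skip SAM for this image
    point = np.array([xs.mean(), ys.mean()])                   # centroid prompt
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])   # box prompt
    return point, box

# Toy coarse prediction: a 3x4 foreground block inside a 10x10 image.
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:7] = True
point, box = prompts_from_coarse_mask(mask)
print(point)  # [4.5 3. ]
print(box)    # [3 2 6 4]
```

The resulting point and box would be passed to SAM's prompt encoder, and SAM's output mask would serve as the pseudo-label for the unsupervised loss term.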
Related papers
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
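A test-time online update on an expert-rectified label can be sketched generically; the snippet below is an illustrative assumption (a logistic per-pixel model with one SGD step), not the paper's actual method, and `online_update` and the learning rate are hypothetical.

```python
import numpy as np

def online_update(w, x, y_rect, lr=0.1):
    """One SGD step of a logistic per-pixel model on a single
    expert-rectified label, applied at test time."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))   # current prediction for this pixel
    grad = (p - y_rect) * x              # gradient of binary cross-entropy
    return w - lr * grad

w = np.zeros(2)
w = online_update(w, np.array([1.0, 1.0]), y_rect=1.0)
print(w)  # [0.05 0.05]
```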
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation [45.15517909664628]
Polyp segmentation plays a vital role in accurately locating polyps at an early stage.
Pixel-wise annotation of polyp images by physicians during diagnosis is both time-consuming and expensive.
We propose a novel SAM-guided Collaborative Learning Network (SAM-CLNet) for scribble-supervised polyp segmentation.
arXiv Detail & Related papers (2023-12-01T03:07:13Z)
- Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images [27.14641973632063]
This paper focuses on using weak supervision -- annotation from related tasks -- to induce a segmenter.
Recent foundation models, such as Segment Anything (SAM), can use prompts to leverage additional supervision during inference.
All SAM-based solutions hugely outperform existing weakly supervised image segmentation models, obtaining 9-15 pt Dice gains.
arXiv Detail & Related papers (2023-11-29T11:18:48Z)
- Self-guided Few-shot Semantic Segmentation for Remote Sensing Imagery Based on Large Vision Models [14.292149307183967]
This research introduces a structured framework designed for the automation of few-shot semantic segmentation.
It utilizes the SAM model and facilitates a more efficient generation of semantically discernible segmentation outcomes.
Central to our methodology is a novel automatic prompt learning approach, leveraging prior guided masks to produce coarse pixel-wise prompts for SAM.
arXiv Detail & Related papers (2023-11-22T07:07:55Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach involves identifying Super-pixels by Felzenszwalb's algorithm and performing local contrastive learning using a novel contrastive sampling loss.
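The local contrastive step can be sketched with a generic InfoNCE-style loss over pooled region (super-pixel) features. This is the standard contrastive formulation for illustration only, not the paper's novel contrastive sampling loss; `region_infonce` and the temperature value are assumptions.

```python
import numpy as np

def region_infonce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE loss: pull the anchor region's feature toward its
    positive region and away from negative region features."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()               # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

a = np.array([1.0, 0.0])
# Low loss when the positive matches the anchor, high loss when swapped.
loss_good = region_infonce(a, np.array([1.0, 0.0]), [np.array([0.0, 1.0])])
loss_bad = region_infonce(a, np.array([0.0, 1.0]), [np.array([1.0, 0.0])])
```

In the paper's setting, the pooled features would come from super-pixels produced by Felzenszwalb's algorithm, with positives and negatives chosen by their sampling strategy.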
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Every Annotation Counts: Multi-label Deep Supervision for Medical Image Segmentation [85.0078917060652]
We propose a semi-weakly supervised segmentation algorithm to overcome this barrier.
Our approach is based on a new formulation of deep supervision and student-teacher model.
With our novel training regime for segmentation that flexibly makes use of images that are either fully labeled, marked with bounding boxes, just global labels, or not at all, we are able to cut the requirement for expensive labels by 94.22%.
arXiv Detail & Related papers (2021-04-27T14:51:19Z)
- Semi-supervised Active Learning for Instance Segmentation via Scoring Predictions [25.408505612498423]
We propose a novel and principled semi-supervised active learning framework for instance segmentation.
Specifically, we present an uncertainty sampling strategy named Triplet Scoring Predictions (TSP) to explicitly incorporate samples ranking clues from classes, bounding boxes and masks.
Results on medical image datasets demonstrate that the proposed method exploits the knowledge in the available data in a meaningful way.
arXiv Detail & Related papers (2020-12-09T02:36:52Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)
- A Teacher-Student Framework for Semi-supervised Medical Image Segmentation From Mixed Supervision [62.4773770041279]
We develop a semi-supervised learning framework based on a teacher-student fashion for organ and lesion segmentation.
We show our model is robust to the quality of bounding box and achieves comparable performance compared with full-supervised learning methods.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.