SAM Carries the Burden: A Semi-Supervised Approach Refining Pseudo Labels for Medical Segmentation
- URL: http://arxiv.org/abs/2411.12602v1
- Date: Tue, 19 Nov 2024 16:06:21 GMT
- Title: SAM Carries the Burden: A Semi-Supervised Approach Refining Pseudo Labels for Medical Segmentation
- Authors: Ron Keuth, Lasse Hansen, Maren Balks, Ronja Jäger, Anne-Nele Schröder, Ludger Tüshaus, Mattias Heinrich
- Abstract summary: We leverage Segment Anything Model's abstract object understanding for medical image segmentation to provide pseudo labels for semi-supervised learning.
Our approach refines initial segmentations that are derived from a limited amount of annotated data.
Our method outperforms intensity-based post-processing methods.
- Score: 1.342749532731493
- Abstract: Semantic segmentation is a crucial task in medical imaging. Although supervised learning techniques have proven to be effective in performing this task, they heavily depend on large amounts of annotated training data. The recently introduced Segment Anything Model (SAM) enables prompt-based segmentation and offers zero-shot generalization to unfamiliar objects. In our work, we leverage SAM's abstract object understanding for medical image segmentation to provide pseudo labels for semi-supervised learning, thereby mitigating the need for extensive annotated training data. Our approach refines initial segmentations that are derived from a limited amount of annotated data (comprising up to 43 cases) by extracting bounding boxes and seed points as prompts forwarded to SAM. Thus, it enables the generation of dense segmentation masks as pseudo labels for unlabelled data. The results show that training with our pseudo labels yields an improvement in Dice score from $74.29\,\%$ to $84.17\,\%$ and from $66.63\,\%$ to $74.87\,\%$ for the segmentation of bones of the paediatric wrist and teeth in dental radiographs, respectively. As a result, our method outperforms intensity-based post-processing methods, state-of-the-art supervised learning for segmentation (nnU-Net), and the semi-supervised mean teacher approach. Our Code is available on GitHub.
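The prompt-extraction step described in the abstract lends itself to a short sketch. The following is a minimal, hypothetical example using the publicly available `segment_anything` package; the checkpoint path, the helper names, and the choice of the mask centroid as seed point are illustrative assumptions, and the paper's exact prompt strategy may differ. It shows how a bounding box and a seed point could be derived from an initial coarse segmentation and forwarded to SAM to obtain a dense pseudo label:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def mask_to_prompts(coarse_mask: np.ndarray):
    """Derive a bounding box and a seed point from a coarse binary mask."""
    ys, xs = np.nonzero(coarse_mask)
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])  # XYXY box around the mask
    seed = np.array([[xs.mean(), ys.mean()]])                 # mask centroid as a positive point
    return box, seed

# Hypothetical checkpoint path; any official SAM checkpoint works here.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

def refine_pseudo_label(image_rgb: np.ndarray, coarse_mask: np.ndarray) -> np.ndarray:
    """Prompt SAM with a box and seed point extracted from the initial segmentation."""
    box, seed = mask_to_prompts(coarse_mask)
    predictor.set_image(image_rgb)                # HxWx3 uint8 RGB image
    masks, _, _ = predictor.predict(
        point_coords=seed,
        point_labels=np.ones(len(seed)),          # 1 marks a foreground point
        box=box,
        multimask_output=False,
    )
    return masks[0]                               # dense mask used as pseudo label
```

The resulting masks would then replace the initial predictions as pseudo labels when training the segmentation network on the unlabelled cases; for multi-structure targets such as individual wrist bones or teeth, the same procedure would be applied per class or per connected component.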
Related papers
- SP$^3$: Superpixel-propagated pseudo-label learning for weakly semi-supervised medical image segmentation [10.127428696255848]
A SuperPixel-Propagated Pseudo-label learning method is proposed to handle the challenge of inadequate supervisory information in weakly semi-supervised segmentation.
Our method achieves state-of-the-art performance on both tumor and organ segmentation datasets under the WSSS setting.
arXiv Detail & Related papers (2024-11-18T15:14:36Z) - Medical Image Segmentation with SAM-generated Annotations [12.432602118806573]
We evaluate the performance of the Segment Anything Model (SAM) as an annotation tool for medical data.
We generate so-called "pseudo labels" on the Medical Segmentation Decathlon (MSD) computed tomography (CT) tasks.
The pseudo labels are then used in place of ground truth labels to train a UNet model in a weakly-supervised manner.
arXiv Detail & Related papers (2024-09-30T12:43:20Z) - Guidelines for Cerebrovascular Segmentation: Managing Imperfect Annotations in the context of Semi-Supervised Learning [3.231698506153459]
Supervised learning methods achieve excellent performances when fed with a sufficient amount of labeled data.
Such labels are typically highly time-consuming, error-prone and expensive to produce.
Semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled.
arXiv Detail & Related papers (2024-04-02T09:31:06Z) - Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image Segmentation [17.69933345468061]
Annotation scarcity has become a major obstacle for training powerful deep-learning models for medical image segmentation.
We introduce a Versatile Semi-supervised framework to exploit more unlabeled data for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-11-20T11:35:52Z) - Pseudo Label-Guided Data Fusion and Output Consistency for Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2023-11-17T06:36:43Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z) - Every Annotation Counts: Multi-label Deep Supervision for Medical Image Segmentation [85.0078917060652]
We propose a semi-weakly supervised segmentation algorithm to overcome this barrier.
Our approach is based on a new formulation of deep supervision and student-teacher model.
With our novel training regime for segmentation, which flexibly makes use of images that are fully labeled, annotated only with bounding boxes, given only global labels, or not labeled at all, we are able to cut the requirement for expensive labels by 94.22%.
arXiv Detail & Related papers (2021-04-27T14:51:19Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
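Several of the entries above, as well as the baseline comparison in the abstract, refer to the mean teacher approach to semi-supervised learning. As a point of reference, here is a minimal sketch of the exponential-moving-average teacher update at the core of that approach; the function and variable names are illustrative and not taken from any of the cited papers.

```python
import torch

def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module, alpha: float = 0.99) -> None:
    """EMA update: teacher <- alpha * teacher + (1 - alpha) * student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

# Typical usage: the teacher starts as a copy of the student and is updated after
# every optimiser step; labelled images contribute a supervised loss, unlabelled
# images a consistency loss between student and teacher predictions.
#
#   teacher = copy.deepcopy(student)     # from the standard-library copy module
#   for images, labels in loader:
#       ...                              # supervised + consistency losses, student step
#       update_teacher(student, teacher)
```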