Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning
- URL: http://arxiv.org/abs/2308.16466v3
- Date: Fri, 3 Nov 2023 04:47:40 GMT
- Title: Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning
- Authors: Yiming Zhang, Tianang Leng, Kun Han, Xiaohui Xie
- Abstract summary: We present a Self-Sampling Meta SAM framework for few-shot medical image segmentation.
The proposed method achieves significant improvements over state-of-the-art methods in few-shot segmentation.
In conclusion, we present a novel approach for rapid online adaptation in interactive image segmentation, adapting to a new organ in just 0.83 minutes.
- Score: 17.386754270460273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the Segment Anything Model (SAM) excels in semantic segmentation for
general-purpose images, its performance significantly deteriorates when applied
to medical images, primarily attributable to insufficient representation of
medical images in its training dataset. Nonetheless, gathering comprehensive
datasets and training models that are universally applicable is particularly
challenging due to the long-tail problem common in medical images. To address
this gap, here we present a Self-Sampling Meta SAM (SSM-SAM) framework for
few-shot medical image segmentation. Our innovation lies in the design of three
key modules: 1) an online fast gradient descent optimizer, further optimized
by a meta-learner, which ensures swift and robust adaptation to new tasks;
2) a Self-Sampling module designed to provide well-aligned visual prompts for
improved attention allocation; and 3) a robust attention-based decoder
specifically designed for medical few-shot learning to capture relationships
between different slices. Extensive experiments on a popular abdominal CT
dataset and an MRI dataset demonstrate that the proposed method achieves
significant improvements over state-of-the-art methods in few-shot
segmentation, with average improvements of 10.21% and 1.80% in DSC,
respectively. In conclusion, we present a novel approach for rapid online
adaptation in interactive image segmentation, adapting to a new organ in just
0.83 minutes. Code will be made publicly available on GitHub upon acceptance.
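A minimal sketch of module 1, the meta-learned online fast adaptation, assuming a MAML-style setup on top of frozen encoder features; the toy decoder, episode format, and hyperparameters are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a mask decoder operating on (N, 256, H, W) features.
decoder = nn.Sequential(
    nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 1, 1))
meta_opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)
names = [n for n, _ in decoder.named_parameters()]

def inner_adapt(support_feats, support_masks, inner_lr=1e-2, steps=3):
    """Online fast gradient descent on a few support slices. Updates are
    kept functional so meta-gradients can flow through them."""
    params = [p.clone() for p in decoder.parameters()]
    for _ in range(steps):
        logits = torch.func.functional_call(
            decoder, dict(zip(names, params)), (support_feats,))
        loss = F.binary_cross_entropy_with_logits(logits, support_masks)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params

def meta_step(task):
    """One meta-training step: adapt on the support set, then update the
    initialization from the query loss."""
    sup_x, sup_y, qry_x, qry_y = task           # one few-shot episode
    adapted = inner_adapt(sup_x, sup_y)
    qry_logits = torch.func.functional_call(
        decoder, dict(zip(names, adapted)), (qry_x,))
    meta_loss = F.binary_cross_entropy_with_logits(qry_logits, qry_y)
    meta_opt.zero_grad()
    meta_loss.backward()                        # grads flow through inner steps
    meta_opt.step()
    return meta_loss.item()
```

Because `create_graph=True` keeps the inner updates differentiable, the outer optimizer learns an initialization (and, in the paper, a meta-learned optimizer) from which a handful of gradient steps suffice, which is what makes sub-minute adaptation to a new organ plausible.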
Related papers
- MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans.
Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss.
We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
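The summary names a Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss without giving its formula; as a rough illustration only, here is a generic image-text InfoNCE loss with hard negatives up-weighted by a softmax, where `beta` and the weighting scheme are assumptions rather than the paper's definition.

```python
import torch
import torch.nn.functional as F

def weighted_infonce(img_emb, txt_emb, tau=0.07, beta=1.0):
    """Image-to-text InfoNCE (batch size > 1) with harder negatives
    receiving larger weights in the denominator."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    sim = img @ txt.t() / tau                        # (B, B) similarities
    pos = sim.diag()                                 # matched image-text pairs
    off_diag = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(~off_diag, float('-inf')) # keep negatives only
    w = torch.softmax(beta * neg, dim=1).detach()   # harder negatives -> larger w
    denom = pos.exp() + (sim.size(0) - 1) * (w * neg.exp()).sum(dim=1)
    return (denom.log() - pos).mean()
```

As `beta` approaches 0 the weights become uniform and the loss reduces to standard InfoNCE, so the parameter only controls how strongly hard negatives are emphasized.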
arXiv Detail & Related papers (2024-09-28T23:10:37Z)
- Retrieval-augmented Few-shot Medical Image Segmentation with Foundation Models [17.461510586128874]
We propose a novel method that adapts DINOv2 and Segment Anything Model 2 for retrieval-augmented few-shot medical image segmentation.
Our approach uses DINOv2 features as queries to retrieve similar samples from the limited annotated data, which are then encoded as memories and stored in a memory bank.
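A minimal sketch of the retrieval step described above, assuming cosine similarity over pooled DINOv2 embeddings; the `MemoryBank` class and its method names are illustrative, not the paper's API.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Stores (embedding, annotated sample) pairs; retrieves by similarity."""
    def __init__(self):
        self.keys, self.values = [], []

    def add(self, emb, sample):
        # `emb`: pooled DINOv2 feature; `sample`: (image, mask) pair.
        self.keys.append(F.normalize(emb, dim=-1))
        self.values.append(sample)

    def retrieve(self, query_emb, k=3):
        keys = torch.stack(self.keys)            # (N, D), requires N >= 1
        q = F.normalize(query_emb, dim=-1)
        scores = keys @ q                        # cosine similarity to query
        top = scores.topk(min(k, len(self.values))).indices
        return [self.values[i] for i in top.tolist()]
```

The retrieved annotated samples would then be encoded as memories for SAM 2's memory attention, per the summary above.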
arXiv Detail & Related papers (2024-08-16T15:48:07Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
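A sketch of the online-learning idea above: after an expert rectifies a prediction, take a few gradient steps on that corrected pair while leaving the SAM backbone frozen. The auxiliary head and hyperparameters are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

aux_head = nn.Conv2d(256, 1, kernel_size=1)   # small head on frozen features
opt = torch.optim.SGD(aux_head.parameters(), lr=1e-3)

def online_update(feats, rectified_mask, steps=5):
    """feats: (1, 256, H, W) frozen encoder features;
    rectified_mask: (1, 1, H, W) expert-corrected mask in {0, 1}."""
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(aux_head(feats),
                                                  rectified_mask.float())
        opt.zero_grad()
        loss.backward()
        opt.step()
```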
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
By extensively testing three diverse segmentation tasks and medical image modalities, our proposed framework has demonstrated excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z)
- SM2C: Boost the Semi-supervised Segmentation for Medical Image by using Meta Pseudo Labels and Mixed Images [13.971120210536995]
We introduce Scaling-up Mix with Multi-Class (SM2C) to improve the ability to learn semantic features within medical images.
By diversifying the shapes of segmentation objects and enriching the semantic information within each sample, SM2C improves the model's ability to learn semantic features.
The proposed framework shows significant improvements over state-of-the-art counterparts.
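The exact SM2C procedure is not specified in the summary; as a generic illustration of multi-object mixing, the sketch below pastes the foreground objects of several donor samples into one image/label pair (the scaling-up component suggested by the name is omitted here).

```python
import numpy as np

def paste_mix(base_img, base_lbl, donors):
    """donors: list of (image, label) arrays; labels are integer class maps.
    Returns a mixed sample containing objects from every donor."""
    img, lbl = base_img.copy(), base_lbl.copy()
    for d_img, d_lbl in donors:
        mask = d_lbl > 0                  # donor's foreground objects
        img[mask] = d_img[mask]           # paste the pixels...
        lbl[mask] = d_lbl[mask]           # ...and the matching class labels
    return img, lbl
```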
arXiv Detail & Related papers (2024-03-24T04:39:40Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- OneSeg: Self-learning and One-shot Learning based Single-slice Annotation for 3D Medical Image Segmentation [36.50258132379276]
We propose a self-learning and one-shot learning based framework for 3D medical image segmentation by annotating only one slice of each 3D image.
Our approach takes two steps: (1) self-learning of a reconstruction network to learn semantic correspondence among 2D slices within 3D images, and (2) representative selection of single slices for one-shot manual annotation.
Our new framework achieves comparable performance with less than 1% annotated data compared with fully supervised methods.
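A minimal sketch of step (2), representative selection, under the assumption that the self-learned correspondence network can be abstracted as an `embed` function: pick the slice whose embedding lies closest to the volume's mean.

```python
import torch

def pick_representative_slice(slices, embed):
    """slices: list of 2D slice tensors; embed: slice -> feature tensor."""
    feats = torch.stack([embed(s).flatten() for s in slices])  # (S, D)
    center = feats.mean(dim=0, keepdim=True)                   # volume centroid
    dists = (feats - center).norm(dim=1)                       # distance per slice
    return int(dists.argmin())            # index of the slice to annotate
```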
arXiv Detail & Related papers (2023-09-24T15:35:58Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method roots in the parameter-efficient fine-tuning strategy to update only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
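A sketch of a 3D adapter in the spirit of the description above: a small bottleneck with a depth-wise 3D convolution, added residually after a frozen 2D transformer block so the backbone can mix information across slices. All dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)       # project tokens down
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, kernel_size=3,
                                padding=1, groups=bottleneck)  # mixes D, H, W
        self.up = nn.Linear(bottleneck, dim)         # project back up

    def forward(self, x):                 # x: (B, D, H, W, C) token grid
        h = self.down(x)
        h = self.conv3d(h.permute(0, 4, 1, 2, 3)).permute(0, 2, 3, 4, 1)
        return x + self.up(torch.relu(h)) # residual keeps pre-trained behavior
```

Only the adapter parameters would be trained, matching the parameter-efficient fine-tuning strategy the summary describes.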
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
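A sketch of a common prototype-based formulation consistent with the description above, assuming masked average pooling of support features; the temperature and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def prototype_segment(sup_feat, sup_mask, qry_feat, tau=20.0):
    """sup_feat, qry_feat: (C, H, W) embeddings; sup_mask: (H, W) in {0, 1}."""
    fg = (sup_feat * sup_mask).sum((1, 2)) / sup_mask.sum().clamp(min=1)
    bg = (sup_feat * (1 - sup_mask)).sum((1, 2)) / (1 - sup_mask).sum().clamp(min=1)
    protos = F.normalize(torch.stack([bg, fg]), dim=1)   # (2, C) class prototypes
    q = F.normalize(qry_feat, dim=0).flatten(1)          # (C, H*W) query pixels
    logits = tau * protos @ q                            # similarity per pixel
    return logits.argmax(0).view(qry_feat.shape[1:])     # (H, W) hard labels
```

Tighter clustering of same-class features, which the entry says the method encourages, directly sharpens these prototype similarities.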
arXiv Detail & Related papers (2020-12-10T04:01:07Z)