CEmb-SAM: Segment Anything Model with Condition Embedding for Joint
Learning from Heterogeneous Datasets
- URL: http://arxiv.org/abs/2308.06957v1
- Date: Mon, 14 Aug 2023 06:22:49 GMT
- Title: CEmb-SAM: Segment Anything Model with Condition Embedding for Joint
Learning from Heterogeneous Datasets
- Authors: Dongik Shin, Beomsuk Kim and Seungjun Baek
- Abstract summary: We consider the problem of jointly learning from heterogeneous datasets.
We merge the heterogeneous datasets into one dataset and refer to each component dataset as a subgroup.
Experiments show that CEmb-SAM outperforms the baseline methods on ultrasound image segmentation for peripheral nerves and breast cancer.
- Score: 3.894987097246834
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated segmentation of ultrasound images can assist medical experts with
diagnostic and therapeutic procedures. Although using the common modality of
ultrasound, one typically needs separate datasets in order to segment, for
example, different anatomical structures or lesions with different levels of
malignancy. In this paper, we consider the problem of jointly learning from
heterogeneous datasets so that the model can improve generalization abilities
by leveraging the inherent variability among datasets. We merge the
heterogeneous datasets into one dataset and refer to each component dataset as
a subgroup. We propose to train a single segmentation model so that the model
can adapt to each sub-group. For robust segmentation, we leverage the recently
proposed Segment Anything Model (SAM) to incorporate sub-group
information into the model. We propose SAM with Condition Embedding block
(CEmb-SAM) which encodes sub-group conditions and combines them with image
embeddings from SAM. The Condition Embedding block effectively adapts SAM to
each image sub-group by incorporating dataset properties through learnable
parameters for normalization. Experiments show that CEmb-SAM outperforms the
baseline methods on ultrasound image segmentation for peripheral nerves and
breast cancer. The experiments highlight the effectiveness of CEmb-SAM in
learning from heterogeneous datasets in medical image segmentation tasks.
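The abstract describes the core mechanism but no code. As a rough illustration only, the minimal PyTorch sketch below shows how a condition-embedding block might combine a learnable per-subgroup condition with SAM image embeddings through normalization parameters; all names and the exact wiring are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConditionEmbeddingBlock(nn.Module):
    """Illustrative sketch (hypothetical, not the released CEmb-SAM code):
    modulate SAM image embeddings with per-subgroup scale/shift parameters,
    i.e. conditional normalization driven by the source dataset."""

    def __init__(self, num_subgroups: int, embed_dim: int = 256):
        super().__init__()
        # One learnable condition vector per component dataset (subgroup).
        self.condition = nn.Embedding(num_subgroups, embed_dim)
        # Map the condition vector to per-channel scale (gamma) and shift (beta).
        self.to_scale_shift = nn.Linear(embed_dim, 2 * embed_dim)
        self.norm = nn.GroupNorm(num_groups=1, num_channels=embed_dim, affine=False)

    def forward(self, image_embedding: torch.Tensor, subgroup_id: torch.Tensor):
        # image_embedding: (B, C, H, W) from the SAM image encoder
        # subgroup_id: (B,) integer index of the source dataset
        cond = self.condition(subgroup_id)                    # (B, C)
        gamma, beta = self.to_scale_shift(cond).chunk(2, dim=-1)
        x = self.norm(image_embedding)
        # Broadcast the per-subgroup affine parameters over spatial dims.
        return x * (1 + gamma[:, :, None, None]) + beta[:, :, None, None]
```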
Related papers
- Toward Generalizable Multiple Sclerosis Lesion Segmentation Models [0.0]
This study aims to develop models that generalize across diverse evaluation datasets.
We used all high-quality publicly available MS lesion segmentation datasets, on which we systematically trained a state-of-the-art UNet++ architecture.
arXiv Detail & Related papers (2024-10-25T15:21:54Z)
- MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans.
Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss.
We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
arXiv Detail & Related papers (2024-09-28T23:10:37Z)
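The exact DHN-NCE formulation is defined in the MedCLIP-SAMv2 paper; the sketch below only illustrates the two ingredients its name suggests, a decoupled denominator (the positive pair is excluded) and hard-negative up-weighting. The function name and the tau/beta hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def hard_negative_nce(img_emb, txt_emb, tau=0.07, beta=1.0):
    """Illustrative contrastive loss, NOT the paper's exact DHN-NCE:
    positives are dropped from the denominator (decoupling) and
    high-similarity negatives are up-weighted (hard negatives)."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    sim = img @ txt.t() / tau                      # (B, B) similarity logits
    B = sim.size(0)
    pos = sim.diag()                               # matched image-text pairs
    mask = ~torch.eye(B, dtype=torch.bool, device=sim.device)
    neg = sim.masked_select(mask).view(B, B - 1)   # negatives only (decoupled)
    # Hard-negative weighting: weights grow with the negative's similarity.
    w = (beta * neg).softmax(dim=-1) * (B - 1)
    denom = (w * neg.exp()).sum(dim=-1)
    return (-pos + denom.log()).mean()
```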
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study, we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
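For reference, the Dice similarity coefficient used as the evaluation metric above is the standard overlap measure DSC = 2|A ∩ B| / (|A| + |B|); a minimal implementation:

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice similarity coefficient between two binary masks.
    The small eps keeps the ratio defined when both masks are empty."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)
```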
- MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentations of clinical scans.
Extensive testing across three diverse segmentation tasks and medical imaging modalities demonstrates the excellent accuracy of the proposed framework.
arXiv Detail & Related papers (2024-03-29T15:59:11Z)
- Uncertainty-Aware Adapter: Adapting Segment Anything Model (SAM) for Ambiguous Medical Image Segmentation [20.557472889654758]
The Segment Anything Model (SAM) gained significant success in natural image segmentation.
Unlike natural images, many tissues and lesions in medical images have blurry boundaries and may be ambiguous.
We propose a novel module called the Uncertainty-aware Adapter, which efficiently fine-tunes SAM for uncertainty-aware medical image segmentation.
arXiv Detail & Related papers (2024-03-16T14:11:54Z)
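The Uncertainty-aware Adapter's specific design is given in that paper; the sketch below shows only the generic residual bottleneck-adapter pattern on which such parameter-efficient SAM fine-tuning typically builds, with the uncertainty conditioning omitted.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic adapter sketch (not the paper's exact module): a small
    residual bottleneck MLP inserted into an otherwise frozen transformer
    block, so only the adapter weights are updated during fine-tuning."""

    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        hidden = dim // reduction
        self.down = nn.Linear(dim, hidden)
        self.act = nn.GELU()
        self.up = nn.Linear(hidden, dim)
        nn.init.zeros_(self.up.weight)   # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))
```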
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- UniCell: Universal Cell Nucleus Classification via Prompt Learning [76.11864242047074]
We propose a universal cell nucleus classification framework (UniCell).
It employs a novel prompt learning mechanism to uniformly predict the corresponding categories of pathological images from different dataset domains.
In particular, our framework adopts an end-to-end architecture for nuclei detection and classification, and utilizes flexible prediction heads for adapting various datasets.
arXiv Detail & Related papers (2024-02-20T11:50:27Z)
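UniCell's prompt mechanism is specified in that paper; as a generic illustration of dataset-conditioned prompt learning, here is a sketch in which each dataset domain owns a few learnable tokens prepended to the backbone features (names and sizes hypothetical):

```python
import torch
import torch.nn as nn

class DatasetPromptPool(nn.Module):
    """Sketch of dataset-conditioned prompt learning (pattern only, not
    UniCell's exact architecture): each dataset domain owns a handful of
    learnable prompt tokens that are prepended to the image tokens."""

    def __init__(self, num_datasets: int, num_prompts: int = 4, dim: int = 256):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_datasets, num_prompts, dim) * 0.02)

    def forward(self, tokens: torch.Tensor, dataset_id: int) -> torch.Tensor:
        # tokens: (B, N, dim) patch/feature tokens from the backbone
        B = tokens.size(0)
        p = self.prompts[dataset_id].unsqueeze(0).expand(B, -1, -1)
        return torch.cat([p, tokens], dim=1)   # (B, num_prompts + N, dim)
```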
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
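A minimal sketch of the 3D-adapter idea described above: slice-wise tokens from the 2D encoder are folded into a volume so a 3D convolution can mix information along the depth axis. The shapes and module layout here are assumptions and differ from MA-SAM's actual design.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    """Sketch of a 3D adapter (pattern only, not MA-SAM's exact module)."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.conv3d = nn.Conv3d(hidden, hidden, kernel_size=3, padding=1)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor, depth: int) -> torch.Tensor:
        # x: (B*depth, H, W, dim) slice-wise tokens from the 2D backbone
        bd, h, w, _ = x.shape
        y = self.down(x).view(bd // depth, depth, h, w, -1)
        y = y.permute(0, 4, 1, 2, 3)            # (B, hidden, D, H, W)
        y = self.conv3d(y)                       # mix along the depth axis
        y = y.permute(0, 2, 3, 4, 1).reshape(bd, h, w, -1)
        return x + self.up(y)                    # residual, identity-friendly
```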
- Semantic-SAM: Segment and Recognize Anything at Any Granularity [83.64686655044765]
We introduce Semantic-SAM, a universal image segmentation model that can segment and recognize anything at any desired granularity.
We consolidate multiple datasets across three granularities and introduce decoupled classification for objects and parts.
For the multi-granularity capability, we propose a multi-choice learning scheme during training, enabling each click to generate masks at multiple levels.
arXiv Detail & Related papers (2023-07-10T17:59:40Z)
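Semantic-SAM's training relies on a matching scheme detailed in that paper; the sketch below illustrates only the simplest form of multi-choice supervision, where each click yields K candidate masks and only the best-matching level is penalized, leaving the other heads free to capture other granularities.

```python
import torch

def multi_choice_loss(pred_masks: torch.Tensor, gt_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative multi-choice objective (the idea, not Semantic-SAM's
    exact many-to-many matching): pred_masks is (B, K, H, W) logits, one
    mask per granularity level; gt_mask is a (B, H, W) binary target."""
    gt = gt_mask.unsqueeze(1).float().expand_as(pred_masks)
    per_level = torch.nn.functional.binary_cross_entropy_with_logits(
        pred_masks, gt, reduction="none").mean(dim=(2, 3))   # (B, K)
    # Penalize only the level that best explains the annotated mask.
    return per_level.min(dim=1).values.mean()
```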
- SAM: Self-supervised Learning of Pixel-wise Anatomical Embeddings in Radiological Images [23.582516309813425]
We introduce Self-supervised Anatomical eMbedding (SAM) to learn the intrinsic structure from unlabeled images.
SAM generates semantic embeddings for each image pixel that describe its anatomical location or body part.
We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities.
arXiv Detail & Related papers (2020-12-04T03:31:20Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model while updating the labels on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
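A structural sketch of the alternating scheme described in the ATSO entry above; `fine_tune` and `pseudo_label` are hypothetical helpers standing in for the actual training and labeling steps, and the datasets are plain lists for simplicity.

```python
def atso_round(model, labeled, unlabeled_a, unlabeled_b,
               fine_tune, pseudo_label, rounds=4):
    """Alternate between the two unlabeled subsets: fine-tune on one
    (with its current pseudo-labels) while refreshing the pseudo-labels
    on the other, then swap roles in the next round."""
    for r in range(rounds):
        train_set, relabel_set = (
            (unlabeled_a, unlabeled_b) if r % 2 == 0 else (unlabeled_b, unlabeled_a)
        )
        fine_tune(model, labeled + train_set)   # supervised + pseudo-labeled data
        pseudo_label(model, relabel_set)        # asynchronous label update
    return model
```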