Learnable Ophthalmology SAM
- URL: http://arxiv.org/abs/2304.13425v1
- Date: Wed, 26 Apr 2023 10:14:03 GMT
- Title: Learnable Ophthalmology SAM
- Authors: Zhongxi Qiu and Yan Hu and Heng Li and Jiang Liu
- Abstract summary: We propose a learnable prompt layer suitable for multi-target segmentation in multi-modal ophthalmology images.
The learnable prompt layer learns medical prior knowledge from each transformer layer.
We demonstrate the effectiveness of this approach on four medical segmentation tasks across nine publicly available datasets.
- Score: 7.179656139331778
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Segmentation is vital for ophthalmology image analysis, but the variety of
imaging modalities hinders the application of most existing segmentation algorithms,
which either rely on training with large numbers of labels or generalize poorly.
Building on Segment Anything (SAM), we propose a simple but effective learnable
prompt layer suitable for multi-target segmentation in multi-modal ophthalmology
images, named Learnable Ophthalmology SAM. The learnable prompt layer learns
medical prior knowledge from each transformer layer. During training, we train only
the prompt layer and the task head, using a one-shot mechanism. We demonstrate the
effectiveness of this approach on four medical segmentation tasks across nine
publicly available datasets. Moreover, this work offers a new perspective on
applying existing foundational computer-vision models in the medical field. Our
code is available at https://github.com/Qsingle/LearnablePromptSAM.
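To make the mechanism concrete, below is a minimal PyTorch sketch of the general idea: a small trainable prompt layer is attached after each frozen transformer block, and only these prompt layers plus the task head receive gradients. The module names, adapter design, and sizes here are illustrative assumptions, not the authors' implementation; see the repository above for the actual code.

```python
import torch
import torch.nn as nn

class PromptLayer(nn.Module):
    """Small trainable layer that refines the output of one frozen block."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        # Bottleneck MLP adapter (an assumed design, not the paper's exact layer).
        self.adapter = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pretrained features intact.
        return x + self.adapter(x)

class PromptedEncoder(nn.Module):
    """Wraps a stack of pretrained transformer blocks with prompt layers."""
    def __init__(self, blocks: nn.ModuleList, dim: int):
        super().__init__()
        self.blocks = blocks
        self.prompts = nn.ModuleList([PromptLayer(dim) for _ in blocks])
        for p in self.blocks.parameters():
            p.requires_grad = False  # freeze the pretrained backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block, prompt in zip(self.blocks, self.prompts):
            x = prompt(block(x))  # a prompt layer learns from every block
        return x

# Toy usage with stand-in blocks; a real setup would load SAM's ViT encoder.
dim = 256
blocks = nn.ModuleList(
    [nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True) for _ in range(4)]
)
encoder = PromptedEncoder(blocks, dim)
head = nn.Linear(dim, 2)  # stand-in task head producing per-token class logits

# Only the prompt layers and the task head are optimized.
trainable = [p for p in encoder.parameters() if p.requires_grad]
trainable += list(head.parameters())
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

tokens = torch.randn(1, 196, dim)  # one "shot": a single labeled example
logits = head(encoder(tokens))
```

Training only these components keeps the trainable parameter count small relative to the frozen backbone, which is what makes the one-shot training regime described in the abstract plausible.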
Related papers
- Organ-aware Multi-scale Medical Image Segmentation Using Text Prompt Engineering [17.273290949721975]
Existing medical image segmentation methods rely on uni-modal visual inputs, such as images or videos, requiring labor-intensive manual annotations.
Medical imaging techniques capture multiple intertwined organs within a single scan, further complicating accurate segmentation.
To address these challenges, MedSAM was developed to enhance segmentation accuracy by integrating image features with user-provided prompts.
arXiv Detail & Related papers (2025-03-18T01:35:34Z) - Dynamically evolving segment anything model with continuous learning for medical image segmentation [50.92344083895528]
We introduce EvoSAM, a dynamically evolving medical image segmentation model.
EvoSAM continuously accumulates new knowledge from an ever-expanding array of scenarios and tasks.
Experiments conducted by surgical clinicians on blood vessel segmentation confirm that EvoSAM enhances segmentation efficiency based on user prompts.
arXiv Detail & Related papers (2025-03-08T14:37:52Z) - Learnable Prompting SAM-induced Knowledge Distillation for Semi-supervised Medical Image Segmentation [47.789013598970925]
We propose a learnable prompting SAM-induced Knowledge distillation framework (KnowSAM) for semi-supervised medical image segmentation.
Our model outperforms the state-of-the-art semi-supervised segmentation approaches.
arXiv Detail & Related papers (2024-12-18T11:19:23Z) - MCP-MedSAM: A Powerful Lightweight Medical Segment Anything Model Trained with a Single GPU in Just One Day [0.6827423171182151]
Medical image segmentation involves partitioning medical images into meaningful regions, with a focus on identifying anatomical structures and lesions.
The Segment Anything Model (SAM) has prompted researchers to adapt it for the medical domain to improve performance across various tasks.
We propose MCP-MedSAM, a powerful and lightweight medical SAM model designed to be trainable on a single A100 GPU with 40GB of memory within one day.
arXiv Detail & Related papers (2024-12-08T10:50:59Z) - MOSMOS: Multi-organ segmentation facilitated by medical report supervision [10.396987980136602]
We propose a novel pre-training & fine-tuning framework for multi-organ segmentation with medical report supervision (MOSMOS).
Specifically, we first introduce global contrastive learning to align medical image-report pairs in the pre-training stage.
To remedy the discrepancy, we further leverage multi-label recognition to implicitly learn the semantic correspondence between image pixels and organ tags.
arXiv Detail & Related papers (2024-09-04T03:46:17Z) - Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv Detail & Related papers (2024-06-03T03:16:25Z) - Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero shot Medical Image Segmentation [10.444726122035133]
We propose a simple unified framework, SaLIP, for organ segmentation.
SAM is used for part-based segmentation within the image, followed by CLIP to retrieve the mask corresponding to the region of interest.
Finally, SAM is prompted by the retrieved ROI to segment a specific organ.
arXiv Detail & Related papers (2024-04-09T14:56:34Z) - Multi-Prompt Fine-Tuning of Foundation Models for Enhanced Medical Image Segmentation [10.946806607643689]
The Segment Anything Model (SAM) is a powerful foundation model that introduced revolutionary advancements in natural image segmentation.
In this study, we introduce a novel fine-tuning framework that leverages SAM's ability to bundle and process multiple prompts per image.
arXiv Detail & Related papers (2023-10-03T19:05:00Z) - MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z) - SamDSK: Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation [27.044797468878837]
The Segment Anything Model (SAM) exhibits a capability to segment a wide array of objects in natural images.
We propose a novel method that combines the SAM with domain-specific knowledge for reliable utilization of unlabeled images.
Our work initiates a new direction of semi-supervised learning for medical image segmentation.
arXiv Detail & Related papers (2023-08-26T04:46:10Z) - Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z) - UniverSeg: Universal Medical Image Segmentation [16.19510845046103]
We present UniverSeg, a method for solving unseen medical segmentation tasks without additional training.
We have gathered and standardized a collection of 53 open-access medical segmentation datasets with over 22,000 scans.
We demonstrate that UniverSeg substantially outperforms several related methods on unseen tasks.
arXiv Detail & Related papers (2023-04-12T19:36:46Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training [62.215025958347105]
We propose a self-supervised learning paradigm with multi-modal masked autoencoders.
We learn cross-modal domain knowledge by reconstructing missing pixels and tokens from randomly masked images and texts.
arXiv Detail & Related papers (2022-09-15T07:26:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.