SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT
- URL: http://arxiv.org/abs/2308.09331v2
- Date: Thu, 31 Aug 2023 07:45:59 GMT
- Title: SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT
- Authors: Botond Fazekas, José Morano, Dmitrii Lachinov, Guilherme Aresta,
Hrvoje Bogunović
- Abstract summary: The Segment Anything Model (SAM) has gained significant attention in the field of image segmentation.
We conduct a comprehensive evaluation of SAM and its adaptations on a large-scale public dataset of OCTs from the RETOUCH challenge.
We showcase the adapted SAM's efficacy as a powerful segmentation model in retinal OCT scans, although it still lags behind established methods in some circumstances.
- Score: 3.2495192768429924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Segment Anything Model (SAM) has gained significant attention in the
field of image segmentation due to its impressive capabilities and prompt-based
interface. While SAM has already been extensively evaluated in various domains,
its adaptation to retinal OCT scans remains unexplored. To bridge this research
gap, we conduct a comprehensive evaluation of SAM and its adaptations on a
large-scale public dataset of OCTs from the RETOUCH challenge. Our evaluation
covers diverse retinal diseases, fluid compartments, and device vendors,
comparing SAM against state-of-the-art retinal fluid segmentation methods.
Through our analysis, we showcase the adapted SAM's efficacy as a powerful
segmentation model in retinal OCT scans, although it still lags behind
established methods in some circumstances. The findings highlight SAM's
adaptability and robustness, showcasing its utility as a valuable tool in
retinal OCT image analysis and paving the way for further advancements in this
domain.
Related papers
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
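The test-time idea above can be illustrated with a toy sketch. Here the "model" is reduced to fixed per-pixel logits plus a single learnable bias that is updated by gradient descent on the expert-rectified labels; the actual method adapts SAM itself, so all names, shapes, and values below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of online learning from rectified annotations: after an
# expert corrects a predicted mask, use the corrected labels to nudge the model.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def online_update(logits, rectified_labels, bias=0.0, lr=0.5, steps=20):
    """Fit a scalar bias so predictions better match the rectified mask."""
    n = len(logits)
    for _ in range(steps):
        # Gradient of mean binary cross-entropy w.r.t. the shared bias.
        grad = sum(sigmoid(z + bias) - y
                   for z, y in zip(logits, rectified_labels)) / n
        bias -= lr * grad
    return bias

# The model over-segments: the expert's rectified labels mark fewer pixels as
# foreground than the raw logits suggest, so the learned bias becomes negative.
logits = [2.0, 1.5, 0.5, -0.5, 1.0]
labels = [1, 1, 0, 0, 0]
bias = online_update(logits, labels)
```

In the real setting the same loop would run over adapter or decoder weights of SAM rather than a single scalar, accumulating corrections across test samples.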
arXiv Detail & Related papers (2024-06-03T03:16:25Z) - Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z) - SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage
Segmentation [18.867207134086193]
Intracranial hemorrhage segmentation is a crucial and challenging step in stroke diagnosis and surgical planning.
We propose a SAM-based parameter-efficient fine-tuning method, called SAMIHS, for intracranial hemorrhage segmentation.
Our experimental results on two public datasets demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2023-11-14T14:23:09Z) - Evaluation and improvement of Segment Anything Model for interactive
histopathology image segmentation [3.677055050765245]
The Segment Anything Model (SAM) is a foundational model for image segmentation.
We evaluate SAM's performance in zero-shot and fine-tuned scenarios on histopathology data.
We propose a modification of SAM's decoder to make it useful for interactive histology image segmentation.
arXiv Detail & Related papers (2023-10-16T15:17:06Z) - MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image
Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
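A back-of-the-envelope calculation shows why injecting bottleneck adapters into transformer blocks is parameter-efficient. The dimensions below are assumptions (a ViT-H-like block), not figures from the MA-SAM paper; an adapter is modeled as the usual bottleneck of a down-projection, nonlinearity, and up-projection.

```python
# Rough comparison: adapter parameters vs. a full transformer block.
# All dimensions are hypothetical and for illustration only.

def transformer_block_params(d_model, d_ff):
    """Approximate parameter count of one transformer block (weights only)."""
    attn = 4 * d_model * d_model   # Q, K, V, and output projections
    mlp = 2 * d_model * d_ff       # two MLP linear layers
    return attn + mlp

def adapter_params(d_model, d_bottleneck):
    """Bottleneck adapter: d_model -> d_bottleneck -> d_model, plus biases."""
    return 2 * d_model * d_bottleneck + d_bottleneck + d_model

d_model, d_ff, d_bottleneck = 1280, 5120, 64
block = transformer_block_params(d_model, d_ff)
adapter = adapter_params(d_model, d_bottleneck)
fraction = adapter / block  # well under 1% of the block's parameters
```

Training only these injected modules (plus, typically, norms and the decoder) keeps the pre-trained 2D backbone frozen while letting the adapters capture the third-dimensional context.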
arXiv Detail & Related papers (2023-09-16T02:41:53Z) - SAM-Med2D [34.82072231983896]
We introduce SAM-Med2D, the most comprehensive study to date on applying SAM to medical 2D images.
We first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets.
We fine-tune the encoder and decoder of the original SAM to obtain a well-performing SAM-Med2D.
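Fine-tuning only selected components amounts to freezing the rest of the model, the `requires_grad` pattern in deep-learning frameworks. The sketch below mimics that with plain dictionaries; the component names and parameter counts are illustrative assumptions, not SAM-Med2D's actual configuration.

```python
# Minimal sketch of selective fine-tuning: unfreeze the image encoder and
# mask decoder while the prompt encoder stays frozen. Counts are made up.

model = {
    "image_encoder": {"params": 632_000_000, "trainable": False},
    "prompt_encoder": {"params": 6_000_000, "trainable": False},
    "mask_decoder": {"params": 4_000_000, "trainable": False},
}

def set_trainable(model, names):
    """Unfreeze only the named components (a requires_grad=True analogue)."""
    for name, component in model.items():
        component["trainable"] = name in names

set_trainable(model, {"image_encoder", "mask_decoder"})
trainable = sum(c["params"] for c in model.values() if c["trainable"])
```

In a real framework the same selection is expressed by toggling `requires_grad` on parameter groups before building the optimizer.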
arXiv Detail & Related papers (2023-08-30T17:59:02Z) - A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering [49.732628643634975]
The Segment Anything Model (SAM), developed by Meta AI Research, offers a robust framework for image and video segmentation.
This survey provides a comprehensive exploration of the SAM family, including SAM and SAM 2, highlighting their advancements in granularity and contextual understanding.
arXiv Detail & Related papers (2023-05-12T07:21:59Z) - Medical SAM Adapter: Adapting Segment Anything Model for Medical Image
Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z) - When SAM Meets Medical Images: An Investigation of Segment Anything
Model (SAM) on Multi-phase Liver Tumor Segmentation [4.154974672747996]
The Segment Anything Model (SAM) delivers significant zero-shot image segmentation performance.
We investigate the capability of SAM for medical image analysis, especially for multi-phase liver tumor segmentation.
arXiv Detail & Related papers (2023-04-17T16:02:06Z) - SAM.MD: Zero-shot medical image segmentation capabilities of the Segment
Anything Model [1.1221592576472588]
We evaluate the zero-shot capabilities of the Segment Anything Model for medical image segmentation.
We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools.
arXiv Detail & Related papers (2023-04-10T18:20:29Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.