SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage
Segmentation
- URL: http://arxiv.org/abs/2311.08190v1
- Date: Tue, 14 Nov 2023 14:23:09 GMT
- Title: SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage
Segmentation
- Authors: Yinuo Wang, Kai Chen, Weimin Yuan, Cai Meng, XiangZhi Bai
- Abstract summary: Intracranial hemorrhage segmentation is a crucial and challenging step in stroke diagnosis and surgical planning.
We propose a SAM-based parameter-efficient fine-tuning method, called SAMIHS, for intracranial hemorrhage segmentation.
Our experimental results on two public datasets demonstrate the effectiveness of our proposed method.
- Score: 18.867207134086193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segment Anything Model (SAM), a vision foundation model trained on
large-scale annotations, has recently attracted increasing attention in
medical image segmentation. Despite the impressive capabilities of SAM on
natural scenes, it struggles with performance decline when confronted with
medical images, especially those involving blurry boundaries and highly
irregular regions of low contrast. In this paper, a SAM-based
parameter-efficient fine-tuning method, called SAMIHS, is proposed for
intracranial hemorrhage segmentation, which is a crucial and challenging step
in stroke diagnosis and surgical planning. Distinguished from previous SAM and
SAM-based methods, SAMIHS incorporates parameter-refactoring adapters into
SAM's image encoder and considers the efficient and flexible utilization of
adapters' parameters. Additionally, we employ a combo loss that combines binary
cross-entropy loss and boundary-sensitive loss to enhance SAMIHS's ability to
recognize the boundary regions. Our experimental results on two public datasets
demonstrate the effectiveness of our proposed method. Code is available at
https://github.com/mileswyn/SAMIHS .
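The combo loss described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's exact formulation: the boundary-sensitive term is approximated here by re-weighting cross-entropy on pixels adjacent to the mask boundary, and the `alpha` mixing weight and 4-neighbourhood boundary detector are hypothetical choices.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over all pixels."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def boundary_weight_map(target):
    """Mark pixels whose 4-neighbourhood crosses the mask boundary."""
    pad = np.pad(target, 1, mode="edge")
    neighbours = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                           pad[1:-1, :-2], pad[1:-1, 2:]])
    return (neighbours != target).any(axis=0).astype(float)

def combo_loss(pred, target, alpha=0.5, eps=1e-7):
    """BCE plus a boundary-weighted cross-entropy term (illustrative only)."""
    bce = binary_cross_entropy(pred, target, eps)
    p = np.clip(pred, eps, 1 - eps)
    per_pixel = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    w = boundary_weight_map(target)
    boundary = float((w * per_pixel).sum() / max(w.sum(), 1.0))
    return alpha * bce + (1 - alpha) * boundary
```

A prediction that misses the boundary region is penalized twice (once in the global BCE term and again in the boundary term), which is the intuition behind making the loss boundary-sensitive.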
Related papers
- MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans.
Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss.
We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
arXiv Detail & Related papers (2024-09-28T23:10:37Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
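The test-time online learning loop sketched in this summary can be illustrated with a toy example. Everything below is a hypothetical stand-in: the real method updates components of SAM from expert-rectified masks, whereas this sketch trains a small linear per-pixel head with perceptron-style online updates from streamed corrections.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear refinement head over 4 per-pixel features; the
# backbone that produced the features is assumed frozen.
w = np.zeros(4)
lr = 0.1

def predict(feats):
    """Binary decision from the current online weights."""
    return (feats @ w > 0).astype(float)

for _ in range(1000):                                  # stream of test samples
    feats = rng.normal(size=4)
    label = 1.0 if feats[0] + feats[1] > 0 else 0.0    # expert's rectified label
    pred = 1.0 if feats @ w > 0 else 0.0
    w += lr * (label - pred) * feats                   # online update on mistakes
```

The point of the sketch is only the loop shape: predict, receive a rectified annotation, update immediately, and move to the next sample, so segmentation quality improves during deployment rather than after retraining.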
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- Uncertainty-Aware Adapter: Adapting Segment Anything Model (SAM) for Ambiguous Medical Image Segmentation [20.557472889654758]
The Segment Anything Model (SAM) gained significant success in natural image segmentation.
Unlike natural images, many tissues and lesions in medical images have blurry boundaries and may be ambiguous.
We propose a novel module called the Uncertainty-aware Adapter, which efficiently fine-tunes SAM for uncertainty-aware medical image segmentation.
arXiv Detail & Related papers (2024-03-16T14:11:54Z)
- ProMISe: Promptable Medical Image Segmentation using SAM [11.710367186709432]
We propose an Auto-Prompting Module (APM) which provides SAM-based foundation model with Euclidean adaptive prompts in the target domain.
We also propose a novel non-invasive method called Incremental Pattern Shifting (IPS) to adapt SAM to specific medical domains.
By coupling these two methods, we propose ProMISe, an end-to-end non-fine-tuned framework for promptable medical image segmentation.
arXiv Detail & Related papers (2024-03-07T02:48:42Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- I-MedSAM: Implicit Medical Image Segmentation with Segment Anything [24.04558900909617]
We propose I-MedSAM, which leverages the benefits of both continuous representations and SAM to obtain better cross-domain ability and accurate boundary delineation.
Our proposed method with only 1.6M trainable parameters outperforms existing methods including discrete and implicit methods.
arXiv Detail & Related papers (2023-11-28T00:43:52Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method roots in the parameter-efficient fine-tuning strategy to update only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
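Parameter-efficient adapters of the kind injected here are typically small bottleneck modules added inside frozen transformer blocks. A minimal NumPy sketch, assuming a standard down-project/ReLU/up-project design with a residual connection; the actual MA-SAM 3D adapter differs (it operates along the third dimension), and the dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class BottleneckAdapter:
    """Down-project, nonlinearity, up-project, residual add.
    Only these two small matrices would be trained; the backbone stays frozen."""
    def __init__(self, dim, bottleneck):
        self.down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.up = np.zeros((bottleneck, dim))  # zero init: adapter starts as identity

    def __call__(self, x):
        h = np.maximum(x @ self.down, 0.0)     # ReLU in the bottleneck
        return x + h @ self.up                 # residual connection

tokens = rng.normal(size=(16, 768))            # e.g. ViT patch tokens
adapter = BottleneckAdapter(768, 64)
out = adapter(tokens)
```

Because the up-projection starts at zero, the adapted network initially reproduces the pre-trained behaviour exactly, and the two small matrices hold far fewer parameters than a full attention or MLP layer, which is what makes the fine-tuning parameter-efficient.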
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- AMLP: Adaptive Masking Lesion Patches for Self-supervised Medical Image Segmentation [67.97926983664676]
Self-supervised masked image modeling has shown promising results on natural images.
However, directly applying such methods to medical images remains challenging.
We propose a novel self-supervised medical image segmentation framework, Adaptive Masking Lesion Patches (AMLP)
arXiv Detail & Related papers (2023-09-08T13:18:10Z)
- SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT [3.2495192768429924]
The Segment Anything Model (SAM) has gained significant attention in the field of image segmentation.
We conduct a comprehensive evaluation of SAM and its adaptations on a large-scale public dataset of OCTs from RETOUCH challenge.
We showcase adapted SAM's efficacy as a powerful segmentation model in retinal OCT scans, although still lagging behind established methods in some circumstances.
arXiv Detail & Related papers (2023-08-18T06:26:22Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieve similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.