EviPrompt: A Training-Free Evidential Prompt Generation Method for
Segment Anything Model in Medical Images
- URL: http://arxiv.org/abs/2311.06400v1
- Date: Fri, 10 Nov 2023 21:22:22 GMT
- Title: EviPrompt: A Training-Free Evidential Prompt Generation Method for
Segment Anything Model in Medical Images
- Authors: Yinsong Xu, Jiaqi Tang, Aidong Men, Qingchao Chen
- Abstract summary: Medical image segmentation has immense clinical applicability but remains a challenge despite advancements in deep learning.
This paper introduces a novel training-free evidential prompt generation method named EviPrompt to overcome these issues.
The proposed method, built on the inherent similarities within medical images, requires only a single reference image-annotation pair.
- Score: 14.899388051854084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation has immense clinical applicability but remains a
challenge despite advancements in deep learning. The Segment Anything Model
(SAM) exhibits potential in this field, yet the requirement for expert
intervention and the domain gap between natural and medical images pose
significant obstacles. This paper introduces a novel training-free evidential
prompt generation method named EviPrompt to overcome these issues. The proposed
method, built on the inherent similarities within medical images, requires only
a single reference image-annotation pair, making it a training-free solution
that significantly reduces the need for extensive labeling and computational
resources. First, to automatically generate prompts for SAM in medical images,
we introduce an evidential method based on uncertainty estimation that requires
no interaction with clinical experts. Then, we incorporate the human prior into the
prompts, which is vital for alleviating the domain gap between natural and
medical images and enhancing the applicability and usefulness of SAM in medical
scenarios. EviPrompt represents an efficient and robust approach to medical
image segmentation, with evaluations across a broad range of tasks and
modalities confirming its efficacy.
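The abstract outlines the pipeline at a high level: dense features from a reference image-annotation pair are matched against a target image, and the matches become point prompts that drive SAM with no training. The sketch below illustrates that idea with a plain cosine-similarity matcher and top-k prompt selection; these, along with the random stand-in features, are illustrative assumptions and do not reproduce EviPrompt's evidential uncertainty estimation or its human-prior step.

```python
# Minimal sketch of training-free, similarity-based prompt generation
# (illustrative only; not the authors' implementation). Assumes dense
# per-pixel features from any frozen encoder, e.g. SAM's image encoder.
import numpy as np

def cosine_similarity_map(ref_feats, ref_mask, tgt_feats):
    """Score each target pixel by cosine similarity to the mean
    reference foreground feature.

    ref_feats: (H, W, C) dense features of the reference image
    ref_mask:  (H, W) binary annotation of the reference image
    tgt_feats: (H, W, C) dense features of the target image
    """
    fg = ref_feats[ref_mask.astype(bool)]            # (N, C) foreground features
    proto = fg.mean(axis=0)                          # class prototype
    proto = proto / (np.linalg.norm(proto) + 1e-8)
    tgt = tgt_feats / (np.linalg.norm(tgt_feats, axis=-1, keepdims=True) + 1e-8)
    return tgt @ proto                               # (H, W) similarity map

def pick_point_prompts(sim_map, n_pos=1, n_neg=1):
    """Turn the similarity map into SAM-style point prompts: the most
    similar pixels become positive clicks, the least similar negative."""
    order = np.argsort(sim_map.ravel())
    pos_idx, neg_idx = order[-n_pos:], order[:n_neg]
    h, w = sim_map.shape
    points = [(int(i % w), int(i // w))              # (x, y) coordinates
              for i in np.concatenate([pos_idx, neg_idx])]
    labels = [1] * n_pos + [0] * n_neg               # 1 = foreground, 0 = background
    return np.array(points), np.array(labels)

# Toy usage with random features standing in for a real encoder:
rng = np.random.default_rng(0)
ref_feats = rng.normal(size=(64, 64, 32))
ref_mask = np.zeros((64, 64)); ref_mask[20:40, 20:40] = 1
tgt_feats = rng.normal(size=(64, 64, 32))
points, labels = pick_point_prompts(cosine_similarity_map(ref_feats, ref_mask, tgt_feats))
# points/labels could then be passed to a SAM predictor, e.g.
# predictor.predict(point_coords=points, point_labels=labels)
```

Where this sketch naively keeps the top-scoring matches, the paper's evidential formulation instead estimates the uncertainty of candidate prompts and injects a human prior before querying SAM; the input/output contract with the SAM predictor is the same.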
Related papers
- Med-PerSAM: One-Shot Visual Prompt Tuning for Personalized Segment Anything Model in Medical Domain [30.700648813505158]
Leveraging pre-trained models with tailored prompts for in-context learning has proven highly effective in NLP tasks.
We introduce Med-PerSAM, a novel and straightforward one-shot framework designed for the medical domain.
Our model outperforms various foundational models and previous SAM-based approaches across diverse 2D medical imaging datasets.
arXiv Detail & Related papers (2024-11-25T06:16:17Z)
- Few Exemplar-Based General Medical Image Segmentation via Domain-Aware Selective Adaptation [28.186785488818135]
Medical image segmentation poses challenges due to domain gaps, data modality variations, and dependency on domain knowledge or experts.
We introduce a domain-aware selective adaptation approach to adapt the general knowledge learned from a large model trained with natural images to the corresponding medical domains/modalities.
arXiv Detail & Related papers (2024-10-11T21:00:57Z)
- MedUHIP: Towards Human-In-the-Loop Medical Segmentation [5.520419627866446]
Medical image segmentation is particularly complicated by inherent uncertainties.
We propose a novel approach that integrates an uncertainty-aware model with human-in-the-loop interaction.
Our method showcases superior segmentation capabilities, outperforming a wide range of deterministic and uncertainty-aware models.
arXiv Detail & Related papers (2024-08-03T01:06:02Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions [8.216028136706948]
The recent introduction of the Segment Anything Model (SAM) signifies a noteworthy expansion of the prompt-driven paradigm into the domain of image segmentation.
We provide a comprehensive overview of recent endeavors aimed at extending the efficacy of SAM to medical image segmentation tasks.
We explore potential avenues for future research directions in SAM's role within medical image segmentation.
arXiv Detail & Related papers (2024-01-07T14:25:42Z)
- Multi-task Paired Masking with Alignment Modeling for Medical Vision-Language Pre-training [55.56609500764344]
We propose a unified framework based on Multi-task Paired Masking with Alignment (MPMA) to integrate the cross-modal alignment task into the joint image-text reconstruction framework.
We also introduce a Memory-Augmented Cross-Modal Fusion (MA-CMF) module to fully integrate visual information to assist report reconstruction.
arXiv Detail & Related papers (2023-05-13T13:53:48Z)
- Swin Deformable Attention Hybrid U-Net for Medical Image Segmentation [3.407509559779547]
We propose to incorporate the Shifted Window (Swin) Deformable Attention into a hybrid architecture to improve segmentation performance.
Our proposed Swin Deformable Attention Hybrid UNet (SDAH-UNet) demonstrates state-of-the-art performance on both anatomical and lesion segmentation tasks.
arXiv Detail & Related papers (2023-02-28T09:54:53Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer- and robot-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)