Text-Promptable Propagation for Referring Medical Image Sequence   Segmentation
        - URL: http://arxiv.org/abs/2502.11093v2
 - Date: Sat, 12 Apr 2025 15:10:07 GMT
 - Title: Text-Promptable Propagation for Referring Medical Image Sequence   Segmentation
 - Authors: Runtian Yuan, Mohan Chen, Jilan Xu, Ling Zhou, Qingqiu Li, Yuejie Zhang, Rui Feng, Tao Zhang, Shang Gao
 - Abstract summary: Ref-MISS aims to segment anatomical structures in medical image sequences based on natural language descriptions. Existing 2D and 3D segmentation models struggle to explicitly track objects of interest across medical image sequences. We propose Text-Promptable Propagation (TPP), a model designed for referring medical image sequence segmentation.
 - Score: 20.724643106195852
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Referring Medical Image Sequence Segmentation (Ref-MISS) is a novel and challenging task that aims to segment anatomical structures in medical image sequences (e.g., endoscopy, ultrasound, CT, and MRI) based on natural language descriptions. This task holds significant clinical potential and offers a user-friendly advancement in medical imaging interpretation. Existing 2D and 3D segmentation models struggle to explicitly track objects of interest across medical image sequences, and lack support for interactive, text-driven guidance. To address these limitations, we propose Text-Promptable Propagation (TPP), a model designed for referring medical image sequence segmentation. TPP captures the intrinsic relationships among sequential images along with their associated textual descriptions. Specifically, it enables the recognition of referred objects through cross-modal referring interaction, and maintains continuous tracking across the sequence via Transformer-based triple propagation, using text embeddings as queries. To support this task, we curate a large-scale benchmark, Ref-MISS-Bench, which covers 4 imaging modalities and 20 different organs and lesions. Experimental results on this benchmark demonstrate that TPP consistently outperforms state-of-the-art methods in both medical segmentation and referring video object segmentation. 
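To make the propagation idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation): text embeddings act as queries that attend to each frame's visual features, and the updated queries are carried forward to the next frame so the referred structure stays tracked across the sequence. All module names, dimensions, and the dot-product mask head are illustrative assumptions.

```python
# Minimal sketch of text-embedding-as-query propagation across an image sequence.
# Illustrative only; module names, sizes, and the mask head are assumptions.
import torch
import torch.nn as nn


class TextQueryPropagation(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mask_head = nn.Linear(dim, dim)  # projects queries for dot-product mask prediction

    def forward(self, text_queries, frame_feats):
        """
        text_queries: (B, Q, C)   query embeddings derived from the referring text
        frame_feats:  (B, T, HW, C) per-frame visual tokens of the image sequence
        Returns per-frame mask logits of shape (B, T, Q, HW).
        """
        masks = []
        queries = text_queries
        for t in range(frame_feats.shape[1]):
            feats = frame_feats[:, t]  # (B, HW, C)
            # cross-modal referring interaction: text queries attend to this frame's features
            attn_out, _ = self.cross_attn(queries, feats, feats)
            queries = self.norm1(queries + attn_out)
            queries = self.norm2(queries + self.ffn(queries))
            # predict a mask for this frame from the updated queries
            masks.append(torch.einsum("bqc,bnc->bqn", self.mask_head(queries), feats))
            # the updated queries are propagated to the next frame on the following iteration
        return torch.stack(masks, dim=1)


# Toy usage: 2 sequences, 4 frames, a 16x16 feature map (256 tokens), 1 text query.
model = TextQueryPropagation(dim=256)
out = model(torch.randn(2, 1, 256), torch.randn(2, 4, 256, 256))
print(out.shape)  # torch.Size([2, 4, 1, 256])
```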
 
       
      
        Related papers
        - Text-driven Multiplanar Visual Interaction for Semi-supervised Medical   Image Segmentation [48.76848912120607]
Semi-supervised medical image segmentation is a crucial technique for alleviating the high cost of data annotation. We propose a novel text-driven multiplanar visual interaction framework for semi-supervised medical image segmentation (termed Text-SemiSeg). Our framework consists of three main modules: Text-enhanced Multiplanar Representation (TMR), Category-aware Semantic Alignment (CSA), and Dynamic Cognitive Augmentation (DCA).
arXiv  Detail & Related papers  (2025-07-16T16:29:30Z) - CRISP-SAM2: SAM2 with Cross-Modal Interaction and Semantic Prompting for   Multi-Organ Segmentation [32.48945636401865]
We introduce a novel model named CRISP-SAM2 with CRoss-modal Interaction and Semantic Prompting based on SAM2. This model represents a promising approach to multi-organ medical segmentation guided by textual descriptions of organs. Our method begins by converting visual and textual inputs into cross-modal contextualized semantics.
arXiv  Detail & Related papers  (2025-06-29T07:05:27Z) - Multimodal Medical Image Binding via Shared Text Embeddings [15.873810726442603]
Multimodal Medical Image Binding with Text (M³Bind) is a novel pre-training framework that enables seamless alignment of medical imaging modalities. M³Bind first fine-tunes CLIP-like image-text models to align their modality-specific text embedding space. We show that M³Bind achieves state-of-the-art performance in zero-shot, few-shot classification and cross-modal retrieval tasks.
arXiv  Detail & Related papers  (2025-06-22T15:39:25Z) - MedSeg-R: Reasoning Segmentation in Medical Images with Multimodal Large   Language Models [48.24824129683951]
We introduce medical image reasoning segmentation, a novel task that aims to generate segmentation masks based on complex and implicit medical instructions. To address this, we propose MedSeg-R, an end-to-end framework that leverages the reasoning abilities of MLLMs to interpret clinical questions. It is built on two core components: 1) a global context understanding module that interprets images and comprehends complex medical instructions to generate multi-modal intermediate tokens, and 2) a pixel-level grounding module that decodes these tokens to produce precise segmentation masks.
arXiv  Detail & Related papers  (2025-06-12T08:13:38Z) - STPNet: Scale-aware Text Prompt Network for Medical Image Segmentation [8.812162673772459]
We propose STPNet, a Scale-aware Text Prompt Network that leverages vision-language modeling to enhance medical image segmentation.
Our approach utilizes multi-scale textual descriptions to guide lesion localization and employs retrieval-segmentation joint learning.
We evaluate our vision-language approach on three datasets: COVID-Xray, COVID-CT, and Kvasir-SEG.
arXiv  Detail & Related papers  (2025-04-02T10:01:42Z) - Organ-aware Multi-scale Medical Image Segmentation Using Text Prompt   Engineering [17.273290949721975]
Existing medical image segmentation methods rely on uni-modal visual inputs, such as images or videos, requiring labor-intensive manual annotations.
Medical imaging techniques capture multiple intertwined organs within a single scan, further complicating segmentation accuracy.
To address these challenges, MedSAM was developed to enhance segmentation accuracy by integrating image features with user-provided prompts.
arXiv  Detail & Related papers  (2025-03-18T01:35:34Z) - Language-guided Medical Image Segmentation with Target-informed   Multi-level Contrastive Alignments [7.9714765680840625]
We propose a language-guided segmentation network with Target-informed Multi-level Contrastive Alignments (TMCA).
TMCA enables target-informed cross-modality alignments and fine-grained text guidance to bridge the pattern gaps in language-guided segmentation.
arXiv  Detail & Related papers  (2024-12-18T06:19:03Z) - Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach sequences various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv  Detail & Related papers  (2024-09-13T10:19:10Z) - SimTxtSeg: Weakly-Supervised Medical Image Segmentation with Simple Text   Cues [11.856041847833666]
We present a novel framework, SimTxtSeg, that leverages simple text cues to generate high-quality pseudo-labels.
We evaluate our framework on two medical image segmentation tasks: colonic polyp segmentation and MRI brain tumor segmentation.
arXiv  Detail & Related papers  (2024-06-27T17:46:13Z) - CAT: Coordinating Anatomical-Textual Prompts for Multi-Organ and Tumor   Segmentation [11.087654014615955]
We introduce CAT, an innovative model that Coordinates Anatomical prompts derived from 3D cropped images with Textual prompts enriched by medical domain knowledge.
Trained on a consortium of 10 public CT datasets, CAT demonstrates superior performance in multiple segmentation tasks.
This approach confirms that coordinating multimodal prompts is a promising avenue for addressing complex scenarios in the medical domain.
arXiv  Detail & Related papers  (2024-06-11T09:22:39Z) - Unlocking the Power of Spatial and Temporal Information in Medical   Multimodal Pre-training [99.2891802841936]
We introduce the Med-ST framework for fine-grained spatial and temporal modeling.
For spatial modeling, Med-ST employs the Mixture of View Expert (MoVE) architecture to integrate different visual features from both frontal and lateral views.
For temporal modeling, we propose a novel cross-modal bidirectional cycle consistency objective by forward mapping classification (FMC) and reverse mapping regression (RMR).
arXiv  Detail & Related papers  (2024-05-30T03:15:09Z) - CT-GLIP: 3D Grounded Language-Image Pretraining with CT Scans and   Radiology Reports for Full-Body Scenarios [53.94122089629544]
We introduce CT-GLIP (Grounded Language-Image Pretraining with CT scans), a novel method that constructs organ-level image-text pairs to enhance multimodal contrastive learning.
Our method, trained on a multimodal CT dataset comprising 44,011 organ-level vision-text pairs from 17,702 patients across 104 organs, demonstrates that it can identify organs and abnormalities in a zero-shot manner using natural language.
arXiv  Detail & Related papers  (2024-04-23T17:59:01Z) - QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation   Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv  Detail & Related papers  (2024-03-19T17:57:24Z) - Eye-gaze Guided Multi-modal Alignment for Medical Representation   Learning [65.54680361074882]
Eye-gaze Guided Multi-modal Alignment (EGMA) framework harnesses eye-gaze data for better alignment of medical visual and textual features.
We conduct downstream tasks of image classification and image-text retrieval on four medical datasets.
arXiv  Detail & Related papers  (2024-03-19T03:59:14Z) - Unified Medical Image Pre-training in Language-Guided Common Semantic   Space [39.61770813855078]
We propose an Unified Medical Image Pre-training framework, namely UniMedI.
UniMedI uses diagnostic reports as common semantic space to create unified representations for diverse modalities of medical images.
We evaluate its performance on both 2D and 3D images across 10 different datasets.
arXiv  Detail & Related papers  (2023-11-24T22:01:12Z) - Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv  Detail & Related papers  (2023-01-11T16:35:33Z) - Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective in four Transformer-based architectures: Contextual Document Vectors, Bi-, Poly- and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
arXiv  Detail & Related papers  (2021-08-02T10:42:52Z) - Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv  Detail & Related papers  (2020-03-23T14:35:08Z) 
        This list is automatically generated from the titles and abstracts of the papers on this site.
       
     