MedSeg-R: Reasoning Segmentation in Medical Images with Multimodal Large Language Models
- URL: http://arxiv.org/abs/2506.10465v1
- Date: Thu, 12 Jun 2025 08:13:38 GMT
- Title: MedSeg-R: Reasoning Segmentation in Medical Images with Multimodal Large Language Models
- Authors: Yu Huang, Zelin Peng, Yichen Zhao, Piao Yang, Xiaokang Yang, Wei Shen
- Abstract summary: We introduce medical image reasoning segmentation, a novel task that aims to generate segmentation masks based on complex and implicit medical instructions. To address this, we propose MedSeg-R, an end-to-end framework that leverages the reasoning abilities of MLLMs to interpret clinical questions. It is built on two core components: 1) a global context understanding module that interprets images and comprehends complex medical instructions to generate multi-modal intermediate tokens, and 2) a pixel-level grounding module that decodes these tokens to produce precise segmentation masks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Medical image segmentation is crucial for clinical diagnosis, yet existing models are limited by their reliance on explicit human instructions and lack the active reasoning capabilities to understand complex clinical questions. While recent advancements in multimodal large language models (MLLMs) have improved medical question-answering (QA) tasks, most methods struggle to generate precise segmentation masks, limiting their application in automatic medical diagnosis. In this paper, we introduce medical image reasoning segmentation, a novel task that aims to generate segmentation masks based on complex and implicit medical instructions. To address this, we propose MedSeg-R, an end-to-end framework that leverages the reasoning abilities of MLLMs to interpret clinical questions while also producing corresponding precise segmentation masks for medical images. It is built on two core components: 1) a global context understanding module that interprets images and comprehends complex medical instructions to generate multi-modal intermediate tokens, and 2) a pixel-level grounding module that decodes these tokens to produce precise segmentation masks and textual responses. Furthermore, we introduce MedSeg-QA, a large-scale dataset tailored for the medical image reasoning segmentation task. It includes over 10,000 image-mask pairs and multi-turn conversations, automatically annotated using large language models and refined through physician reviews. Experiments show MedSeg-R's superior performance across several benchmarks, achieving high segmentation accuracy and enabling interpretable textual analysis of medical images.
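The two-component design described in the abstract maps naturally onto a token-decoding architecture, in the spirit of token-based reasoning-segmentation models: an MLLM fuses image and instruction into intermediate tokens, and a grounding head decodes those tokens against pixel features into a mask. Below is a minimal, hypothetical PyTorch sketch of that flow; the module names, toy encoders, dimensions, and single-mask-query convention are all illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GlobalContextModule(nn.Module):
    """Stand-in for the MLLM-based global context understanding module:
    fuses image patches and instruction tokens into a small set of
    multi-modal intermediate tokens."""

    def __init__(self, dim: int = 256, n_tokens: int = 8, vocab: int = 32000):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # toy patch embedding
        self.text_encoder = nn.Embedding(vocab, dim)                       # toy instruction embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fuser = nn.TransformerEncoder(layer, num_layers=2)
        self.intermediate_queries = nn.Parameter(torch.randn(n_tokens, dim))

    def forward(self, image, instruction_ids):
        b = image.size(0)
        img_tok = self.image_encoder(image).flatten(2).transpose(1, 2)  # (B, P, D)
        txt_tok = self.text_encoder(instruction_ids)                    # (B, T, D)
        queries = self.intermediate_queries.expand(b, -1, -1)           # (B, N, D)
        fused = self.fuser(torch.cat([queries, img_tok, txt_tok], dim=1))
        return fused[:, : queries.size(1)]  # the multi-modal intermediate tokens


class PixelGroundingModule(nn.Module):
    """Decodes intermediate tokens against pixel features into mask logits."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.pixel_proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.token_proj = nn.Linear(dim, dim)

    def forward(self, tokens, pixel_feats):
        seg_tok = self.token_proj(tokens[:, 0])  # assume token 0 acts as the mask query
        feats = self.pixel_proj(pixel_feats)     # (B, D, h, w)
        return torch.einsum("bd,bdhw->bhw", seg_tok, feats)  # coarse mask logits


class MedSegRSketch(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.context = GlobalContextModule(dim)
        self.grounding = PixelGroundingModule(dim)

    def forward(self, image, instruction_ids):
        tokens = self.context(image, instruction_ids)
        pixel_feats = self.context.image_encoder(image)  # reuse patch features for grounding
        return self.grounding(tokens, pixel_feats)


# Toy usage: one 224x224 image and a 12-token "instruction".
model = MedSegRSketch()
logits = model(torch.randn(1, 3, 224, 224), torch.randint(0, 32000, (1, 12)))
print(logits.shape)  # torch.Size([1, 14, 14]); upsample to full resolution downstream
```

In a real system the toy encoders would be a pretrained vision backbone and the MLLM itself, and the textual response would be generated alongside the mask; this sketch only illustrates how intermediate tokens can bridge instruction understanding and pixel-level grounding.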
Related papers
- Multimodal Causal-Driven Representation Learning for Generalizable Medical Image Segmentation [56.52520416420957]
We propose Multimodal Causal-Driven Representation Learning (MCDRL) to tackle domain generalization in medical image segmentation. MCDRL consistently outperforms competing methods, yielding superior segmentation accuracy and exhibiting robust generalizability.
arXiv Detail & Related papers (2025-08-07T03:41:41Z)
- CRISP-SAM2: SAM2 with Cross-Modal Interaction and Semantic Prompting for Multi-Organ Segmentation [32.48945636401865]
We introduce a novel model named CRISP-SAM2 with CRoss-modal Interaction and Semantic Prompting based on SAM2. Our method begins by converting visual and textual inputs into cross-modal contextualized semantics. This model represents a promising approach to multi-organ medical segmentation guided by textual descriptions of organs.
arXiv Detail & Related papers (2025-06-29T07:05:27Z)
- PRS-Med: Position Reasoning Segmentation with Vision-Language Model in Medical Imaging [6.411386758550256]
PRS-Med is a framework that integrates vision-language models with segmentation capabilities to generate both accurate segmentation masks and corresponding spatial reasoning outputs. The MMRS dataset provides diverse, spatially grounded question-answer pairs to address the lack of position-reasoning data in medical imaging.
arXiv Detail & Related papers (2025-05-17T06:42:28Z)
- MediSee: Reasoning-based Pixel-level Perception in Medical Images [6.405810587061276]
We introduce a novel medical vision task: Medical Reasoning Segmentation and Detection (MedSD). MedSD aims to comprehend implicit queries about medical images and generate the corresponding segmentation mask and bounding box for the target object. We propose MediSee, an effective baseline model designed for medical reasoning segmentation and detection.
arXiv Detail & Related papers (2025-04-15T09:28:53Z)
- Organ-aware Multi-scale Medical Image Segmentation Using Text Prompt Engineering [17.273290949721975]
Existing medical image segmentation methods rely on uni-modal visual inputs, such as images or videos, requiring labor-intensive manual annotations. Medical imaging techniques capture multiple intertwined organs within a single scan, further complicating segmentation accuracy. To address these challenges, MedSAM was developed to enhance segmentation accuracy by integrating image features with user-provided prompts.
arXiv Detail & Related papers (2025-03-18T01:35:34Z)
- Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine [9.881981672848598]
We introduce a novel end-to-end multimodal large language model for the biomedical domain, named MedPLIB. It supports visual question answering (VQA), arbitrary pixel-level prompts (points, bounding boxes, and free-form shapes), and pixel-level grounding. Results indicate that MedPLIB has achieved state-of-the-art outcomes across multiple medical visual language tasks.
arXiv Detail & Related papers (2024-12-12T13:41:35Z)
- MRGen: Segmentation Data Engine For Underrepresented MRI Modalities [59.61465292965639]
Training medical image segmentation models for rare yet clinically significant imaging modalities is challenging due to the scarcity of annotated data. This paper investigates leveraging generative models to synthesize training data for segmentation models targeting underrepresented modalities.
arXiv Detail & Related papers (2024-12-04T16:34:22Z)
- A Survey of Medical Vision-and-Language Applications and Their Techniques [48.268198631277315]
Medical vision-and-language models (MVLMs) have attracted substantial interest due to their capability to offer a natural language interface for interpreting complex medical data.
Here, we provide a comprehensive overview of MVLMs and the various medical tasks to which they have been applied.
We also examine the datasets used for these tasks and compare the performance of different models based on standardized evaluation metrics.
arXiv Detail & Related papers (2024-11-19T03:27:05Z)
- LIMIS: Towards Language-based Interactive Medical Image Segmentation [58.553786162527686]
LIMIS is the first purely language-based interactive medical image segmentation model.
We adapt Grounded SAM to the medical domain and design a language-based model interaction strategy.
We evaluate LIMIS on three publicly available medical datasets in terms of performance and usability.
arXiv Detail & Related papers (2024-10-22T12:13:47Z)
- Language-guided Scale-aware MedSegmentor for Lesion Segmentation in Medical Imaging [7.912408164613206]
In clinical practice, segmenting specific lesions can significantly enhance diagnostic accuracy and treatment efficiency. We propose a novel model, Language-guided Scale-aware MedSegmentor (LSMS), which segments target lesions in medical images based on given textual expressions. Our LSMS consistently achieves superior performance with significantly lower computational cost.
arXiv Detail & Related papers (2024-08-30T15:22:13Z)
- XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models [72.8965643836841]
We introduce XrayGPT, a novel conversational medical vision-language model. It can analyze and answer open-ended questions about chest radiographs. We generate 217k interactive and high-quality summaries from free-text radiology reports.
arXiv Detail & Related papers (2023-06-13T17:59:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.