Medal S: Spatio-Textual Prompt Model for Medical Segmentation
- URL: http://arxiv.org/abs/2511.13001v1
- Date: Mon, 17 Nov 2025 05:44:19 GMT
- Title: Medal S: Spatio-Textual Prompt Model for Medical Segmentation
- Authors: Pengcheng Shi, Jiawei Chen, Jiaqi Liu, Xinglin Zhang, Tao Chen, Lei Li
- Abstract summary: Medal S supports native-resolution spatial and textual prompts within an end-to-end trainable framework. It efficiently processes multiple native-resolution masks in parallel, enhancing multi-class segmentation performance. A lightweight 3D convolutional module enables precise voxel-space refinement guided by both prompt types.
- Score: 19.872612663709656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Medal S, a medical segmentation foundation model that supports native-resolution spatial and textual prompts within an end-to-end trainable framework. Unlike text-only methods lacking spatial awareness, Medal S achieves channel-wise alignment between volumetric prompts and text embeddings, mitigating inaccuracies from resolution mismatches. By preserving full 3D context, it efficiently processes multiple native-resolution masks in parallel, enhancing multi-class segmentation performance. A lightweight 3D convolutional module enables precise voxel-space refinement guided by both prompt types, supporting up to 243 classes across CT, MRI, PET, ultrasound, and microscopy modalities in the BiomedSegFM dataset. Medal S offers two prompting modes: a text-only mode, where model predictions serve as spatial prompts for self-refinement without human input, and a hybrid mode, incorporating manual annotations for enhanced flexibility. For 24-class segmentation, parallel spatial prompting reduces inference time by more than 90% compared to sequential prompting. We propose dynamic resampling to address target-patch ratio imbalance, extending SAT and nnU-Net for data augmentation. Furthermore, we develop optimized text preprocessing, a two-stage inference strategy, and post-processing techniques to improve memory efficiency, precision, and inference speed. On the five-modality average on the validation set, Medal S outperforms SAT with a DSC of 75.44 (vs. 69.83), NSD of 77.34 (vs. 71.06), F1 of 38.24 (vs. 24.88), and DSC TP of 65.46 (vs. 46.97). Medal S achieves excellent performance by harmonizing spatial precision with semantic textual guidance, demonstrating superior efficiency and accuracy in multi-class medical segmentation tasks compared to sequential prompt-based approaches. Medal S will be publicly available at https://github.com/yinghemedical/Medal-S.
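As a rough illustration of the abstract's central mechanism, the sketch below pairs C native-resolution prompt masks with C text embeddings channel by channel and refines them with a lightweight 3D convolutional module. The module names, shapes, and FiLM-style conditioning are assumptions, not the published architecture; see the repository above for the real model.

```python
import torch
import torch.nn as nn

class DualPromptRefiner(nn.Module):
    """Fuses C per-class spatial prompt masks with C text embeddings, then
    refines them in voxel space with a lightweight 3D convolutional head."""
    def __init__(self, num_classes: int, text_dim: int = 512, hidden: int = 16):
        super().__init__()
        self.to_film = nn.Linear(text_dim, 2)  # per-class scale and shift
        self.refine = nn.Sequential(           # lightweight 3D conv module
            nn.Conv3d(num_classes, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, num_classes, 3, padding=1),
        )

    def forward(self, masks: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # masks: (B, C, D, H, W) native-resolution per-class prompts (prior
        # predictions in text-only mode, manual annotations in hybrid mode).
        # text_emb: (C, text_dim), one embedding per class name.
        scale, shift = self.to_film(text_emb).unbind(-1)  # each (C,)
        cond = masks * scale.view(1, -1, 1, 1, 1) + shift.view(1, -1, 1, 1, 1)
        return masks + self.refine(cond)  # residual voxel-space refinement
```

Because all C channels go through one forward pass, this also illustrates the parallel (rather than sequential) spatial prompting that the abstract credits with the >90% inference-time reduction.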
Related papers
- StruMamba3D: Exploring Structural Mamba for Self-supervised Point Cloud Representation Learning [31.585380521480868]
We propose StruMamba3D, a novel paradigm for self-supervised point cloud representation learning. We design spatial states and use them as proxies to preserve spatial dependencies among points. Our method attains SOTA accuracy of 95.1% on ModelNet40 and 92.75% on the most challenging split of ScanObjectNN without a voting strategy.
arXiv Detail & Related papers (2025-06-26T17:58:05Z)
- BiPrompt-SAM: Enhancing Image Segmentation via Explicit Selection between Point and Text Prompts [2.2261951153501274]
BiPrompt-SAM is a novel dual-modal prompt segmentation framework. It fuses spatial precision and semantic context without complex model modifications. It achieves strong zero-shot performance on the Endovis17 medical dataset.
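The "explicit selection" idea could be as simple as the hedged sketch below: score the mask produced from each prompt type and keep the better one. Both predictors and the confidence-based rule are hypothetical stand-ins, not the paper's actual design.

```python
def select_mask(point_result, text_result):
    """Explicit selection between two single-prompt predictions, each a
    (mask, confidence) pair, e.g. from point- and text-driven branches."""
    (mask_p, score_p), (mask_t, score_t) = point_result, text_result
    return mask_p if score_p >= score_t else mask_t
```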
arXiv Detail & Related papers (2025-03-25T15:38:55Z)
- Coarse Correspondences Boost Spatial-Temporal Reasoning in Multimodal Language Model [51.83436609094658]
We introduce Coarse Correspondences, a simple lightweight method that enhances MLLMs' spatial-temporal reasoning with 2D images as input.
Our method uses a lightweight tracking model to identify primary object correspondences between frames in a video or across different image viewpoints.
We demonstrate that this simple, training-free approach consistently brings substantial gains to GPT4-V/O across four benchmarks.
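A minimal sketch of the selection step, assuming a tracker has already assigned per-frame instance IDs; keeping the most persistent tracks as the "primary objects" to mark is one reading, and the paper's actual criterion may differ.

```python
from collections import Counter

def primary_tracks(frame_ids: list[list[int]], k: int = 4) -> list[int]:
    """frame_ids[i] lists the track IDs visible in frame i; keep the k IDs that
    persist across the most frames, to be overlaid as visual marks."""
    counts = Counter(tid for ids in frame_ids for tid in set(ids))
    return [tid for tid, _ in counts.most_common(k)]

# e.g. primary_tracks([[1, 2, 3], [1, 2], [2, 4]], k=2) -> [2, 1]
```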
arXiv Detail & Related papers (2024-08-01T17:57:12Z)
- AToM: Amortized Text-to-Mesh using 2D Diffusion [107.02696990299032]
Amortized Text-to-Mesh (AToM) is a feed-forward framework optimized across multiple text prompts simultaneously.
AToM directly generates high-quality textured meshes in under 1 second, with around a 10-fold reduction in training cost.
AToM significantly outperforms state-of-the-art amortized approaches with over 4 times higher accuracy.
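The amortization idea, sketched under heavy assumptions (the mesh generator, text encoder, and diffusion-guided loss below are hypothetical stand-ins): one network is optimized across many prompts at once, so a new prompt later costs a single forward pass rather than a fresh per-prompt optimization.

```python
import torch

def train_amortized(model, text_encoder, prompts, guidance_loss, optimizer, steps=1000):
    """Optimize one shared text-to-mesh network across many prompts."""
    for _ in range(steps):
        idx = torch.randint(len(prompts), (4,)).tolist()  # sample a prompt batch
        emb = text_encoder([prompts[i] for i in idx])     # (4, dim) embeddings
        meshes = model(emb)                               # feed-forward generation
        loss = guidance_loss(meshes, emb)                 # 2D-diffusion guidance
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```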
arXiv Detail & Related papers (2024-02-01T18:59:56Z)
- Large-Vocabulary Segmentation for Medical Images with Text Prompts [68.9193694019039]
This paper aims to build a model that can Segment Anything in 3D medical images, driven by medical terminologies as Text prompts, termed SAT. We construct the first multimodal knowledge tree on human anatomy, including 6502 anatomical terminologies. We build the largest and most comprehensive segmentation dataset for training, collecting over 22K 3D scans from 72 datasets.
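In the SAT style, a class is selected by its terminology embedding rather than by a fixed output channel; the hedged sketch below shows one common way to realize that (a dot product between a text embedding and voxel features), with both encoders assumed.

```python
import torch

def text_prompted_mask(voxel_feat: torch.Tensor, term_emb: torch.Tensor) -> torch.Tensor:
    """voxel_feat: (B, C, D, H, W) visual features; term_emb: (C,) embedding
    of one anatomical terminology, e.g. 'left kidney'."""
    logits = torch.einsum("bcdhw,c->bdhw", voxel_feat, term_emb)
    return torch.sigmoid(logits)  # per-voxel probability for the queried class
```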
arXiv Detail & Related papers (2023-12-28T18:16:00Z)
- Towards Robust Real-Time Scene Text Detection: From Semantic to Instance Representation Learning [19.856492291263102]
We propose a representation learning approach for real-time scene text detection.
For semantic representation learning, we propose global-dense semantic contrast (GDSC) and top-down modeling (TDM).
With the proposed GDSC and TDM, the encoder network learns stronger representation without introducing any parameters and computations during inference.
The proposed method achieves 87.2% F-measure with 48.2 FPS on Total-Text and 89.6% F-measure with 36.9 FPS on MSRA-TD500.
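A hedged guess at what a global-dense semantic contrast can look like: each dense pixel feature is pulled toward the global vector of its own region (text or background) and pushed away from the other. This generic formulation is an assumption, not necessarily GDSC's exact loss; both regions are assumed non-empty.

```python
import torch
import torch.nn.functional as F

def global_dense_contrast(feat: torch.Tensor, text_mask: torch.Tensor, tau: float = 0.1):
    """feat: (C, H, W) dense features; text_mask: (H, W) binary text region."""
    f = F.normalize(feat.flatten(1), dim=0)            # (C, HW) unit pixel features
    m = text_mask.flatten().bool()
    g_text = F.normalize(f[:, m].mean(dim=1), dim=0)   # global text prototype
    g_bg = F.normalize(f[:, ~m].mean(dim=1), dim=0)    # global background prototype
    sim = torch.stack([f.T @ g_text, f.T @ g_bg], dim=1) / tau  # (HW, 2) logits
    return F.cross_entropy(sim, (~m).long())           # class 0 = text, 1 = background
```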
arXiv Detail & Related papers (2023-08-14T15:14:37Z)
- DWRSeg: Rethinking Efficient Acquisition of Multi-scale Contextual Information for Real-time Semantic Segmentation [10.379708894083217]
We propose a highly efficient multi-scale feature extraction method, which decomposes the original single-step method into two steps: Region Residualization and Semantic Residualization.
We achieve an mIoU of 72.7% on the Cityscapes test set at a speed of 319.5 FPS on one NVIDIA GeForce GTX 1080 Ti card, exceeding the latest methods by 69.5 FPS in speed and 0.8% in mIoU.
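One reading of the two-step decomposition, as a hedged sketch: a plain 3x3 convolution first forms compact regional features, then parallel depthwise dilated convolutions gather multi-scale context over those regions. Channel widths and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class TwoStepMultiScale(nn.Module):
    def __init__(self, ch: int, rates=(1, 3, 5)):
        super().__init__()
        self.region = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                    nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.semantic = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r, groups=ch) for r in rates)
        self.merge = nn.Conv2d(ch * len(rates), ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.region(x)                                     # step 1: regions
        s = torch.cat([conv(r) for conv in self.semantic], 1)  # step 2: context
        return x + self.merge(s)                               # residual output
```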
arXiv Detail & Related papers (2022-12-02T13:55:41Z)
- Real-Time Scene Text Detection with Differentiable Binarization and Adaptive Scale Fusion [62.269219152425556]
Segmentation-based methods have drawn extensive attention in the scene text detection field.
We propose a Differentiable Binarization (DB) module that integrates the binarization process into a segmentation network.
An efficient Adaptive Scale Fusion (ASF) module is proposed to improve the scale robustness by fusing features of different scales adaptively.
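The differentiable binarization step itself is a published one-liner: a steep sigmoid approximates the hard threshold (P >= T) so the threshold map T can be trained end-to-end; the paper uses an amplifying factor of k = 50.

```python
import torch

def differentiable_binarization(P: torch.Tensor, T: torch.Tensor, k: float = 50.0):
    """P: probability map; T: learned per-pixel threshold map; k: amplifying
    factor. The soft output approaches hard binarization as k grows."""
    return torch.sigmoid(k * (P - T))
```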
arXiv Detail & Related papers (2022-02-21T15:30:14Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
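A minimal 2D spatial-transformer sketch (the paper targets 3D CT volumes and its localization network differs): a small network predicts an affine transform, initialized to the identity, that re-poses and re-scales the input before segmentation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.loc = nn.Sequential(nn.Conv2d(ch, 8, 7, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 6))
        self.loc[-1].weight.data.zero_()  # start at the identity transform
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.loc(x).view(-1, 2, 3)                  # predicted affine
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)  # resampled input
```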
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Depth-Adaptive Graph Recurrent Network for Text Classification [71.20237659479703]
Sentence-State LSTM (S-LSTM) is a powerful and highly efficient graph recurrent network.
We propose a depth-adaptive mechanism for the S-LSTM, which allows the model to learn how many computational steps to conduct for different words as required.
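The depth-adaptive mechanism reads like per-word adaptive computation; below is a hedged sketch in the ACT style, where `cell` and `halt_head` are hypothetical stand-ins for the S-LSTM update and a halting predictor.

```python
import torch

def adaptive_steps(h, cell, halt_head, max_steps: int = 8, eps: float = 0.01):
    """h: (B, N, D) word states; cell(h) -> updated states; halt_head(h) ->
    (B, N) per-word halting probabilities in [0, 1]."""
    budget = torch.zeros(h.shape[:2], device=h.device)  # accumulated halting mass
    for _ in range(max_steps):
        active = (budget < 1.0 - eps).unsqueeze(-1)     # words still computing
        if not active.any():
            break
        h = torch.where(active, cell(h), h)             # update only active words
        budget = budget + halt_head(h) * active.squeeze(-1)
    return h
```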
arXiv Detail & Related papers (2020-02-29T03:09:55Z)