CrossMed: A Multimodal Cross-Task Benchmark for Compositional Generalization in Medical Imaging
- URL: http://arxiv.org/abs/2511.11034v1
- Date: Fri, 14 Nov 2025 07:41:01 GMT
- Title: CrossMed: A Multimodal Cross-Task Benchmark for Compositional Generalization in Medical Imaging
- Authors: Pooja Singh, Siddhant Ujjain, Tapan Kumar Gandhi, Sandeep Kumar
- Abstract summary: We introduce CrossMed, a benchmark to evaluate compositional generalization (CG) in medical vision-language models. We reformulate four public datasets into a unified visual question answering (VQA) format, resulting in 20,200 multiple-choice QA instances. Models trained on Related splits achieve 83.2 percent classification accuracy and 0.75 segmentation cIoU, while performance drops significantly under Unrelated and zero-overlap conditions.
- Score: 2.9857131541387827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in multimodal large language models have enabled unified processing of visual and textual inputs, offering promising applications in general-purpose medical AI. However, their ability to generalize compositionally across unseen combinations of imaging modality, anatomy, and task type remains underexplored. We introduce CrossMed, a benchmark designed to evaluate compositional generalization (CG) in medical multimodal LLMs using a structured Modality-Anatomy-Task (MAT) schema. CrossMed reformulates four public datasets, CheXpert (X-ray classification), SIIM-ACR (X-ray segmentation), BraTS 2020 (MRI classification and segmentation), and MosMedData (CT classification), into a unified visual question answering (VQA) format, resulting in 20,200 multiple-choice QA instances. We evaluate two open-source multimodal LLMs, LLaVA-Vicuna-7B and Qwen2-VL-7B, on both Related and Unrelated MAT splits, as well as a zero-overlap setting where test triplets share no Modality, Anatomy, or Task with the training data. Models trained on Related splits achieve 83.2 percent classification accuracy and 0.75 segmentation cIoU, while performance drops significantly under Unrelated and zero-overlap conditions, demonstrating the benchmark's difficulty. We also show cross-task transfer, where segmentation performance improves by 7 percent cIoU even when trained using classification-only data. Traditional models (ResNet-50 and U-Net) show modest gains, confirming the broad utility of the MAT framework, while multimodal LLMs uniquely excel at compositional generalization. CrossMed provides a rigorous testbed for evaluating zero-shot, cross-task, and modality-agnostic generalization in medical vision-language models.
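To make the Modality-Anatomy-Task (MAT) schema and the zero-overlap split concrete, the following is a minimal Python sketch under stated assumptions: the triplet values and the filtering rule shown here are illustrative, not the paper's exact labels or construction procedure.

```python
from dataclasses import dataclass
from typing import List

# A hypothetical MAT triplet following the Modality-Anatomy-Task schema
# described in the abstract (field values are illustrative only).
@dataclass(frozen=True)
class MATTriplet:
    modality: str  # e.g. "X-ray", "MRI", "CT"
    anatomy: str   # e.g. "chest", "brain", "lung"
    task: str      # e.g. "classification", "segmentation"

def zero_overlap_split(train: List[MATTriplet],
                       candidates: List[MATTriplet]) -> List[MATTriplet]:
    """Keep only candidate triplets that share no modality, anatomy, or task
    with any training triplet (the 'zero-overlap' condition)."""
    seen_modalities = {t.modality for t in train}
    seen_anatomies = {t.anatomy for t in train}
    seen_tasks = {t.task for t in train}
    return [
        c for c in candidates
        if c.modality not in seen_modalities
        and c.anatomy not in seen_anatomies
        and c.task not in seen_tasks
    ]

# Illustrative usage: chest X-ray classification seen in training,
# brain MRI segmentation held out for the zero-overlap test set.
train = [MATTriplet("X-ray", "chest", "classification")]
candidates = [
    MATTriplet("MRI", "brain", "segmentation"),   # no shared component -> kept
    MATTriplet("CT", "chest", "classification"),  # shares anatomy and task -> dropped
]
print(zero_overlap_split(train, candidates))
```

Under this sketch, the Related and Unrelated splits would relax the filter to allow partial component overlap, which is where the reported performance gap between 83.2 percent accuracy and the degraded zero-overlap scores becomes measurable.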
Related papers
- SurgMLLMBench: A Multimodal Large Language Model Benchmark Dataset for Surgical Scene Understanding [8.20483591990742]
We present SurgMLLMBench, a unified benchmark for developing and evaluating interactive multimodal large language models. It integrates pixel-level instrument segmentation masks and structured VQA annotations across laparoscopic, robot-assisted, and micro-surgical domains. It achieves consistent performance across domains and generalizes effectively to unseen datasets.
arXiv Detail & Related papers (2025-11-26T12:44:51Z) - Medverse: A Universal Model for Full-Resolution 3D Medical Image Segmentation, Transformation and Enhancement [15.28003304776022]
In-context learning offers a promising paradigm for universal medical image analysis. We present Medverse, a universal ICL model for 3D medical imaging trained on 22 datasets. Medverse employs a next-scale autoregressive in-context learning framework that progressively refines predictions from coarse to fine.
arXiv Detail & Related papers (2025-09-11T08:10:49Z) - MedSeqFT: Sequential Fine-tuning Foundation Models for 3D Medical Image Segmentation [55.37355146924576]
MedSeqFT is a sequential fine-tuning framework for medical image analysis. It adapts pre-trained models to new tasks while refining their representational capacity. It consistently outperforms state-of-the-art fine-tuning strategies.
arXiv Detail & Related papers (2025-09-07T15:22:53Z) - PiCME: Pipeline for Contrastive Modality Evaluation and Encoding in the MIMIC Dataset [16.263862005367667]
Multimodal deep learning holds promise for improving clinical prediction by integrating diverse patient data. Contrastive learning facilitates this integration by producing a unified representation that can be reused across tasks. PiCME is the first to scale contrastive learning across all modality combinations in MIMIC.
arXiv Detail & Related papers (2025-07-03T20:45:37Z) - Multimodal Masked Autoencoder Pre-training for 3D MRI-Based Brain Tumor Analysis with Missing Modalities [0.0]
BM-MAE is a masked image modeling pre-training strategy tailored for multimodal MRI data. It seamlessly adapts to any combination of available modalities, extracting rich representations that capture both intra- and inter-modal information. It can quickly and efficiently reconstruct missing modalities, highlighting its practical value.
arXiv Detail & Related papers (2025-05-01T14:51:30Z) - Exploring Compositional Generalization of Multimodal LLMs for Medical Imaging [14.419190976672065]
Multimodal large language models (MLLMs) are increasingly utilized for medical image analysis due to their strong generalization capabilities. The work investigates compositional generalization (CG), which refers to a model's ability to understand novel combinations of learned elements. Experiments confirmed that MLLMs can use CG to understand unseen medical images and identified CG as one of the main drivers of the generalization observed in multi-task training.
arXiv Detail & Related papers (2024-12-28T07:50:00Z) - VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks [60.5257456681402]
We study the potential for building universal embeddings capable of handling a wide range of downstream tasks. We build a series of VLM2Vec models on SoTA VLMs like Phi-3.5-V and LLaVA-1.6 and evaluate them on MMEB's evaluation split. Our results show that VLM2Vec achieves an absolute average improvement of 10% to 20% over existing multimodal embedding models.
arXiv Detail & Related papers (2024-10-07T16:14:05Z) - Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation [113.5002649181103]
We train open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology.
For training, we assemble a large dataset of over 697 thousand radiology image-text pairs.
For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation.
The inference of LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
arXiv Detail & Related papers (2024-03-12T18:12:02Z) - C^2M-DoT: Cross-modal consistent multi-view medical report generation with domain transfer network [67.97926983664676]
We propose C^2M-DoT, a cross-modal consistent multi-view medical report generation framework with a domain transfer network.
C^2M-DoT substantially outperforms state-of-the-art baselines in all metrics.
arXiv Detail & Related papers (2023-10-09T02:31:36Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z) - MedFuse: Multi-modal fusion with clinical time-series data and chest X-ray images [3.6615129560354527]
Multi-modal fusion approaches aim to integrate information from different data sources.
Unlike natural datasets, such as those used in audio-visual applications, healthcare data are often collected asynchronously.
We propose MedFuse, a conceptually simple yet promising LSTM-based fusion module that can accommodate uni-modal as well as multi-modal input.
arXiv Detail & Related papers (2022-07-14T15:59:03Z) - Competence-based Multimodal Curriculum Learning for Medical Report Generation [98.10763792453925]
We propose a Competence-based Multimodal Curriculum Learning framework (CMCL) to alleviate data bias and make the best use of available data.
Specifically, CMCL simulates the learning process of radiologists and optimizes the model in a step-by-step manner.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that CMCL can be incorporated into existing models to improve their performance.
arXiv Detail & Related papers (2022-06-24T08:16:01Z)