A Language-Signal-Vision Multimodal Framework for Multitask Cardiac Analysis
- URL: http://arxiv.org/abs/2508.13072v1
- Date: Mon, 18 Aug 2025 16:43:31 GMT
- Title: A Language-Signal-Vision Multimodal Framework for Multitask Cardiac Analysis
- Authors: Yuting Zhang, Tiantian Geng, Luoying Hao, Xinxing Cheng, Alexander Thorley, Xiaoxia Wang, Wenqi Lu, Sandeep S Hothi, Lei Wei, Zhaowen Qiu, Dipak Kotecha, Jinming Duan
- Abstract summary: Textual Guidance Multimodal fusion for Multiple cardiac tasks (TGMM) was developed. This study systematically explored key features across multiple modalities and elucidated their synergistic contributions in clinical decision-making.
- Score: 37.18952260878238
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Contemporary cardiovascular management involves complex consideration and integration of multimodal cardiac datasets, where each modality provides distinct but complementary physiological characteristics. While the effective integration of multiple modalities could yield a holistic clinical profile that accurately models the true clinical situation with respect to data modalities and their relative weightings, current methodologies remain limited by: 1) the scarcity of patient- and time-aligned multimodal data; 2) reliance on isolated single-modality or rigid multimodal input combinations; 3) alignment strategies that prioritize cross-modal similarity over complementarity; and 4) a narrow single-task focus. In response to these limitations, a comprehensive multimodal dataset was curated for immediate application, integrating laboratory test results, electrocardiograms, and echocardiograms with clinical outcomes. Subsequently, a unified framework, Textual Guidance Multimodal fusion for Multiple cardiac tasks (TGMM), was proposed. TGMM incorporated three key components: 1) a MedFlexFusion module designed to capture the unique and complementary characteristics of medical modalities and dynamically integrate data from diverse cardiac sources and their combinations; 2) a textual guidance module to derive task-relevant representations tailored to diverse clinical objectives, including heart disease diagnosis, risk stratification and information retrieval; and 3) a response module to produce final decisions for all these tasks. Furthermore, this study systematically explored key features across multiple modalities and elucidated their synergistic contributions in clinical decision-making. Extensive experiments showed that TGMM outperformed state-of-the-art methods across multiple clinical tasks, with additional validation confirming its robustness on another public dataset.
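To make the fusion design concrete, here is a minimal PyTorch sketch of a TGMM-style pipeline: per-modality projections into a shared space, fusion over whichever modalities are present, and a task prompt embedding that attends over the fused tokens. All module names, dimensions, and the fusion rule are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a TGMM-style pipeline: modality projections, flexible
# fusion over whichever modalities are present, and text-guided attention
# that pools the fused tokens for a given clinical task. All names,
# dimensions, and the fusion rule are illustrative assumptions.
import torch
import torch.nn as nn

class TextGuidedFusion(nn.Module):
    def __init__(self, d: int = 256):
        super().__init__()
        # One projection per modality into a shared d-dimensional space.
        self.proj = nn.ModuleDict({
            "lab": nn.Linear(64, d),     # tabular laboratory results
            "ecg": nn.Linear(512, d),    # pooled ECG signal features
            "echo": nn.Linear(1024, d),  # pooled echocardiogram features
        })
        # Cross-attention: a task prompt embedding queries the fused tokens.
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, 2)  # e.g. a binary diagnosis head

    def forward(self, inputs: dict, task_prompt: torch.Tensor):
        # Keep only the modalities actually supplied for this patient,
        # so arbitrary input combinations go through the same model.
        tokens = [self.proj[m](x).unsqueeze(1) for m, x in inputs.items()]
        fused = torch.cat(tokens, dim=1)               # (B, n_mod, d)
        # Task-relevant pooling: the prompt attends over modality tokens.
        pooled, _ = self.attn(task_prompt, fused, fused)
        return self.head(pooled.squeeze(1))

model = TextGuidedFusion()
batch = {"lab": torch.randn(2, 64), "ecg": torch.randn(2, 512)}  # echo absent
prompt = torch.randn(2, 1, 256)  # embedding of a task description
print(model(batch, prompt).shape)  # torch.Size([2, 2])
```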
Related papers
- MMedExpert-R1: Strengthening Multimodal Medical Reasoning via Domain-Specific Adaptation and Clinical Guideline Reinforcement [63.82954136824963]
Medical Vision-Language Models excel at perception tasks but struggle with the complex clinical reasoning required in real-world scenarios. We propose a novel reasoning MedVLM that addresses these challenges through domain-specific adaptation and guideline reinforcement.
arXiv Detail & Related papers (2026-01-16T02:32:07Z) - MedPatch: Confidence-Guided Multi-Stage Fusion for Multimodal Clinical Data [0.46040036610482665]
Real-world medical data is heterogeneous in nature, limited in size, and sparse due to missing modalities. Inspired by clinical prediction tasks, we introduce MedPatch, which seamlessly integrates multiple modalities via confidence-guided patching. We evaluate MedPatch using real-world data consisting of clinical time-series data, chest X-ray images, radiology reports, and discharge notes extracted from the MIMIC-IV, MIMIC-CXR, and MIMIC-Notes datasets.
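A minimal sketch of confidence-guided late fusion in the spirit of MedPatch: each modality branch produces logits plus a confidence score, and the final prediction is a confidence-weighted combination that tolerates missing modalities. The weighting rule below is an illustrative assumption, not the paper's exact patching mechanism.

```python
# Confidence-guided late fusion: weight each available modality's prediction
# by its softmax confidence; missing modalities are simply omitted.
import torch
import torch.nn.functional as F

def confidence_fusion(logits_by_mod: dict[str, torch.Tensor]) -> torch.Tensor:
    """Fuse per-modality logits, weighting each by its peak softmax probability."""
    probs, weights = [], []
    for logits in logits_by_mod.values():
        p = F.softmax(logits, dim=-1)
        probs.append(p)
        weights.append(p.max(dim=-1, keepdim=True).values)  # confidence proxy
    probs = torch.stack(probs)                 # (n_mod, B, C)
    weights = torch.stack(weights)             # (n_mod, B, 1)
    weights = weights / weights.sum(dim=0, keepdim=True)
    return (weights * probs).sum(dim=0)        # (B, C)

# Example: time-series and chest X-ray branches available, notes missing.
fused = confidence_fusion({
    "timeseries": torch.randn(4, 2),
    "cxr": torch.randn(4, 2),
})
print(fused.shape)  # torch.Size([4, 2])
```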
arXiv Detail & Related papers (2025-08-07T12:46:26Z) - Heartcare Suite: Multi-dimensional Understanding of ECG with Raw Multi-lead Signal Modeling [50.58126509704037]
Heartcare Suite is a framework for fine-grained electrocardiogram (ECG) understanding. Heartcare-220K is a high-quality, structured, and comprehensive multimodal ECG dataset. Heartcare-Bench is a benchmark to guide the optimization of Medical Multimodal Large Language Models (Med-MLLMs) in ECG scenarios.
arXiv Detail & Related papers (2025-06-06T07:56:41Z) - Continually Evolved Multimodal Foundation Models for Cancer Prognosis [50.43145292874533]
Cancer prognosis is a critical task that involves predicting patient outcomes and survival rates. Previous studies have integrated diverse data modalities, such as clinical notes, medical images, and genomic data, leveraging their complementary information. Existing approaches face two major limitations. First, they struggle to incorporate newly arrived data with varying distributions into training, such as patient records from different hospitals. Second, most multimodal integration methods rely on simplistic concatenation or task-specific pipelines, which fail to capture the complex interdependencies across modalities.
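A short, generic illustration of the fusion limitation noted above: plain concatenation feeds modalities through a linear layer with no explicit interaction, whereas a cross-attention layer lets one modality's features condition on another's. Both blocks are sketches of standard techniques, not any specific paper's architecture.

```python
# Concatenation fusion vs. cross-attention fusion on two token streams.
import torch
import torch.nn as nn

d = 128
img = torch.randn(8, 16, d)   # 16 image-region tokens
txt = torch.randn(8, 32, d)   # 32 clinical-note tokens

# 1) Concatenation fusion: modalities never attend to each other.
concat_fuse = nn.Linear(2 * d, d)
fused_concat = concat_fuse(torch.cat([img.mean(1), txt.mean(1)], dim=-1))

# 2) Cross-attention fusion: image tokens query the note tokens,
#    so the fused features encode cross-modal dependencies.
cross_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
fused_cross, _ = cross_attn(img, txt, txt)

print(fused_concat.shape, fused_cross.shape)  # (8, 128) (8, 16, 128)
```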
arXiv Detail & Related papers (2025-01-30T06:49:57Z) - Contrastive Learning on Multimodal Analysis of Electronic Health Records [15.392566551086782]
We propose a novel feature embedding generative model and design a multimodal contrastive loss to obtain the multimodal EHR feature representation.
Our theoretical analysis demonstrates the effectiveness of multimodal learning compared to single-modality learning.
This connection paves the way for a privacy-preserving algorithm tailored for multimodal EHR feature representation learning.
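As an illustration of the idea, here is a minimal sketch of a multimodal contrastive objective for EHR features: a symmetric InfoNCE loss that pulls together the embeddings of two modality views of the same patient and pushes apart different patients. This is a generic CLIP-style loss, assumed here for illustration rather than the paper's exact objective.

```python
# Symmetric InfoNCE over two modality embeddings of the same patients.
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_a, z_b, temperature: float = 0.07):
    """z_a, z_b: (B, d) embeddings of two modalities for the same patients."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0))       # matching pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

loss = multimodal_contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```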
arXiv Detail & Related papers (2024-03-22T03:01:42Z) - XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z) - Specialty-Oriented Generalist Medical AI for Chest CT Screening [14.31187762890342]
We propose the first-of-its-kind medical multimodal-multitask foundation model (M3FM) with application in lung cancer screening and related tasks.
M3FM consistently outperforms the state-of-the-art single-modal task-specific models.
As a specialty-oriented generalist medical AI model, M3FM paves the way for similar breakthroughs in other areas of medicine.
arXiv Detail & Related papers (2023-04-03T20:19:56Z) - Multi-Modality Cardiac Image Computing: A Survey [18.92646939242613]
Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases.
Fully-automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management.
arXiv Detail & Related papers (2022-08-26T22:19:50Z) - Cardiac Complication Risk Profiling for Cancer Survivors via Multi-View Multi-Task Learning [11.13058781411915]
Complication risk profiling is a key challenge in the healthcare domain due to the complex interaction between heterogeneous entities.
We propose a multi-view multi-task network (MuViTaNet) for predicting the onset of multiple complications.
arXiv Detail & Related papers (2021-09-25T05:00:55Z) - Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code and a modality-invariant content code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
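A minimal sketch of gated fusion over modality-specific codes, tolerant to missing modalities as in the paper's setting: each available modality contributes a feature map, and a learned gate decides how much of each to keep at every position. Module names, the gating rule, and shapes are illustrative assumptions.

```python
# Gated fusion of 3D feature maps from whichever modalities are present.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # One scalar gate per spatial location, computed from the feature map.
        self.gate = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
        # `features` holds codes only for the modalities actually present.
        gates = torch.stack([torch.sigmoid(self.gate(f)) for f in features])
        feats = torch.stack(features)                  # (n_mod, B, C, D, H, W)
        gates = gates / gates.sum(dim=0, keepdim=True).clamp_min(1e-6)
        return (gates * feats).sum(dim=0)              # gated weighted average

fusion = GatedFusion()
# e.g. only two of four MRI sequences available for this scan
available = [torch.randn(1, 32, 8, 16, 16) for _ in range(2)]
print(fusion(available).shape)  # torch.Size([1, 32, 8, 16, 16])
```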
arXiv Detail & Related papers (2020-02-22T14:32:04Z)