MedForge: Building Medical Foundation Models Like Open Source Software Development
- URL: http://arxiv.org/abs/2502.16055v1
- Date: Sat, 22 Feb 2025 03:19:39 GMT
- Title: MedForge: Building Medical Foundation Models Like Open Source Software Development
- Authors: Zheling Tan, Kexin Ding, Jin Gao, Mu Zhou, Dimitris Metaxas, Shaoting Zhang, Dequan Wang
- Abstract summary: We propose Medical Foundation Models Merging (MedForge) to enable community-driven medical foundation model development. MedForge offers a bottom-up model construction mechanism by flexibly merging task-specific Low-Rank Adaptation (LoRA) modules. Our major findings highlight the value of collaborative foundation models in advancing multi-center clinical collaboration effectively and cohesively.
- Score: 24.40523591421525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation models (FMs) have made significant strides in the healthcare domain. Yet data silos and privacy concerns persist in healthcare systems, hindering safe medical data sharing and collaborative model development among institutions. The collection and curation of scalable clinical datasets has increasingly become the bottleneck for training strong FMs. In this study, we propose Medical Foundation Models Merging (MedForge), a cooperative framework that enables community-driven medical foundation model development while preventing leakage of raw patient data and mitigating the issues of synchronized model development across clinical institutions. MedForge offers a bottom-up model construction mechanism by flexibly merging task-specific Low-Rank Adaptation (LoRA) modules, which adapt to downstream tasks while retaining the original model parameters. Through an asynchronous LoRA module integration scheme, the resulting composite model progressively improves its overall performance on various clinical tasks. MedForge shows strong performance on multiple clinical datasets (e.g., breast cancer, lung cancer, and colon cancer) collected from different institutions. Our major findings highlight the value of collaborative foundation models in advancing multi-center clinical collaboration effectively and cohesively. Our code is publicly available at https://github.com/TanZheling/MedForge.
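To make the bottom-up construction described in the abstract concrete, here is a minimal sketch of merging task-specific LoRA modules into a frozen base weight by summing their low-rank updates as they arrive. The class, shapes, and merge coefficients are hypothetical placeholders, not the MedForge implementation; the actual integration scheme is in the paper and the linked repository.

```python
# Hypothetical sketch of bottom-up LoRA merging (not the authors' code).
# Each institution contributes a task-specific LoRA module (A_i, B_i); the base
# weights stay frozen, and modules are folded in asynchronously as they arrive.
import numpy as np

def lora_delta(A, B, alpha=1.0):
    """Low-rank update: delta_W = alpha * B @ A, with A of shape (r, d_in), B of shape (d_out, r)."""
    return alpha * (B @ A)

class CompositeModel:
    def __init__(self, base_weight):
        self.base_weight = base_weight                 # frozen pretrained weights
        self.merged_delta = np.zeros_like(base_weight) # accumulated LoRA updates

    def integrate(self, A, B, alpha=1.0):
        """Asynchronously merge one contributed LoRA module into the composite."""
        self.merged_delta += lora_delta(A, B, alpha)

    def effective_weight(self):
        return self.base_weight + self.merged_delta

# Toy usage: two sites contribute rank-4 modules at different times.
rng = np.random.default_rng(0)
d_out, d_in, r = 16, 32, 4
model = CompositeModel(rng.normal(size=(d_out, d_in)))
for _ in range(2):                                     # modules may arrive in any order
    A = 0.01 * rng.normal(size=(r, d_in))
    B = 0.01 * rng.normal(size=(d_out, r))
    model.integrate(A, B, alpha=0.5)
W = model.effective_weight()                           # composite weights for inference
```

The point of the sketch is that each module only adds a low-rank delta on top of frozen base parameters, so contributions can be accumulated incrementally without retraining or exchanging raw patient data.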
Related papers
- Med-LEGO: Editing and Adapting toward Generalist Medical Image Diagnosis [17.10843389390131]
Med-LEGO is a training-free framework that enables the seamless integration or updating of a generalist CAD model.
Our experiments demonstrate that Med-LEGO outperforms existing methods in both cross-domain and in-domain medical tasks.
arXiv Detail & Related papers (2025-03-03T04:27:11Z)
- Continually Evolved Multimodal Foundation Models for Cancer Prognosis [50.43145292874533]
Cancer prognosis is a critical task that involves predicting patient outcomes and survival rates.
Previous studies have integrated diverse data modalities, such as clinical notes, medical images, and genomic data, leveraging their complementary information.
Existing approaches face two major limitations. First, they struggle to incorporate newly arrived data with varying distributions into training, such as patient records from different hospitals.
Second, most multimodal integration methods rely on simplistic concatenation or task-specific pipelines, which fail to capture the complex interdependencies across modalities.
arXiv Detail & Related papers (2025-01-30T06:49:57Z)
- MedCoDi-M: A Multi-Prompt Foundation Model for Multimodal Medical Data Generation [22.908801443059758]
We present MedCoDi-M, a model for multimodal medical data generation.
We benchmark it against five competitors on the MIMIC-CXR dataset.
We assess the utility of MedCoDi-M in addressing key challenges in the medical field.
arXiv Detail & Related papers (2025-01-08T16:53:56Z)
- FedMetaMed: Federated Meta-Learning for Personalized Medication in Distributed Healthcare Systems [7.32609591220333]
We introduce Federated Meta-Learning for Personalized Medication (FedMetaMed).
FedMetaMed combines federated learning and meta-learning to create models that adapt to diverse patient data across healthcare systems.
We show that FedMetaMed outperforms state-of-the-art FL methods, with superior generalization even on out-of-distribution cohorts.
arXiv Detail & Related papers (2024-12-05T03:36:55Z)
- Towards Foundation Models for Critical Care Time Series [38.09906416210531]
We introduce a harmonized dataset for sequence modeling and transfer learning research, representing the first large-scale collection to include core treatment variables.
Future plans involve expanding this dataset to support further advancements in transfer learning and the development of scalable, generalizable models for critical healthcare applications.
arXiv Detail & Related papers (2024-11-25T12:49:55Z)
- FEDKIM: Adaptive Federated Knowledge Injection into Medical Foundation Models [54.09244105445476]
This study introduces a novel knowledge injection approach, FedKIM, to scale the medical foundation model within a federated learning framework.
FedKIM leverages lightweight local models to extract healthcare knowledge from private data and integrates this knowledge into a centralized foundation model.
Our experiments across twelve tasks in seven modalities demonstrate the effectiveness of FedKIM in various settings.
arXiv Detail & Related papers (2024-08-17T15:42:29Z)
- FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection [83.54960238236548]
FEDMEKI not only preserves data privacy but also enhances the capability of medical foundation models.
FEDMEKI allows medical foundation models to learn from a broader spectrum of medical knowledge without direct data exposure.
arXiv Detail & Related papers (2024-08-17T15:18:56Z)
- OpenMEDLab: An Open-source Platform for Multi-modality Foundation Models in Medicine [55.29668193415034]
We present OpenMEDLab, an open-source platform for multi-modality foundation models.
It encapsulates solutions from pioneering attempts at prompting and fine-tuning large language and vision models for frontline clinical and bioinformatic applications.
It opens access to a group of pre-trained foundation models for various medical image modalities, clinical text, protein engineering, etc.
arXiv Detail & Related papers (2024-02-28T03:51:02Z)
- A Distributed Privacy Preserving Model for the Detection of Alzheimer's Disease [0.0]
This paper introduces a HIPAA-compliant framework that can train on distributed data.
It then proposes a multimodal vertical federated learning (VFL) model for Alzheimer's Disease (AD) detection.
The proposed VFL architecture enables collaborative learning across diverse sources of medical data.
arXiv Detail & Related papers (2023-12-15T22:09:04Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-initialization approach for hybrid medical image segmentation models.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
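As a toy illustration of the generative assumption in the MultiView ICA paper above (each subject's data modeled as a linear combination of shared independent sources plus noise), the snippet below simulates that model with hypothetical dimensions; it is a sketch of the stated assumption, not the authors' estimation code.

```python
# Toy simulation of the MultiView ICA generative assumption (not the authors' code):
# each subject i observes X_i = A_i @ S + N_i, where S are shared independent
# sources, A_i is a subject-specific mixing matrix, and N_i is subject-specific noise.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_sources, n_sensors, n_samples = 3, 5, 20, 1000

S = rng.laplace(size=(n_sources, n_samples))             # shared non-Gaussian sources
views = []
for _ in range(n_subjects):
    A_i = rng.normal(size=(n_sensors, n_sources))         # subject-specific mixing
    N_i = 0.1 * rng.normal(size=(n_sensors, n_samples))   # subject-specific noise
    views.append(A_i @ S + N_i)                            # observed data for subject i

# Fitting the model would recover per-subject unmixing matrices W_i such that
# W_i @ X_i approximately equals the shared sources S for every subject.
```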
This list is automatically generated from the titles and abstracts of the papers on this site.