UNICON: UNIfied CONtinual Learning for Medical Foundational Models
- URL: http://arxiv.org/abs/2508.14024v1
- Date: Tue, 19 Aug 2025 17:31:32 GMT
- Title: UNICON: UNIfied CONtinual Learning for Medical Foundational Models
- Authors: Mohammad Areeb Qazi, Munachiso S Nwadike, Ibrahim Almakky, Mohammad Yaqub, Numan Saeed,
- Abstract summary: In medical imaging, the scarcity of data makes pre-training for every domain, modality, or task challenging. Continual learning offers a solution by fine-tuning a model sequentially on different domains or tasks. We propose UNIfied CONtinual Learning for Medical Foundational Models (UNICON), a framework that enables seamless adaptation of foundation models.
- Score: 0.8672882547905405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundational models are trained on extensive datasets to capture the general trends of a domain. However, in medical imaging, the scarcity of data makes pre-training for every domain, modality, or task challenging. Continual learning offers a solution by fine-tuning a model sequentially on different domains or tasks, enabling it to integrate new knowledge without requiring large datasets for each training phase. In this paper, we propose UNIfied CONtinual Learning for Medical Foundational Models (UNICON), a framework that enables the seamless adaptation of foundation models to diverse domains, tasks, and modalities. Unlike conventional adaptation methods that treat these changes in isolation, UNICON provides a unified, perpetually expandable framework. Through careful integration, we show that foundation models can dynamically expand across imaging modalities, anatomical regions, and clinical objectives without catastrophic forgetting or task interference. Empirically, we validate our approach by adapting a chest CT foundation model initially trained for classification to a prognosis and segmentation task. Our results show improved performance across both additional tasks. Furthermore, we continually incorporated PET scans and achieved a 5% improvement in Dice score compared to respective baselines. These findings establish that foundation models are not inherently constrained to their initial training scope but can evolve, paving the way toward generalist AI models for medical imaging.
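The abstract's core idea, fine-tuning one shared model on a sequence of tasks while discouraging catastrophic forgetting, can be illustrated with a minimal numerical sketch. This is not the UNICON method: the quadratic anchor toward the pre-task weights below is a generic stand-in for whatever anti-forgetting mechanism the paper actually uses, and the linear model is purely illustrative.

```python
import numpy as np

# Sequential fine-tuning sketch: one shared parameter matrix W is
# adapted to a series of tasks. An L2 anchor toward the pre-task
# weights (a generic anti-forgetting penalty, NOT UNICON's mechanism)
# discourages drift away from previously acquired knowledge.

def fit_task(W, X, y, anchor, lam=1.0, lr=0.1, steps=200):
    """Gradient descent on mean squared error + lam * ||W - anchor||^2."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ W - y) / len(X) + 2 * lam * (W - anchor)
        W = W - lr * grad
    return W

rng = np.random.default_rng(0)
W = np.zeros((3, 1))                        # shared "backbone" parameters
for task in range(2):                       # two sequential domains/tasks
    X = rng.normal(size=(50, 3))
    y = X @ rng.normal(size=(3, 1))         # synthetic task data
    W = fit_task(W, X, y, anchor=W.copy())  # anchor = weights before task
print(W.shape)  # (3, 1)
```

Raising `lam` trades plasticity on the new task for stability on the old one, the central tension any continual-learning framework must manage.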
Related papers
- U-Harmony: Enhancing Joint Training for Segmentation Models with Universal Harmonization [30.093279965784188]
We propose a joint training method called Universal Harmonization (U-Harmony), which can be integrated into deep learning-based architectures with a domain-gated head.
By integrating U-Harmony, our approach sequentially normalizes and then denormalizes feature distributions to mitigate domain-specific variations.
Moreover, our framework also supports universal modality adaptation, allowing the seamless learning of new imaging modalities and anatomical classes.
arXiv Detail & Related papers (2026-01-21T02:43:39Z) - MAFM^3: Modular Adaptation of Foundation Models for Multi-Modal Medical AI [3.1920084309415007]
We propose MAFM3, a framework that enables a single foundation model to expand into diverse domains, tasks, and modalities.
Unlike conventional adaptation methods that treat each new task or modality in isolation, MAFM3 provides a unified and expandable framework for efficient multitask and multimodality adaptation.
arXiv Detail & Related papers (2025-11-14T12:10:59Z) - Federated Foundation Model for GI Endoscopy Images [7.9528382609447545]
Foundation models offer a promising solution by learning general-purpose representations, which can be finetuned for specific tasks.
Foundation model training typically requires extensive datasets, and while hospitals generate large volumes of data, privacy restrictions prevent direct data sharing.
We propose an FL framework for training foundation models for gastroendoscopy imaging, enabling data to remain within local hospital environments while contributing to a shared model.
arXiv Detail & Related papers (2025-05-30T01:18:17Z) - Steady Progress Beats Stagnation: Mutual Aid of Foundation and Conventional Models in Mixed Domain Semi-Supervised Medical Image Segmentation [36.07607318734544]
We introduce a Synergistic training framework for Foundation and Conventional models (SynFoC).
We observe that a conventional model trained from scratch has the ability to correct the high-confidence mispredictions of the foundation model.
We demonstrate the superiority of our method across four public multi-domain datasets.
arXiv Detail & Related papers (2025-03-21T10:03:32Z) - Med-LEGO: Editing and Adapting toward Generalist Medical Image Diagnosis [17.10843389390131]
Med-LEGO is a training-free framework that enables the seamless integration or updating of a generalist CAD model.
Our experiments demonstrate that Med-LEGO outperforms existing methods in both cross-domain and in-domain medical tasks.
arXiv Detail & Related papers (2025-03-03T04:27:11Z) - Continually Evolved Multimodal Foundation Models for Cancer Prognosis [50.43145292874533]
Cancer prognosis is a critical task that involves predicting patient outcomes and survival rates.
Previous studies have integrated diverse data modalities, such as clinical notes, medical images, and genomic data, leveraging their complementary information.
Existing approaches face two major limitations. First, they struggle to incorporate newly arrived data with varying distributions into training, such as patient records from different hospitals.
Second, most multimodal integration methods rely on simplistic concatenation or task-specific pipelines, which fail to capture the complex interdependencies across modalities.
arXiv Detail & Related papers (2025-01-30T06:49:57Z) - Efficient MedSAMs: Segment Anything in Medical Images on Laptop [69.28565867103542]
We organized the first international competition dedicated to promptable medical image segmentation.
The top teams developed lightweight segmentation foundation models and implemented an efficient inference pipeline.
The best-performing algorithms have been incorporated into the open-source software with a user-friendly interface to facilitate clinical adoption.
arXiv Detail & Related papers (2024-12-20T17:33:35Z) - LoRKD: Low-Rank Knowledge Decomposition for Medical Foundation Models [59.961172635689664]
"Knowledge Decomposition" aims to improve the performance on specific medical tasks.
We propose a novel framework named Low-Rank Knowledge Decomposition (LoRKD).
LoRKD explicitly separates gradients from different tasks by incorporating low-rank expert modules and efficient knowledge separation convolution.
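The low-rank expert idea above can be sketched in a few lines; the shapes and names here are illustrative assumptions, not LoRKD's actual architecture. Each task adds rank-r factors on top of a frozen shared weight, so task-specific updates live in their own low-dimensional path.

```python
import numpy as np

# Low-rank "expert module" sketch (illustrative, not LoRKD's exact
# design): the shared weight W is frozen, and a task trains only its
# rank-r factors (A, B). The effective weight is W + A @ B, but it is
# never materialized; the extra path costs O(d * rank) per sample.

d_in, d_out, rank = 16, 8, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d_in, d_out))        # frozen shared weight

def expert_forward(x, A, B):
    """Forward pass through the shared weight plus a low-rank expert."""
    return x @ W + (x @ A) @ B

A = rng.normal(size=(d_in, rank)) * 0.01  # task-specific factor
B = np.zeros((rank, d_out))               # zero-init: expert starts as a no-op

x = rng.normal(size=(4, d_in))
out = expert_forward(x, A, B)
print(out.shape)  # (4, 8)
```

Initializing `B` to zero means each new expert begins as an exact no-op on the shared model, so adding a task cannot disturb earlier behavior before its own training starts.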
arXiv Detail & Related papers (2024-09-29T03:56:21Z) - FEDKIM: Adaptive Federated Knowledge Injection into Medical Foundation Models [54.09244105445476]
This study introduces a novel knowledge injection approach, FedKIM, to scale the medical foundation model within a federated learning framework.
FedKIM leverages lightweight local models to extract healthcare knowledge from private data and integrates this knowledge into a centralized foundation model.
Our experiments across twelve tasks in seven modalities demonstrate the effectiveness of FedKIM in various settings.
arXiv Detail & Related papers (2024-08-17T15:42:29Z) - Incremental Learning for Heterogeneous Structure Segmentation in Brain Tumor MRI [11.314017805825685]
We propose a divergence-aware dual-flow module with balanced rigidity and plasticity branches to decouple old and new tasks.
We evaluate our framework on a brain tumor segmentation task with continually changing target domains.
arXiv Detail & Related papers (2023-05-30T20:39:03Z) - Domain Generalization on Medical Imaging Classification using Episodic Training with Task Augmentation [62.49837463676111]
We propose a novel scheme of episodic training with task augmentation on medical imaging classification.
Motivated by the limited number of source domains in real-world medical deployment, we consider the unique task-level overfitting.
arXiv Detail & Related papers (2021-06-13T03:56:59Z) - Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records [57.75125067744978]
We propose a data augmentation method to facilitate domain adaptation.
Adversarially generated samples are used during domain adaptation.
Results confirm the effectiveness of our method and its generality across different tasks.
arXiv Detail & Related papers (2021-01-13T03:20:20Z)
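The adversarial-augmentation idea from the last entry can be sketched with an FGSM-style perturbation; this is an assumed stand-in, not the paper's exact generation procedure. Source samples are perturbed along the sign of the loss gradient and mixed back into the training pool.

```python
import numpy as np

# FGSM-style data augmentation sketch (illustrative; the paper's actual
# adversarial generation may differ): perturb each sample in the
# direction that increases a logistic-style loss for a linear scorer w,
# then append the perturbed copies to the training set.

def fgsm_augment(X, y, w, eps=0.1):
    """Perturb samples along sign(d loss / d x) for logistic loss."""
    margins = (X @ w) * y                 # per-sample margin, y in {-1, +1}
    # d(loss)/dx for logistic loss = -sigmoid(-margin) * y * w
    grad_x = -(1.0 / (1.0 + np.exp(margins)))[:, None] * y[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)      # L_inf-bounded perturbation

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.choice([-1.0, 1.0], size=20)
w = rng.normal(size=5)
X_adv = fgsm_augment(X, y, w)
X_train = np.vstack([X, X_adv])           # augmented training pool
print(X_train.shape)  # (40, 5)
```

The `eps` bound caps each feature's perturbation, so the augmented samples stay close to the source distribution while probing directions where the current model is most fragile.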
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.