U-Harmony: Enhancing Joint Training for Segmentation Models with Universal Harmonization
- URL: http://arxiv.org/abs/2601.14605v1
- Date: Wed, 21 Jan 2026 02:43:39 GMT
- Authors: Weiwei Ma, Xiaobing Yu, Peijie Qiu, Jin Yang, Pan Xiao, Xiaoqi Zhao, Xiaofeng Liu, Tomo Miyazaki, Shinichiro Omachi, Yongsong Huang
- Abstract summary: We propose a joint training method called Universal Harmonization (U-Harmony), which can be integrated into deep learning-based architectures with a domain-gated head. By integrating U-Harmony, our approach sequentially normalizes and then denormalizes feature distributions to mitigate domain-specific variations. More appealingly, our framework also supports universal modality adaptation, allowing the seamless learning of new imaging modalities and anatomical classes.
- Score: 30.093279965784188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In clinical practice, medical segmentation datasets are often limited and heterogeneous, with variations in modalities, protocols, and anatomical targets across institutions. Existing deep learning models struggle to jointly learn from such diverse data, often sacrificing either generalization or domain-specific knowledge. To overcome these challenges, we propose a joint training method called Universal Harmonization (U-Harmony), which can be integrated into deep learning-based architectures with a domain-gated head, enabling a single segmentation model to learn from heterogeneous datasets simultaneously. By integrating U-Harmony, our approach sequentially normalizes and then denormalizes feature distributions to mitigate domain-specific variations while preserving original dataset-specific knowledge. More appealingly, our framework also supports universal modality adaptation, allowing the seamless learning of new imaging modalities and anatomical classes. Extensive experiments on cross-institutional brain lesion datasets demonstrate the effectiveness of our approach, establishing a new benchmark for robust and adaptable 3D medical image segmentation models in real-world clinical settings.
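The abstract describes a normalize-then-denormalize pipeline: features from each dataset are mapped to a shared scale for joint learning, then mapped back so dataset-specific statistics are preserved. The paper does not publish code here, so the following is only a minimal, hypothetical sketch of that round trip; all function names and the per-domain statistics are illustrative assumptions, not the authors' implementation.

```python
import math

def domain_stats(features):
    """Per-domain mean and standard deviation of a flat feature batch (assumed)."""
    mean = sum(features) / len(features)
    var = sum((x - mean) ** 2 for x in features) / len(features)
    return mean, math.sqrt(var + 1e-8)

def normalize(features, mean, std):
    """Harmonization step: map domain-specific features to a shared scale."""
    return [(x - mean) / std for x in features]

def denormalize(features, mean, std):
    """Inverse step: restore the domain's original distribution, so
    dataset-specific knowledge survives (the 'domain-gated' idea)."""
    return [x * std + mean for x in features]

# Round trip: harmonize, run shared processing (omitted), restore.
batch = [3.0, 5.0, 7.0]
mean, std = domain_stats(batch)
shared = normalize(batch, mean, std)      # zero-mean, unit-variance view
restored = denormalize(shared, mean, std)  # back on the domain's own scale
```

In the actual model the shared segmentation backbone would operate between the two steps, with a separate (mean, std) pair or affine head per domain; the sketch only shows that the mapping is invertible.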
Related papers
- Federated Learning for Cross-Modality Medical Image Segmentation via Augmentation-Driven Generalization [0.0]
In this work, we consider a realistic FL scenario where each client holds single-modality data (CT or MRI). We evaluate convolution-based spatial augmentation, frequency-domain manipulation, domain-specific normalization, and global intensity nonlinear (GIN) augmentation. Our federated approach achieves 93-98% of centralized training accuracy, demonstrating strong cross-modality generalization without compromising data privacy.
arXiv Detail & Related papers (2026-02-24T11:13:01Z)
- scMRDR: A scalable and flexible framework for unpaired single-cell multi-omics data integration [53.683726781791385]
We introduce a scalable and flexible generative framework called single-cell Multi-omics Regularized Disentangled Representations (scMRDR) for unpaired multi-omics integration. Our method achieves excellent performance on benchmark datasets in terms of batch correction, modality alignment, and biological signal preservation.
arXiv Detail & Related papers (2025-10-28T21:28:39Z)
- Adversarial Versus Federated: An Adversarial Learning based Multi-Modality Cross-Domain Federated Medical Segmentation [30.99222543580891]
Federated learning enables collaborative training of machine learning models among different clients. We propose a new Federated Domain Adaptation (FedDA) segmentation training framework. Our proposed FedDA achieves cross-domain federated aggregation, endowing single-modality clients with cross-modality processing capabilities.
arXiv Detail & Related papers (2025-09-28T14:26:04Z)
- UNICON: UNIfied CONtinual Learning for Medical Foundational Models [0.8672882547905405]
In medical imaging, the scarcity of data makes pre-training for every domain, modality, or task challenging. Continual learning offers a solution by fine-tuning a model sequentially on different domains or tasks. We propose UNIfied CONtinual Learning for Medical Foundational Models (UNICON), a framework that enables seamless adaptation of foundation models.
arXiv Detail & Related papers (2025-08-19T17:31:32Z)
- Semantic Alignment of Unimodal Medical Text and Vision Representations [1.8848810602776873]
General-purpose AI models can exhibit similar latent spaces when processing semantically related data. We show how semantic alignment can bridge general-purpose AI with specialised medical knowledge. We introduce a novel zero-shot classification approach for unimodal vision encoders that leverages semantic alignment across modalities.
arXiv Detail & Related papers (2025-03-06T14:28:17Z)
- LoRKD: Low-Rank Knowledge Decomposition for Medical Foundation Models [59.961172635689664]
"Knowledge Decomposition" aims to improve performance on specific medical tasks.
We propose a novel framework named Low-Rank Knowledge Decomposition (LoRKD).
LoRKD explicitly separates gradients from different tasks by incorporating low-rank expert modules and efficient knowledge separation convolution.
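The "low-rank expert modules" mentioned above follow the general pattern of adding a rank-r correction on top of shared weights. This is only a toy sketch of that pattern under assumed names (it is not LoRKD's published code, and LoRKD additionally uses a knowledge-separation convolution not shown here):

```python
def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_expert(W, B, A):
    """Effective weight W + B @ A: a rank-r, task-specific expert stacked
    on shared weights W (B is d_out x r, A is r x d_in). Gradients for a
    task flow only through its own (B, A) pair, separating task knowledge."""
    delta = matmul(B, A)
    return [[w + d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

shared_W = [[1.0, 0.0], [0.0, 1.0]]  # shared backbone weight (toy 2x2)
B = [[1.0], [2.0]]                   # rank-1 expert factors for one task
A = [[3.0, 4.0]]
task_W = apply_expert(shared_W, B, A)
```

The rank r controls the per-task parameter budget: each expert adds only r * (d_out + d_in) parameters instead of a full d_out x d_in matrix.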
arXiv Detail & Related papers (2024-09-29T03:56:21Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [63.43592895652803]
Federated learning allows distributed medical institutions to collaboratively learn a shared prediction model with privacy protection.
At clinical deployment, however, models trained in federated learning can still suffer a performance drop when applied to completely unseen hospitals outside the federation.
We present a novel approach, named Episodic Learning in Continuous Frequency Space (ELCFS), for this problem.
The effectiveness of our method is demonstrated with superior performance over state-of-the-art methods and in-depth ablation experiments on two medical image segmentation tasks.
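ELCFS works in the Fourier domain, exchanging amplitude ("style") information across sites while keeping phase ("content"). A toy 1-D sketch of that idea, assuming a full amplitude swap (the paper interpolates only the low-frequency amplitude band of 2-D images, and all names here are illustrative):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real 1-D signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]

def amplitude_swap(src, ref):
    """Recombine src's phase with ref's amplitude spectrum: a toy version
    of cross-site frequency-space style mixing."""
    S, R = dft(src), dft(ref)
    mixed = [abs(r) * cmath.exp(1j * cmath.phase(s)) for s, r in zip(S, R)]
    return idft(mixed)

signal = [1.0, 2.0, 3.0, 4.0]
same = amplitude_swap(signal, signal)  # swapping with itself is a no-op
```

Because phase carries the structural content, the swapped signal keeps the source's anatomy-like layout while adopting the reference site's intensity statistics.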
arXiv Detail & Related papers (2021-03-10T13:05:23Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation dataset MMWHS 2017 show that our method achieves large improvements on CT segmentation.
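Mutual distillation trains two networks that teach each other through their softened predictions. This sketch shows only the symmetric soft-label term commonly used for that purpose; the paper's full objective also includes segmentation losses, and all names here are assumptions rather than the authors' code:

```python
import math

def softmax(logits, temperature):
    """Softened class probabilities at the given temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_distillation_loss(logits_a, logits_b, temperature=2.0):
    """Symmetric soft-label term: each network is pushed towards the
    other's softened predictions, so knowledge flows both ways."""
    pa = softmax(logits_a, temperature)
    pb = softmax(logits_b, temperature)
    return kl_divergence(pa, pb) + kl_divergence(pb, pa)

loss_same = mutual_distillation_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])
loss_diff = mutual_distillation_loss([1.0, 2.0, 0.5], [0.2, 0.1, 3.0])
```

The temperature softens both distributions so that small logit differences still carry gradient signal; the loss is zero only when the two networks agree.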
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
- Multi-site fMRI Analysis Using Privacy-preserving Federated Learning and Domain Adaptation: ABIDE Results [13.615292855384729]
Training a high-quality deep learning model requires aggregating a significant amount of patient information.
Due to the need to protect the privacy of patient data, it is hard to assemble a central database from multiple institutions.
Federated learning allows for population-level models to be trained without centralizing entities' data.
arXiv Detail & Related papers (2020-01-16T04:49:33Z)
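The summary above notes that federated learning trains population-level models without centralizing data. The standard mechanism is a server-side weighted average of client parameters (FedAvg-style); the following is a minimal sketch of one aggregation round, not the paper's actual privacy-preserving protocol, and the site names and sizes are made up:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client parameter vectors: the server sees only
    parameters, never the raw imaging data held at each site."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two sites with different cohort sizes contribute one round of updates.
site_a = [0.0, 1.0]   # parameters from site A (80 subjects, hypothetical)
site_b = [1.0, 3.0]   # parameters from site B (20 subjects, hypothetical)
global_model = federated_average([site_a, site_b], [80, 20])
```

Weighting by cohort size keeps the global model from being dominated by small sites; privacy-preserving variants additionally add noise or encryption to the shared parameters before aggregation.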
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.