Cross-Modal Vertical Federated Learning for MRI Reconstruction
- URL: http://arxiv.org/abs/2306.02673v1
- Date: Mon, 5 Jun 2023 08:07:01 GMT
- Title: Cross-Modal Vertical Federated Learning for MRI Reconstruction
- Authors: Yunlu Yan, Hong Wang, Yawen Huang, Nanjun He, Lei Zhu, Yuexiang Li,
Yong Xu, Yefeng Zheng
- Abstract summary: Federated learning enables multiple hospitals to cooperatively learn a shared model without privacy disclosure.
We develop a novel framework, namely Federated Consistent Regularization constrained Feature Disentanglement (Fed-CRFD), for boosting MRI reconstruction.
Our method can fully exploit the multi-source data from hospitals while alleviating the domain shift problem.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables multiple hospitals to cooperatively learn a shared
model without privacy disclosure. Existing methods commonly assume that the
data from different hospitals have the same modalities.
However, such a setting is difficult to fully satisfy in practical
applications, since the imaging guidelines may be different between hospitals,
which makes the number of individuals with the same set of modalities limited.
To this end, we formulate this practical-yet-challenging cross-modal vertical
federated learning task, in which the data from multiple hospitals have
different modalities with a small amount of multi-modality data collected from
the same individuals. To tackle such a situation, we develop a novel framework,
namely Federated Consistent Regularization constrained Feature Disentanglement
(Fed-CRFD), for boosting MRI reconstruction by effectively exploring the
overlapping samples (individuals with multi-modalities) and solving the domain
shift problem caused by different modalities. In particular, our Fed-CRFD
involves an intra-client feature disentanglement scheme to decouple data into
modality-invariant and modality-specific features, where the modality-invariant
features are leveraged to mitigate the domain shift problem. In addition, a
cross-client latent representation consistency constraint is proposed
specifically for the overlapping samples to further align the
modality-invariant features extracted from different modalities. Hence, our
method can fully exploit the multi-source data from hospitals while alleviating
the domain shift problem. Extensive experiments on two typical MRI datasets
demonstrate that our network clearly outperforms state-of-the-art MRI
reconstruction methods. The source code will be publicly released upon the
publication of this work.
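Pending that release, the sketch below illustrates the two mechanisms the abstract describes: an encoder disentangling each scan into modality-invariant and modality-specific codes, and an MSE consistency term that aligns the invariant codes of overlapping samples across modalities. All module shapes, names, and the loss weight lam are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the authors' released code) of
# feature disentanglement plus a cross-client consistency constraint.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Splits an undersampled MR image into two latent codes."""
    def __init__(self, channels=32):
        super().__init__()
        # Branch for modality-invariant (anatomy-like) features.
        self.invariant = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        # Branch for modality-specific (contrast-like) features.
        self.specific = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.invariant(x), self.specific(x)

class Decoder(nn.Module):
    """Reconstructs the fully-sampled image from both codes."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, z_inv, z_spec):
        return self.net(torch.cat([z_inv, z_spec], dim=1))

def client_loss(encoder, decoder, x_under, x_full,
                z_inv_other=None, lam=0.1):
    """Reconstruction loss; for an overlapping sample, also pull this
    modality's invariant code toward the other modality's code."""
    z_inv, z_spec = encoder(x_under)
    loss = F.l1_loss(decoder(z_inv, z_spec), x_full)
    if z_inv_other is not None:  # individual scanned in both modalities
        loss = loss + lam * F.mse_loss(z_inv, z_inv_other.detach())
    return loss
```

Under these assumptions, one natural federated deployment shares and aggregates only the invariant branch across hospitals while keeping the specific branch local.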
Related papers
- FedMM: Federated Multi-Modal Learning with Modality Heterogeneity in Computational Pathology (arXiv, 2024-02-24)
Federated Multi-Modal (FedMM) is a learning framework that trains multiple single-modal feature extractors to enhance subsequent classification performance.
FedMM notably outperforms two baselines in accuracy and AUC metrics.
- Multi-Modal Federated Learning for Cancer Staging over Non-IID Datasets with Unbalanced Modalities (arXiv, 2024-01-07)
In this work, we introduce a novel FL architecture designed to accommodate not only the heterogeneity of data samples, but also the inherent heterogeneity/non-uniformity of data modalities across institutions.
We propose a solution by devising a distributed gradient blending and proximity-aware client weighting strategy tailored for multi-modal FL.
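As one hedged reading of "proximity-aware client weighting", the sketch below upweights client updates that lie close to the mean update during server aggregation; the softmax rule and temperature are assumptions, not the paper's exact strategy.

```python
# Illustrative proximity-aware client weighting (assumed form):
# clients whose updates lie close to the consensus get larger weights.
import torch

def aggregate(client_updates, temperature=1.0):
    """client_updates: list of flat parameter tensors, one per client."""
    stacked = torch.stack(client_updates)      # (n_clients, n_params)
    mean = stacked.mean(dim=0)
    dists = torch.norm(stacked - mean, dim=1)  # proximity to consensus
    weights = torch.softmax(-dists / temperature, dim=0)
    return (weights[:, None] * stacked).sum(dim=0)
```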
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis (arXiv, 2023-08-24)
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
However, acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
- Federated Pseudo Modality Generation for Incomplete Multi-Modal MRI Reconstruction (arXiv, 2023-08-20)
Fed-PMG is a novel communication-efficient federated learning framework.
We propose a pseudo modality generation mechanism to recover the missing modality for each single-modal client.
Our approach can effectively complete the missing modality within an acceptable communication cost.
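The pseudo modality generation idea can be pictured as a small image-to-image network that maps a client's available contrast to a stand-in for the missing one; the architecture and L1 objective below are illustrative assumptions, not the paper's communication-efficient mechanism.

```python
# Sketch of a pseudo modality generator (assumed design): map the
# available modality (e.g., T1) to a pseudo missing one (e.g., T2).
import torch.nn as nn
import torch.nn.functional as F

class PseudoModalityGenerator(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, x_available):
        return self.net(x_available)

def generator_loss(gen, x_available, x_missing):
    # Supervised only where paired multi-modal scans exist.
    return F.l1_loss(gen(x_available), x_missing)
```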
- DISA: DIfferentiable Similarity Approximation for Universal Multimodal Registration (arXiv, 2023-07-19)
We propose a generic framework for creating expressive cross-modal descriptors.
We achieve this by approximating existing metrics with a dot-product in the feature space of a small convolutional neural network.
Our method is several orders of magnitude faster than local patch-based metrics and can be directly applied in clinical settings.
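In outline, this amounts to embedding both images with a small CNN and scoring cross-modal similarity as a per-pixel dot product of normalized features; the layer sizes and normalization below are assumptions.

```python
# Sketch: approximate a multimodal similarity metric with a dot
# product between small-CNN feature maps (layer sizes are assumed).
import torch
import torch.nn as nn

class DescriptorNet(nn.Module):
    def __init__(self, channels=16, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, dim, 3, padding=1))

    def forward(self, x):
        f = self.net(x)
        return f / (f.norm(dim=1, keepdim=True) + 1e-8)

def similarity(net, fixed, moving):
    """Mean per-pixel dot product; differentiable w.r.t. `moving`,
    so it can drive a registration optimizer directly."""
    return (net(fixed) * net(moving)).sum(dim=1).mean()
```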
- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space (arXiv, 2021-03-10)
Federated learning allows distributed medical institutions to collaboratively learn a shared prediction model with privacy protection.
At clinical deployment, however, models trained in federated learning can still suffer a performance drop when applied to completely unseen hospitals outside the federation.
We present a novel approach, named as Episodic Learning in Continuous Frequency Space (ELCFS), for this problem.
The effectiveness of our method is demonstrated by superior performance over state-of-the-art methods and in-depth ablation experiments on two medical image segmentation tasks.
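Learning in continuous frequency space can be sketched as interpolating the amplitude spectrum of a local image with that of another client's image while keeping the local phase; the blend below is a simplified illustration of that idea, not the paper's full episodic scheme.

```python
# Sketch of continuous frequency-space interpolation: mix amplitude
# spectra across clients, keep the local phase (lambda handling and
# low-frequency masking details are simplified assumptions).
import numpy as np

def freq_interpolate(img_local, img_other, lam):
    fft_l = np.fft.fft2(img_local)
    fft_o = np.fft.fft2(img_other)
    amp = (1 - lam) * np.abs(fft_l) + lam * np.abs(fft_o)
    phase = np.angle(fft_l)  # phase carries local structure
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
```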
- Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning (arXiv, 2021-03-03)
Deep learning methods have been shown to produce superior performance on MR image reconstruction.
These methods require large amounts of data, which are difficult to collect and share due to the high cost of acquisition and medical data privacy regulations.
We propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy.
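The federated baseline underlying this setup is the standard FedAvg aggregation sketched below, with site updates weighted by local dataset size; the paper's institution-specific contributions layer on top of this and are not reproduced here.

```python
# Sketch of standard FedAvg aggregation across institutions:
# average client weights, weighted by local dataset size.
import torch

def fedavg(client_states, client_sizes):
    """client_states: list of state_dicts; client_sizes: samples per site."""
    total = float(sum(client_sizes))
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(
            (n / total) * s[key].float()
            for s, n in zip(client_states, client_sizes))
    return avg
```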
- Cross-Domain Segmentation with Adversarial Loss and Covariate Shift for Biomedical Imaging (arXiv, 2020-06-08)
This manuscript aims to implement a novel model that can learn robust representations from cross-domain data by encapsulating distinct and shared patterns from different modalities.
Tests on CT and MRI liver data acquired in routine clinical trials show that the proposed model outperforms all other baselines by a large margin.
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion (arXiv, 2020-02-22)
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-invariant content codes and modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
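Gated fusion can be sketched as a learned sigmoid gate that reweights each available modality's feature map before combining them, which is what makes such a model tolerant to missing modalities; the 1x1-convolution gate below is an assumed design, not the paper's exact module.

```python
# Sketch of gated feature fusion across modalities (gate design is
# an assumption): a sigmoid gate weights each modality's feature map
# before averaging, so missing modalities can simply be dropped.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, 1)

    def forward(self, features):
        """features: list of (B, C, H, W) maps, one per available modality."""
        gated = [torch.sigmoid(self.gate(f)) * f for f in features]
        return sum(gated) / len(gated)
```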
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data (arXiv, 2020-02-09)
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net consistently improves performance across all datasets and outperforms state-of-the-art methods for multi-site learning.