Federated Pseudo Modality Generation for Incomplete Multi-Modal MRI Reconstruction
- URL: http://arxiv.org/abs/2308.10910v1
- Date: Sun, 20 Aug 2023 03:38:59 GMT
- Title: Federated Pseudo Modality Generation for Incomplete Multi-Modal MRI Reconstruction
- Authors: Yunlu Yan, Chun-Mei Feng, Yuexiang Li, Rick Siow Mong Goh, Lei Zhu
- Abstract summary: Fed-PMG is a novel communication-efficient federated learning framework.
We propose a pseudo modality generation mechanism to recover the missing modality for each single-modal client.
Our approach can effectively complete the missing modality within an acceptable communication cost.
- Score: 26.994070472726357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While multi-modal learning has been widely used for MRI reconstruction, it
relies on paired multi-modal data which is difficult to acquire in real
clinical scenarios. Especially in the federated setting, the common situation
is that several medical institutions only have single-modal data, termed the
modality missing issue. Therefore, it is infeasible to deploy a standard
federated learning framework in such conditions. In this paper, we propose a
novel communication-efficient federated learning framework, namely Fed-PMG, to
address the missing modality challenge in federated multi-modal MRI
reconstruction. Specifically, we utilize a pseudo modality generation mechanism
to recover the missing modality for each single-modal client by sharing the
distribution information of the amplitude spectrum in frequency space. However,
sharing the original amplitude spectra incurs heavy communication costs. To
reduce this cost, we introduce a clustering scheme that projects the set of
amplitude spectra into a finite set of cluster centroids, which are shared
among the clients. With this design, our
approach can effectively complete the missing modality within an acceptable
communication cost. Extensive experiments demonstrate that our proposed method
attains performance comparable to that of the ideal scenario, i.e., one in which
all clients have the full set of modalities. The source code will be released.
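The frequency-space mechanism the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' released implementation: it assumes a naive k-means over flattened amplitude spectra (function names `amplitude_phase`, `cluster_amplitudes`, and `pseudo_modality` are hypothetical), and it recombines a shared amplitude centroid with a client's local phase to synthesize a pseudo image of the missing modality.

```python
import numpy as np

def amplitude_phase(img):
    # Decompose an image into amplitude and phase spectra in frequency space.
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def cluster_amplitudes(amplitudes, k, iters=10, seed=0):
    # Naive k-means over flattened amplitude spectra. In the federated
    # setting, only the k centroids (not the raw spectra of individual
    # scans) would be shared, keeping communication cost bounded.
    rng = np.random.default_rng(seed)
    flat = np.stack([a.ravel() for a in amplitudes])
    centroids = flat[rng.choice(len(flat), size=k, replace=False)]
    for _ in range(iters):
        # Assign each spectrum to its nearest centroid.
        dists = ((flat[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Update each non-empty cluster's centroid.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = flat[labels == j].mean(0)
    return [c.reshape(amplitudes[0].shape) for c in centroids]

def pseudo_modality(local_img, shared_amplitude):
    # Recombine a shared amplitude centroid with the client's own phase,
    # then invert the FFT to obtain a pseudo image of the missing modality.
    _, phase = amplitude_phase(local_img)
    spec = shared_amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spec))
```

Because the amplitude spectrum carries low-level style/contrast statistics while phase carries structure, a single-modal client can plausibly approximate a missing modality's appearance from shared centroids without ever exchanging image data.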
Related papers
- Accelerated Multi-Contrast MRI Reconstruction via Frequency and Spatial Mutual Learning [50.74383395813782]
We propose a novel Frequency and Spatial Mutual Learning Network (FSMNet) to explore global dependencies across different modalities.
The proposed FSMNet achieves state-of-the-art performance for the Multi-Contrast MR Reconstruction task with different acceleration factors.
arXiv Detail & Related papers (2024-09-21T12:02:47Z)
- Federated Modality-specific Encoders and Multimodal Anchors for Personalized Brain Tumor Segmentation [29.584319651813754]
Federated modality-specific encoders and multimodal anchors (FedMEMA) are proposed.
FedMEMA employs an exclusive encoder for each modality to account for the inter-modal heterogeneity.
FedMEMA is validated on the BraTS 2020 benchmark for multimodal brain tumor segmentation.
arXiv Detail & Related papers (2024-03-18T14:02:53Z)
- FedMM: Federated Multi-Modal Learning with Modality Heterogeneity in Computational Pathology [3.802258033231335]
Federated Multi-Modal (FedMM) is a learning framework that trains multiple single-modal feature extractors to enhance subsequent classification performance.
FedMM notably outperforms two baselines in accuracy and AUC metrics.
arXiv Detail & Related papers (2024-02-24T16:58:42Z)
- Learning Unseen Modality Interaction [54.23533023883659]
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences.
We pose the problem of unseen modality interaction and introduce a first solution.
It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved.
arXiv Detail & Related papers (2023-06-22T10:53:10Z)
- Cross-Modal Vertical Federated Learning for MRI Reconstruction [42.527873703840996]
Federated learning enables multiple hospitals to cooperatively learn a shared model without privacy disclosure.
We develop a novel framework, namely Federated Consistent Regularization constrained Feature Disentanglement (Fed-CRFD), for boosting MRI reconstruction.
Our method can fully exploit the multi-source data from hospitals while alleviating the domain shift problem.
arXiv Detail & Related papers (2023-06-05T08:07:01Z)
- Learning Federated Visual Prompt in Null Space for MRI Reconstruction [83.71117888610547]
We propose a new algorithm, FedPR, to learn federated visual prompts in the null space of global prompt for MRI reconstruction.
FedPR significantly outperforms state-of-the-art FL algorithms with 6% of communication costs when given the limited amount of local training data.
arXiv Detail & Related papers (2023-03-28T17:46:16Z)
- NestedFormer: Nested Modality-Aware Transformer for Brain Tumor Segmentation [29.157465321864265]
We propose a novel Nested Modality-Aware Transformer (NestedFormer) to explore the intra-modality and inter-modality relationships of multi-modal MRIs for brain tumor segmentation.
Built on the transformer-based multi-encoder and single-decoder structure, we perform nested multi-modal fusion for high-level representations of different modalities.
arXiv Detail & Related papers (2022-08-31T14:04:25Z)
- A Novel Unified Conditional Score-based Generative Framework for Multi-modal Medical Image Completion [54.512440195060584]
We propose the Unified Multi-Modal Conditional Score-based Generative Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM).
UMM-CSGM employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn a comprehensive set of cross-modal conditional distributions.
Experiments on BraTS19 dataset show that the UMM-CSGM can more reliably synthesize the heterogeneous enhancement and irregular area in tumor-induced lesions.
arXiv Detail & Related papers (2022-07-07T16:57:21Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.