One Model to Unite Them All: Personalized Federated Learning of
Multi-Contrast MRI Synthesis
- URL: http://arxiv.org/abs/2207.06509v1
- Date: Wed, 13 Jul 2022 20:14:16 GMT
- Title: One Model to Unite Them All: Personalized Federated Learning of
Multi-Contrast MRI Synthesis
- Authors: Onat Dalmaz, Usama Mirza, Gökberk Elmas, Muzaffer Özbey, Salman UH Dar, Emir Ceyani, Salman Avestimehr, Tolga Çukur
- Abstract summary: Learning-based MRI translation involves a synthesis model that maps a source-contrast onto a target-contrast image.
Here we introduce the first personalized FL method for MRI synthesis (pFLSynth).
pFLSynth is based on an adversarial model equipped with a mapper that produces latents specific to individual sites and source-target contrasts.
- Score: 5.3963856146595095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based MRI translation involves a synthesis model that maps a
source-contrast onto a target-contrast image. Multi-institutional
collaborations are key to training synthesis models across broad datasets, yet
centralized training involves privacy risks. Federated learning (FL) is a
collaboration framework that instead adopts decentralized training to avoid
sharing imaging data and mitigate privacy concerns. However, FL-trained models
can be impaired by the inherent heterogeneity in the distribution of imaging
data. On the one hand, implicit shifts in image distribution are evident across
sites, even for a common translation task with fixed source-target
configuration. On the other hand, explicit shifts arise within and across sites when
diverse translation tasks with varying source-target configurations are
prescribed. To improve reliability against domain shifts, here we introduce the
first personalized FL method for MRI Synthesis (pFLSynth). pFLSynth is based on
an adversarial model equipped with a mapper that produces latents specific to
individual sites and source-target contrasts. It leverages novel
personalization blocks that adaptively tune the statistics and weighting of
feature maps across the generator based on these latents. To further promote
site-specificity, partial model aggregation is employed over downstream layers
of the generator while upstream layers are retained locally. As such, pFLSynth
enables training of a unified synthesis model that can reliably generalize
across multiple sites and translation tasks. Comprehensive experiments on
multi-site datasets clearly demonstrate the enhanced performance of pFLSynth
against prior federated methods in multi-contrast MRI synthesis.
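The partial model aggregation described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: parameter names and the flat-list parameter layout are assumptions, and only the "downstream" parameters are federated-averaged while "upstream" parameters stay site-specific.

```python
# Hypothetical sketch of pFLSynth-style partial aggregation (names are
# illustrative): downstream generator parameters are averaged across sites,
# while upstream parameters remain local to each site (personalized).

def partial_fedavg(site_models, downstream_keys):
    """Average only the downstream parameters across sites; all other
    parameters are left untouched and stay site-specific."""
    n = len(site_models)
    # Element-wise mean of each shared (downstream) parameter list.
    averaged = {}
    for key in downstream_keys:
        averaged[key] = [
            sum(model[key][i] for model in site_models) / n
            for i in range(len(site_models[0][key]))
        ]
    # Write the averaged downstream parameters back to every site.
    for model in site_models:
        for key in downstream_keys:
            model[key] = list(averaged[key])
    return site_models

# Toy example: two sites, parameters stored as flat lists per layer group.
site_a = {"upstream": [1.0, 1.0], "downstream": [0.0, 2.0]}
site_b = {"upstream": [3.0, 3.0], "downstream": [4.0, 2.0]}
models = partial_fedavg([site_a, site_b], downstream_keys=["downstream"])
print(models[0]["downstream"])  # [2.0, 2.0] -- shared across sites
print(models[0]["upstream"])    # [1.0, 1.0] -- still site-specific
```

In the actual method the retained upstream layers would also be modulated by the site- and task-specific latents from the mapper; this sketch only captures the aggregation split.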
Related papers
- Generative Autoregressive Transformers for Model-Agnostic Federated MRI Reconstruction [5.519160766363227]
FedGAT is a model-agnostic FL technique based on generative autoregressive transformers.
It decentralizes the training of a global generative prior that captures the distribution of multi-site MR images.
It supports flexible collaborations while enjoying superior within-site and across-site reconstruction performance.
arXiv Detail & Related papers (2025-02-06T21:45:16Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
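A minimal sketch of the layer-wise idea, under the assumption (not stated in the abstract's detail) that backpropagation yields gradients from the last layer backward, so a straggler that stops early still has gradients for the deeper layers; the server then averages each layer over whichever devices reached it. Function and variable names are hypothetical.

```python
# Illustrative layer-wise aggregation under stragglers (assumed behavior,
# not the SALF authors' code). Gradients are ordered from the LAST layer
# backward, so stragglers submit shorter lists covering only deeper layers.

def layerwise_aggregate(partial_grads, num_layers):
    """partial_grads: per-device lists of layer gradients, ordered from the
    last layer backward; stragglers submit shorter lists."""
    agg = []
    for depth in range(num_layers):  # depth 0 = last layer
        contribs = [g[depth] for g in partial_grads if len(g) > depth]
        # Average over the devices that reached this layer; if none did,
        # leave the layer's global update at zero.
        agg.append(sum(contribs) / len(contribs) if contribs else 0.0)
    return agg  # still ordered from the last layer backward

# Three devices on a 3-layer model; device 2 is a straggler that only
# finished the gradient of the final layer.
grads = [[3.0, 6.0, 9.0], [6.0, 3.0, 3.0], [9.0]]
print(layerwise_aggregate(grads, 3))  # [6.0, 4.5, 6.0]
```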
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state MRI functional (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- UniDiff: Advancing Vision-Language Models with Generative and Discriminative Learning [86.91893533388628]
This paper presents UniDiff, a unified multi-modal model that integrates image-text contrastive learning (ITC), text-conditioned image synthesis learning (IS), and reciprocal semantic consistency modeling (RSC).
UniDiff demonstrates versatility in both multi-modal understanding and generative tasks.
arXiv Detail & Related papers (2023-06-01T15:39:38Z)
- Personalized Federated Learning via Gradient Modulation for Heterogeneous Text Summarization [21.825321314169642]
We propose FedSUMM, a federated learning text summarization scheme that allows users to share the global model in a cooperative learning manner without sharing raw data.
FedSUMM achieves faster model convergence as a personalized FL (PFL) algorithm for task-specific text summarization.
arXiv Detail & Related papers (2023-04-23T03:18:46Z)
- Tensor Decomposition based Personalized Federated Learning [12.420951968273574]
Federated learning (FL) is a new distributed machine learning framework that can achieve reliable collaborative training without collecting users' private data.
Due to FL's frequent communication and average aggregation strategy, it faces challenges scaling to statistically diverse data and large-scale models.
We propose a personalized FL framework, named Tensor Decomposition based Personalized Federated learning (TDPFed), in which we design a novel tensorized local model with tensorized linear layers and convolutional layers to reduce the communication cost.
arXiv Detail & Related papers (2022-08-27T08:09:14Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Style-Hallucinated Dual Consistency Learning for Domain Generalized Semantic Segmentation [117.3856882511919]
We propose the Style-HAllucinated Dual consistEncy learning (SHADE) framework to handle domain shift.
Our SHADE yields significant improvement and outperforms state-of-the-art methods by 5.07% and 8.35% on the average mIoU of three real-world datasets.
arXiv Detail & Related papers (2022-04-06T02:49:06Z)
- Federated Learning of Generative Image Priors for MRI Reconstruction [5.3963856146595095]
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, albeit privacy risks arise during cross-site sharing of imaging data.
We introduce a novel method for MRI reconstruction based on Federated learning of Generative IMage Priors (FedGIMP).
FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and subject-specific injection of the imaging operator.
arXiv Detail & Related papers (2022-02-08T22:17:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.