Federated Learning of Generative Image Priors for MRI Reconstruction
- URL: http://arxiv.org/abs/2202.04175v1
- Date: Tue, 8 Feb 2022 22:17:57 GMT
- Title: Federated Learning of Generative Image Priors for MRI Reconstruction
- Authors: Gokberk Elmas, Salman UH Dar, Yilmaz Korkmaz, Emir Ceyani, Burak Susam, Muzaffer Özbey, Salman Avestimehr, Tolga Çukur
- Abstract summary: Multi-institutional efforts can facilitate training of deep MRI reconstruction models, though privacy risks arise during cross-site sharing of imaging data.
We introduce a novel method for MRI reconstruction based on Federated learning of Generative IMage Priors (FedGIMP).
FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and subject-specific injection of the imaging operator.
- Score: 5.3963856146595095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-institutional efforts can facilitate training of deep MRI
reconstruction models, though privacy risks arise during cross-site sharing of
imaging data. Federated learning (FL) has recently been introduced to address
imaging data. Federated learning (FL) has recently been introduced to address
privacy concerns by enabling distributed training without transfer of imaging
data. Existing FL methods for MRI reconstruction employ conditional models to
map from undersampled to fully-sampled acquisitions via explicit knowledge of
the imaging operator. Since conditional models generalize poorly across
different acceleration rates or sampling densities, imaging operators must be
fixed between training and testing, and they are typically matched across
sites. To improve generalization and flexibility in multi-institutional
collaborations, here we introduce a novel method for MRI reconstruction based
on Federated learning of Generative IMage Priors (FedGIMP). FedGIMP leverages a
two-stage approach: cross-site learning of a generative MRI prior, and
subject-specific injection of the imaging operator. The global MRI prior is
learned via an unconditional adversarial model that synthesizes high-quality MR
images based on latent variables. Specificity in the prior is preserved via a
mapper subnetwork that produces site-specific latents. During inference, the
prior is combined with subject-specific imaging operators to enable
reconstruction, and further adapted to individual test samples by minimizing
data-consistency loss. Comprehensive experiments on multi-institutional
datasets clearly demonstrate enhanced generalization performance of FedGIMP
against site-specific and federated methods based on conditional models, as
well as traditional reconstruction methods.
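To make the two-stage pipeline concrete, here is a minimal, hypothetical sketch of the inference stage only: a pretrained unconditional generator is adapted to a single test scan by minimizing a data-consistency loss between the synthesized image, mapped through a subject-specific imaging operator, and the acquired k-space data. The toy generator, single-coil Cartesian operator, and optimization settings below are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch (illustrative, not the authors' code) of inference-time
# adaptation of a generative MRI prior via a data-consistency loss, assuming a
# single-coil Cartesian imaging operator A(x) = mask * FFT2(x). The toy
# generator stands in for FedGIMP's adversarial prior and site-specific mapper.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps a latent vector to a complex-valued image (real/imag channels)."""
    def __init__(self, latent_dim=64, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * img_size * img_size),
        )

    def forward(self, z):
        out = self.net(z).view(-1, 2, self.img_size, self.img_size)
        return torch.complex(out[:, 0], out[:, 1])        # (B, H, W) complex image

def imaging_operator(x, mask):
    """A(x): 2D FFT followed by undersampling with a binary k-space mask."""
    return mask * torch.fft.fft2(x, norm="ortho")

def reconstruct(generator, y, mask, steps=200, lr=1e-2):
    """Adapt the latent (and generator weights) to the measured k-space data y."""
    z = torch.randn(1, 64, requires_grad=True)
    opt = torch.optim.Adam([z, *generator.parameters()], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean(torch.abs(imaging_operator(generator(z), mask) - y) ** 2)
        loss.backward()
        opt.step()
    return generator(z).detach()

# Toy usage: random mask and synthetic measurements.
generator = ToyGenerator()
mask = (torch.rand(64, 64) < 0.33).float()                 # ~3x undersampling
y = imaging_operator(torch.randn(1, 64, 64, dtype=torch.complex64), mask)
print(reconstruct(generator, y, mask, steps=50).shape)
```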
Related papers
- Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction [3.9681863841849623]
We build a joint edge optimization model that not only incorporates individual regularizers specific to both the MR image and the edges, but also enforces a co-regularizer to effectively establish a stronger correlation between them.
Specifically, the edge information is defined through a non-edge probability map to guide the image reconstruction during the optimization process.
Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective inherent a priori information.
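As a rough illustration of the non-edge probability map idea (a hypothetical sketch under one possible reading, not the paper's co-regularizer): the map takes values near 1 in flat regions and near 0 at edges, and is used as a spatial weight so the smoothness penalty is relaxed where edges are likely.
```python
# Hypothetical sketch (not the paper's exact formulation): a non-edge
# probability map used as a spatial weight on a smoothness penalty, so that
# smoothing is encouraged in flat regions and relaxed near edges.
import torch
import torch.nn.functional as F

def non_edge_probability(x, tau=0.1):
    """Roughly 1 in flat regions and near 0 at strong edges."""
    gx = x[..., :, 1:] - x[..., :, :-1]          # horizontal finite differences
    gy = x[..., 1:, :] - x[..., :-1, :]          # vertical finite differences
    grad_mag = F.pad(gx.abs(), (0, 1)) + F.pad(gy.abs(), (0, 0, 0, 1))
    return torch.exp(-grad_mag / tau)

def edge_weighted_smoothness(x, p):
    """Penalize image gradients only where the non-edge probability is high."""
    gx = x[..., :, 1:] - x[..., :, :-1]
    gy = x[..., 1:, :] - x[..., :-1, :]
    return (p[..., :, 1:] * gx.abs()).mean() + (p[..., 1:, :] * gy.abs()).mean()

# Toy usage: weight the penalty with a map computed from a current estimate.
x = torch.rand(1, 1, 64, 64)
loss = edge_weighted_smoothness(x, non_edge_probability(x))
print(float(loss))
```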
arXiv Detail & Related papers (2024-05-09T05:51:33Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Iterative Data Refinement for Self-Supervised MR Image Reconstruction [18.02961646651716]
We propose a data refinement framework for self-supervised MR image reconstruction.
We first analyze the reason for the performance gap between self-supervised and supervised methods.
Then, we design an effective self-supervised training data refinement method to reduce this data bias.
arXiv Detail & Related papers (2022-11-24T06:57:16Z)
- Stable Deep MRI Reconstruction using Generative Priors [13.400444194036101]
We propose a novel deep neural network-based regularizer which is trained in a generative setting on reference magnitude images only.
The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods.
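Learned-regularizer methods of this kind typically plug the trained network into a variational objective of the form $\min_x \|Ax - y\|^2 + \lambda R_\theta(x)$. The sketch below optimizes the image directly with a stand-in regularizer network and a toy operator; it illustrates only the objective, not the authors' architecture or training procedure.
```python
# Hypothetical sketch: reconstruction with a learned regularizer R_theta,
# min_x ||A x - y||^2 + lam * R_theta(x), optimized over the image x.
# The regularizer network and forward operator here are illustrative stand-ins.
import torch
import torch.nn as nn

regularizer = nn.Sequential(           # stand-in for a trained prior network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def A(x, mask):                        # single-coil Cartesian forward operator
    return mask * torch.fft.fft2(x.to(torch.complex64), norm="ortho")

mask = (torch.rand(64, 64) < 0.33).float()
y = A(torch.rand(1, 1, 64, 64), mask)  # synthetic measurements

x = torch.zeros(1, 1, 64, 64, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    data_term = torch.mean(torch.abs(A(x, mask) - y) ** 2)
    prior_term = torch.mean(regularizer(x) ** 2)
    (data_term + 0.01 * prior_term).backward()
    opt.step()
```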
arXiv Detail & Related papers (2022-10-25T08:34:29Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
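Model-guided unfolding networks generally alternate a gradient step on the data-fidelity term, computed with the known observation operator, and a learned refinement step. The following is a generic unrolled sketch under that assumption, not the MGDUN architecture itself.
```python
# Generic unrolled reconstruction sketch (not MGDUN itself): each stage takes a
# gradient step on ||A x - y||^2 using the observation operator A, then applies
# a small learned CNN as a refinement/proximal step.
import torch
import torch.nn as nn

class UnrolledStage(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))          # learned step size
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x, y, A, AT):
        x = x - self.step * AT(A(x) - y)                     # data-fidelity step
        return x + self.refine(x)                            # learned refinement

class UnrolledNet(nn.Module):
    def __init__(self, n_stages=5):
        super().__init__()
        self.stages = nn.ModuleList(UnrolledStage() for _ in range(n_stages))

    def forward(self, y, A, AT):
        x = AT(y)                                            # zero-filled initialization
        for stage in self.stages:
            x = stage(x, y, A, AT)
        return x

# Toy usage with a masked-FFT forward operator and its adjoint.
mask = (torch.rand(64, 64) < 0.33).float()
A = lambda x: mask * torch.fft.fft2(x.to(torch.complex64), norm="ortho")
AT = lambda k: torch.fft.ifft2(mask * k, norm="ortho").real
y = A(torch.rand(1, 1, 64, 64))
print(UnrolledNet()(y, A, AT).shape)
```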
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
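For context, here is a minimal ConvLSTM cell of the kind such recurrent reconstruction networks are built from (a generic sketch, not the proposed ConvLR model or its golden-angle radial sampling pipeline).
```python
# Minimal ConvLSTM cell sketch: convolutional gates in place of the fully
# connected gates of a standard LSTM, so spatial structure is preserved
# while hidden state is carried across time frames.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Toy usage: process a sequence of frames, carrying hidden state across time.
cell = ConvLSTMCell(in_ch=2, hid_ch=16)
h = torch.zeros(1, 16, 64, 64)
c = torch.zeros(1, 16, 64, 64)
for frame in torch.rand(8, 1, 2, 64, 64):      # 8 frames, real/imag channels
    h, (h, c) = cell(frame, (h, c))
print(h.shape)
```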
arXiv Detail & Related papers (2022-03-28T14:03:45Z)
- Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance.
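The query/key formulation above corresponds to cross-attention between features of the under-sampled image and the reference image; a minimal sketch under that reading (not the complete Texture Transformer Module) could look like this.
```python
# Minimal cross-attention sketch for reference-based reconstruction: features
# of the under-sampled image act as queries, features of the fully sampled
# reference act as keys and values. This illustrates the query/key idea only.
import math
import torch
import torch.nn as nn

class ReferenceCrossAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, under_feats, ref_feats):
        # under_feats: (B, N, C) tokens from the under-sampled image
        # ref_feats:   (B, M, C) tokens from the reference image
        q, k, v = self.q(under_feats), self.k(ref_feats), self.v(ref_feats)
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1]), dim=-1)
        return under_feats + attn @ v      # transfer reference texture to the target

# Toy usage with 16x16 = 256 tokens per image.
layer = ReferenceCrossAttention(dim=64)
print(layer(torch.rand(1, 256, 64), torch.rand(1, 256, 64)).shape)
```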
arXiv Detail & Related papers (2021-11-18T03:06:25Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image detail in the image domain.
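A rough sketch of the two-branch idea, with independent encoders for the fully sampled auxiliary and under-sampled target modalities followed by feature fusion; this is an illustrative reading, not the actual MANet architecture or its hybrid-domain losses.
```python
# Illustrative two-branch aggregation sketch (not the actual MANet): each
# modality is encoded by its own network, and the features are fused to
# predict the reconstructed target image.
import torch
import torch.nn as nn

def small_encoder(in_ch=1, feat=16):
    return nn.Sequential(
        nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
        nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
    )

class TwoBranchFusion(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        self.target_enc = small_encoder(feat=feat)     # under-sampled target branch
        self.aux_enc = small_encoder(feat=feat)        # fully sampled auxiliary branch
        self.fuse = nn.Conv2d(2 * feat, 1, 3, padding=1)

    def forward(self, target, aux):
        feats = torch.cat([self.target_enc(target), self.aux_enc(aux)], dim=1)
        return target + self.fuse(feats)               # residual reconstruction

model = TwoBranchFusion()
print(model(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)).shape)
```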
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [0.0]
We introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER).
A zero-shot reconstruction is performed on undersampled test data, where inference is performed by optimizing network parameters.
Experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against several state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-05-15T02:01:21Z)
- Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning [62.17532253489087]
Deep learning methods have been shown to produce superior performance on MR image reconstruction.
These methods require large amounts of data, which are difficult to collect and share due to the high cost of acquisition and medical data privacy regulations.
We propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy.
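Operationally, such an FL setup usually follows federated averaging: each site trains a local copy of the reconstruction model on its own data, and only model weights are aggregated at a server. Below is a minimal, generic sketch with a placeholder model and loaders, not the paper's exact protocol.
```python
# Generic federated averaging sketch: each institution updates a local copy of
# the reconstruction model on its own data, and only the weights (never the
# images) are averaged at the server. Model and data loaders are placeholders.
import copy
import torch
import torch.nn as nn

def local_update(model, loader, epochs=1, lr=1e-3):
    model = copy.deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, target in loader:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), target).backward()
            opt.step()
    return model.state_dict()

def federated_round(global_model, site_loaders):
    site_states = [local_update(global_model, loader) for loader in site_loaders]
    avg_state = {
        name: torch.stack([s[name].float() for s in site_states]).mean(dim=0)
        for name in site_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model

# Toy usage: a tiny model and two "sites" with synthetic (input, target) pairs.
model = nn.Conv2d(1, 1, 3, padding=1)
site_loaders = [
    [(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)) for _ in range(4)]
    for _ in range(2)
]
for round_idx in range(3):
    model = federated_round(model, site_loaders)
```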
arXiv Detail & Related papers (2021-03-03T03:04:40Z)