Generalizing Supervised Deep Learning MRI Reconstruction to Multiple and
Unseen Contrasts using Meta-Learning Hypernetworks
- URL: http://arxiv.org/abs/2307.06771v1
- Date: Thu, 13 Jul 2023 14:22:59 GMT
- Title: Generalizing Supervised Deep Learning MRI Reconstruction to Multiple and
Unseen Contrasts using Meta-Learning Hypernetworks
- Authors: Sriprabha Ramanarayanan, Arun Palla, Keerthi Ram, Mohanasankar
Sivaprakasam
- Abstract summary: This work aims to develop a multimodal meta-learning model for image reconstruction.
Our proposed model has hypernetworks that evolve to generate mode-specific weights.
Experiments on MRI reconstruction show that our model exhibits superior reconstruction performance over joint training.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Meta-learning has recently emerged as a data-efficient learning technique
for various medical imaging operations and has helped advance contemporary deep
learning models. Furthermore, meta-learning enhances knowledge generalization
across imaging tasks by learning both shared and discriminative weights for
various configurations of imaging tasks. However, existing meta-learning models
attempt to learn a single set of weight initializations for a neural network,
which can be restrictive for multimodal data. This work aims
to develop a multimodal meta-learning model for image reconstruction, which
augments meta-learning with evolutionary capabilities to encompass diverse
acquisition settings of multimodal data. Our proposed model, KM-MAML
(Kernel Modulation-based Multimodal Meta-Learning), has hypernetworks that
evolve to generate mode-specific weights. These weights provide the
mode-specific inductive bias for multiple modes by re-calibrating each kernel
of the base network for image reconstruction via a low-rank kernel modulation
operation. We incorporate gradient-based meta-learning (GBML) in the contextual
space to update the weights of the hypernetworks for different modes. The
hypernetworks and the reconstruction network in the GBML setting provide
discriminative mode-specific features and low-level image features,
respectively. Experiments on multi-contrast MRI reconstruction show that our
model (i) exhibits superior reconstruction performance over joint training,
other meta-learning methods, and context-specific MRI reconstruction methods,
and (ii) shows better adaptation capabilities, with improvement margins of 0.5 dB
in PSNR and 0.01 in SSIM. In addition, a representation analysis with U-Net shows that
kernel modulation infuses 80% of mode-specific representation changes in the
high-resolution layers. Our source code is available at
https://github.com/sriprabhar/KM-MAML/.
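The low-rank kernel modulation described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the `hypernetwork` function, the mode embedding, and the rank-1 factorization are hypothetical stand-ins for the learned components in KM-MAML.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base reconstruction-network conv kernel: (out_channels, in_channels, k, k)
out_ch, in_ch, k = 8, 4, 3
W = rng.standard_normal((out_ch, in_ch, k, k))

def hypernetwork(mode_embedding, d_out, d_in):
    """Toy hypernetwork: maps a mode embedding to two low-rank factors.

    Hypothetical stand-in for the paper's learned hypernetworks; here the
    linear maps are fixed random matrices purely for illustration.
    """
    A = rng.standard_normal((d_out, mode_embedding.size))
    B = rng.standard_normal((d_in, mode_embedding.size))
    u = A @ mode_embedding  # per-output-channel factor, shape (out_channels,)
    v = B @ mode_embedding  # per-input-channel factor, shape (in_channels,)
    return u, v

def modulate(W, u, v):
    """Rank-1 kernel modulation: rescale each (out, in) kernel slice."""
    M = np.outer(u, v)              # (out_channels, in_channels), rank 1
    return W * M[:, :, None, None]  # broadcast over the k x k spatial dims

# A per-mode context vector (e.g. encoding contrast / acquisition setting).
mode = rng.standard_normal(6)
u, v = hypernetwork(mode, out_ch, in_ch)
W_mode = modulate(W, u, v)

assert W_mode.shape == W.shape  # same kernel shape, recalibrated per mode
```

In the paper, the hypernetwork weights themselves are updated with gradient-based meta-learning across modes; the sketch above only shows the modulation mechanics of re-calibrating a base kernel with mode-specific low-rank factors.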
Related papers
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - Enhancing CT Image synthesis from multi-modal MRI data based on a
multi-task neural network framework [16.864720020158906]
We propose a versatile multi-task neural network framework, based on an enhanced Transformer U-Net architecture.
We decompose the traditional problem of synthesizing CT images into distinct subtasks.
To enhance the framework's versatility in handling multi-modal data, we expand the model with multiple image channels.
arXiv Detail & Related papers (2023-12-13T18:22:38Z) - Deep Unfolding Convolutional Dictionary Model for Multi-Contrast MRI
Super-resolution and Reconstruction [23.779641808300596]
We propose a multi-contrast convolutional dictionary (MC-CDic) model under the guidance of the optimization algorithm.
We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a deep CDic model.
Experimental results demonstrate the superior performance of the proposed MC-CDic model against existing SOTA methods.
arXiv Detail & Related papers (2023-09-03T13:18:59Z) - Convolutional neural network based on sparse graph attention mechanism
for MRI super-resolution [0.34410212782758043]
Medical image super-resolution (SR) reconstruction using deep learning techniques can enhance lesion analysis and assist doctors in improving diagnostic efficiency and accuracy.
Existing deep learning-based SR methods rely on convolutional neural networks (CNNs), which inherently limit the expressive capabilities of these models.
We propose an A-network that utilizes multiple convolution operator (MCO) feature extraction modules to extract image features.
arXiv Detail & Related papers (2023-05-29T06:14:22Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical
Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Deep Image Clustering with Contrastive Learning and Multi-scale Graph
Convolutional Networks [58.868899595936476]
This paper presents a new deep clustering approach termed image clustering with contrastive learning and multi-scale graph convolutional networks (IcicleGCN).
Experiments on multiple image datasets demonstrate the superior clustering performance of IcicleGCN over the state-of-the-art.
arXiv Detail & Related papers (2022-07-14T19:16:56Z) - Transformer-empowered Multi-scale Contextual Matching and Aggregation
for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is a promising way to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z) - Image-specific Convolutional Kernel Modulation for Single Image
Super-resolution [85.09413241502209]
To address this issue, we propose a novel image-specific convolutional kernel modulation (IKM) method.
We exploit the global contextual information of image or feature to generate an attention weight for adaptively modulating the convolutional kernels.
Experiments on single image super-resolution show that the proposed methods achieve superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2021-11-16T11:05:10Z) - Online Meta Adaptation for Variable-Rate Learned Image Compression [40.8361915315201]
This work addresses two major issues of end-to-end learned image compression (LIC) based on deep neural networks.
We introduce an online meta-learning (OML) setting for LIC, which combines ideas from meta learning and online learning in the conditional variational auto-encoder framework.
arXiv Detail & Related papers (2021-11-16T06:46:23Z) - Multimodal-Boost: Multimodal Medical Image Super-Resolution using
Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of image resolution degrades the overall performance of medical image diagnosis.
Deep learning based single image super resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.