Multi-modal Aggregation Network for Fast MR Imaging
- URL: http://arxiv.org/abs/2110.08080v1
- Date: Fri, 15 Oct 2021 13:16:59 GMT
- Title: Multi-modal Aggregation Network for Fast MR Imaging
- Authors: Chun-Mei Feng and Huazhu Fu and Tianfei Zhou and Yong Xu and Ling Shao and David Zhang
- Abstract summary: We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image details in the image domain.
- Score: 85.25000133194762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic resonance (MR) imaging is a commonly used scanning technique for
disease detection, diagnosis and treatment monitoring. Although it is able to
produce detailed images of organs and tissues with better contrast, it suffers
from a long acquisition time, which makes the image quality vulnerable to
motion artifacts. Recently, many approaches have been developed to reconstruct
fully sampled images from partially observed measurements in order to accelerate
MR imaging. However, most of these efforts focus on reconstruction over a
single modality or simple fusion of multiple modalities, neglecting the
discovery of correlation knowledge at different feature levels. In this work, we
propose a novel Multi-modal Aggregation Network, named MANet, which is capable
of discovering complementary representations from a fully sampled auxiliary
modality, with which to hierarchically guide the reconstruction of a given
target modality. In our MANet, the representations from the fully sampled
auxiliary and undersampled target modalities are learned independently through
a specific network. Then, a guided attention module is introduced in each
convolutional stage to selectively aggregate multi-modal features for better
reconstruction, yielding comprehensive, multi-scale, multi-modal feature
fusion. Moreover, our MANet follows a hybrid domain learning framework, which
allows it to simultaneously recover the frequency signal in the $k$-space
domain as well as restore the image details from the image domain. Extensive
experiments demonstrate the superiority of the proposed approach over
state-of-the-art MR image reconstruction methods.
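The abstract's two core ideas, selectively aggregating auxiliary-modality features via a guided attention gate and hybrid-domain learning that enforces consistency with the acquired $k$-space samples, can be sketched in toy form. The code below is an illustrative approximation, not the paper's architecture: MANet's guided attention is a learned CNN module, whereas here a fixed sigmoid gate (`w_gate`, `b_gate` are hypothetical stand-ins for learned parameters) and a single $k$-space data-consistency step are used.

```python
import numpy as np

def guided_attention_fuse(target_feat, aux_feat, w_gate, b_gate):
    """Sigmoid-gated fusion: the auxiliary modality produces an attention
    map that selects which auxiliary information is added to the target
    branch. w_gate/b_gate stand in for parameters that would be learned
    end-to-end in the actual network."""
    gate = 1.0 / (1.0 + np.exp(-(w_gate * aux_feat + b_gate)))  # attention map in (0, 1)
    return target_feat + gate * aux_feat  # selectively aggregate auxiliary features

def kspace_data_consistency(recon_img, measured_k, mask):
    """Hybrid-domain step: replace the reconstruction's frequency
    coefficients with the actually acquired ones wherever the
    undersampling mask is 1, then return to the image domain."""
    k = np.fft.fft2(recon_img)
    k = np.where(mask == 1, measured_k, k)  # keep acquired frequencies
    return np.fft.ifft2(k)  # complex-valued image estimate

# Toy usage: simulate 50% Cartesian-free random undersampling and apply
# one data-consistency step to a crude (zero) initial estimate.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
mask = (rng.random((8, 8)) < 0.5).astype(int)  # 1 = sampled k-space location
measured = np.fft.fft2(img) * mask
refined = kspace_data_consistency(np.zeros((8, 8)), measured, mask)
```

After the data-consistency step, the refined estimate agrees exactly with the measurement at every sampled $k$-space location; iterating such steps interleaved with image-domain refinement is the usual hybrid-domain recipe.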
Related papers
- Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction [3.9681863841849623]
We build a joint edge optimization model that not only incorporates individual regularizers specific to both the MR image and the edges, but also enforces a co-regularizer to effectively establish a stronger correlation between them.
Specifically, the edge information is defined through a non-edge probability map to guide the image reconstruction during the optimization process.
Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective inherent a-priori information.
arXiv Detail & Related papers (2024-05-09T05:51:33Z)
- Multi-task Magnetic Resonance Imaging Reconstruction using Meta-learning [3.083408283778178]
This paper proposes a meta-learning approach to efficiently learn image features from multiple MR image datasets.
Experiment results demonstrate the ability of our new meta-learning reconstruction method to successfully reconstruct highly-undersampled k-space data from multiple MRI datasets simultaneously.
arXiv Detail & Related papers (2024-03-29T04:02:51Z)
- Correlated and Multi-frequency Diffusion Modeling for Highly Under-sampled MRI Reconstruction [14.687337090732036]
Most existing MRI reconstruction methods perform targeted reconstruction of the entire MR image without considering specific tissue regions.
This may fail to emphasize the reconstruction accuracy on important tissues for diagnosis.
In this study, leveraging a combination of the properties of k-space data and the diffusion process, our novel scheme focuses on mining the multi-frequency prior.
arXiv Detail & Related papers (2023-09-02T07:51:27Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is promising to yield SR images with higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Multimodal-Boost: Multimodal Medical Image Super-Resolution using Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of corresponding image resolution degrades the overall performance of medical image diagnosis.
Deep learning based single image super resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z)
- Multi-Modal MRI Reconstruction with Spatial Alignment Network [51.74078260367654]
In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, given the redundancy between different contrasts or modalities, a target MRI modality under-sampled in k-space can be better reconstructed with the help of a fully sampled sequence.
In this paper, we integrate the spatial alignment network with reconstruction, to improve the quality of the reconstructed target modality.
arXiv Detail & Related papers (2021-08-12T08:46:35Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning [62.17532253489087]
Deep learning methods have been shown to produce superior performance on MR image reconstruction.
These methods require large amounts of data which is difficult to collect and share due to the high cost of acquisition and medical data privacy regulations.
We propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy.
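The federated setup described above can be illustrated with a minimal FedAvg-style aggregation step, in which each institution trains locally and only model parameters, never raw MR data, are shared and averaged. This is a generic sketch of weighted parameter averaging under assumed per-site dataset sizes; the cited paper's actual protocol may differ.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Aggregate per-site model parameters into a global model, weighting
    each site by its local dataset size (FedAvg-style). `site_weights` is
    a list of parameter lists, one per institution; raw patient data is
    never exchanged."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(site_weights, site_sizes))
        for i in range(n_params)
    ]

# Toy usage: two institutions, each holding a two-tensor "model".
site_a = [np.array([1.0, 2.0]), np.array([0.0])]
site_b = [np.array([3.0, 4.0]), np.array([2.0])]
global_model = federated_average([site_a, site_b], site_sizes=[100, 300])
```

Weighting by dataset size keeps the aggregate close to what centralized training on the pooled data would favor, while each site's scans stay behind its own firewall.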
arXiv Detail & Related papers (2021-03-03T03:04:40Z)
- Robust Image Reconstruction with Misaligned Structural Information [0.27074235008521236]
We propose a variational framework which jointly performs reconstruction and registration.
Our approach is the first to achieve this for different modalities, and it outperforms established approaches in terms of accuracy of both reconstruction and registration.
arXiv Detail & Related papers (2020-04-01T17:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.