Multi-Contrast MRI Super-Resolution via a Multi-Stage Integration
Network
- URL: http://arxiv.org/abs/2105.08949v1
- Date: Wed, 19 May 2021 06:47:31 GMT
- Title: Multi-Contrast MRI Super-Resolution via a Multi-Stage Integration
Network
- Authors: Chun-Mei Feng, Huazhu Fu, Shuhao Yuan, and Yong Xu
- Abstract summary: Super-resolution (SR) plays a crucial role in improving the image quality of magnetic resonance imaging (MRI).
MRI produces multi-contrast images and can provide a clear display of soft tissues.
In this work, we propose a multi-stage integration network (i.e., MINet) for multi-contrast MRI SR.
- Score: 31.591461062282384
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Super-resolution (SR) plays a crucial role in improving the image quality of
magnetic resonance imaging (MRI). MRI produces multi-contrast images and can
provide a clear display of soft tissues. However, current super-resolution
methods only employ a single contrast, or use a simple multi-contrast fusion
mechanism, ignoring the rich relations among different contrasts, which are
valuable for improving SR. In this work, we propose a multi-stage integration
network (i.e., MINet) for multi-contrast MRI SR, which explicitly models the
dependencies between multi-contrast images at different stages to guide image
SR. In particular, our MINet first learns a hierarchical feature representation
from multiple convolutional stages for each different-contrast image.
Subsequently, we introduce a multi-stage integration module to mine the
comprehensive relations between the representations of the multi-contrast
images. Specifically, the module matches each representation with all other
features, which are integrated in terms of their similarities to obtain an
enriched representation. Extensive experiments on fastMRI and real-world
clinical datasets demonstrate that 1) our MINet outperforms state-of-the-art
multi-contrast SR methods in terms of various metrics and 2) our multi-stage
integration module is able to excavate complex interactions among
multi-contrast features at different stages, leading to improved target-image
quality.
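To illustrate the kind of similarity-based fusion the abstract describes, here is a minimal PyTorch-style sketch (not the authors' released code): one contrast's stage feature queries the features of the other contrasts, the matches are weighted by softmax-normalized dot-product similarity, and the result is aggregated into an enriched representation. The module name, 1x1 projections, and residual aggregation are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiStageIntegrationSketch(nn.Module):
    """Hedged sketch of similarity-based multi-contrast fusion at one stage.
    Inputs are assumed to be feature maps of shape (B, C, H, W) with a shared
    channel size; this is not the authors' released MINet code."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, target_feat, other_feats):
        # target_feat: (B, C, H, W); other_feats: list of (B, C, H, W) tensors
        b, c, h, w = target_feat.shape
        q = self.query(target_feat).flatten(2).transpose(1, 2)       # (B, HW, C)
        enriched = target_feat
        for feat in other_feats:
            k = self.key(feat).flatten(2)                            # (B, C, HW)
            v = self.value(feat).flatten(2).transpose(1, 2)          # (B, HW, C)
            sim = torch.softmax(q @ k / c ** 0.5, dim=-1)            # pairwise similarities
            matched = (sim @ v).transpose(1, 2).reshape(b, c, h, w)  # similarity-weighted match
            enriched = enriched + matched                            # aggregate into target
        return enriched
```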
Related papers
- Deep Unfolding Convolutional Dictionary Model for Multi-Contrast MRI Super-resolution and Reconstruction [23.779641808300596]
We propose a multi-contrast convolutional dictionary (MC-CDic) model under the guidance of the optimization algorithm.
We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a deep CDic model.
Experimental results demonstrate the superior performance of the proposed MC-CDic model against existing SOTA methods.
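As a rough illustration of unrolling a proximal-gradient iteration into network stages, here is a hedged PyTorch sketch; the soft-thresholding proximal operator, tied convolutional dictionary, and learnable step size are assumptions rather than the MC-CDic specification.

```python
import torch
import torch.nn as nn


def soft_threshold(x, lam):
    # Proximal operator of the L1 norm (an assumed sparsity prior).
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)


class UnrolledCDic(nn.Module):
    """Hedged sketch: proximal-gradient iterations of a convolutional
    dictionary model unrolled into a fixed number of network stages."""

    def __init__(self, channels=32, num_stages=5):
        super().__init__()
        # Learnable convolutional dictionary (synthesis) and its adjoint (analysis).
        self.synthesis = nn.Conv2d(channels, 1, kernel_size=3, padding=1, bias=False)
        self.analysis = nn.Conv2d(1, channels, kernel_size=3, padding=1, bias=False)
        self.step = nn.Parameter(torch.tensor(0.1))   # gradient step size (assumed learnable)
        self.lam = nn.Parameter(torch.tensor(0.01))   # sparsity weight (assumed learnable)
        self.num_stages = num_stages

    def forward(self, y):
        # y: observed low-quality image, shape (B, 1, H, W)
        z = self.analysis(y)                              # initial sparse codes
        for _ in range(self.num_stages):
            residual = self.synthesis(z) - y              # data-fidelity residual
            z = z - self.step * self.analysis(residual)   # gradient step
            z = soft_threshold(z, self.lam)               # proximal (sparsity) step
        return self.synthesis(z)                          # reconstructed image
```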
arXiv Detail & Related papers (2023-09-03T13:18:59Z)
- Dual Arbitrary Scale Super-Resolution for Multi-Contrast MRI [23.50915512118989]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
However, radiologists are accustomed to zooming in on MR images at arbitrary scales rather than using a fixed scale.
We propose an implicit neural representations based dual-arbitrary multi-contrast MRI super-resolution method, called Dual-ArbNet.
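A minimal sketch of the implicit-neural-representation idea that enables arbitrary-scale SR is given below: an MLP decoder predicts the intensity at any continuous coordinate from locally sampled encoder features. Layer sizes and the use of `grid_sample` are illustrative assumptions, not Dual-ArbNet's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitDecoder(nn.Module):
    """Hedged sketch: an MLP predicts the intensity at continuous (x, y)
    coordinates from locally sampled encoder features, which is what allows
    SR at arbitrary scales. Layer sizes are illustrative, not Dual-ArbNet's."""

    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat_map, coords):
        # feat_map: (B, C, H, W) encoder features; coords: (B, N, 2) in [-1, 1]
        grid = coords.unsqueeze(1)                                   # (B, 1, N, 2)
        sampled = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, 1, N)
        sampled = sampled.squeeze(2).transpose(1, 2)                 # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))        # (B, N, 1) intensities
```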
arXiv Detail & Related papers (2023-07-05T14:43:26Z)
- Compound Attention and Neighbor Matching Network for Multi-contrast MRI Super-resolution [7.197850827700436]
Multi-contrast super-resolution of MRI can achieve better results than single-image super-resolution.
We propose a novel network architecture with compound-attention and neighbor matching (CANM-Net) for multi-contrast MRI SR.
CANM-Net outperforms state-of-the-art approaches in both retrospective and prospective experiments.
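The neighbor-matching idea can be sketched as follows, under the assumption that it amounts to retrieving, for every target-feature position, the most similar reference-feature position by cosine similarity; this is a simplified stand-in, not the CANM-Net implementation.

```python
import torch
import torch.nn.functional as F


def neighbor_match(target_feat, ref_feat):
    """Hedged sketch of neighbor matching: for every spatial position of the
    target features, retrieve the most similar reference position (cosine
    similarity) and return the borrowed reference features."""
    b, c, h, w = target_feat.shape
    t = F.normalize(target_feat.flatten(2), dim=1)      # (B, C, HW) unit-norm target
    r = F.normalize(ref_feat.flatten(2), dim=1)         # (B, C, HW) unit-norm reference
    sim = t.transpose(1, 2) @ r                         # (B, HW, HW) cosine similarities
    idx = sim.argmax(dim=-1)                            # best reference index per position
    matched = torch.gather(
        ref_feat.flatten(2), 2, idx.unsqueeze(1).expand(-1, c, -1)
    )                                                   # (B, C, HW) gathered features
    return matched.reshape(b, c, h, w)
```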
arXiv Detail & Related papers (2023-07-05T09:44:02Z)
- JoJoNet: Joint-contrast and Joint-sampling-and-reconstruction Network for Multi-contrast MRI [49.29851365978476]
The proposed framework consists of a sampling mask generator for each image contrast and a reconstructor exploiting the inter-contrast correlations with a recurrent structure.
The acceleration ratio of each image contrast is also learnable and can be driven by downstream-task performance.
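One way such a learnable sampling mask could look in code is sketched below: a sigmoid-relaxed mask over k-space locations trained end-to-end and binarized with a straight-through estimator. The relaxation, slope, and thresholding are assumptions, not JoJoNet's released design.

```python
import torch
import torch.nn as nn


class LearnableSamplingMask(nn.Module):
    """Hedged sketch of a learnable per-contrast k-space sampling mask:
    a sigmoid relaxation keeps the mask differentiable, and a straight-through
    estimator binarizes it at sampling time. Details are assumptions."""

    def __init__(self, height, width, slope=10.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(1, 1, height, width))
        self.slope = slope

    def forward(self, kspace):
        # kspace: complex-valued measurements of shape (B, 1, H, W)
        prob = torch.sigmoid(self.slope * self.logits)   # soft mask in (0, 1)
        hard = (prob > 0.5).float()                      # binarized mask
        mask = hard + prob - prob.detach()               # straight-through gradient
        return kspace * mask, mask                       # undersampled k-space and mask
```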
arXiv Detail & Related papers (2022-10-22T20:46:56Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse features from the different contrasts for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
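A hedged single-scale sketch of Transformer-style contextual matching is shown below: the target contrast's tokens query the reference contrast via multi-head cross-attention and aggregate the matched features with a residual connection; the paper's multi-scale design and exact blocks are not reproduced here.

```python
import torch
import torch.nn as nn


class CrossContrastAttention(nn.Module):
    """Hedged single-scale sketch of Transformer-style contextual matching:
    the target contrast queries the reference contrast and aggregates the
    matched features. The paper's multi-scale design is not reproduced."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_tokens, ref_tokens):
        # target_tokens: (B, N, D); ref_tokens: (B, M, D) flattened patch features
        matched, _ = self.attn(target_tokens, ref_tokens, ref_tokens)
        return self.norm(target_tokens + matched)        # residual aggregation
```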
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid-domain learning framework, which allows it to recover the frequency signal in the $k$-space domain while restoring image detail in the image domain.
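A minimal sketch of hybrid-domain learning, assuming a standard image-domain refinement followed by a k-space data-consistency step, is given below; it is not the official MANet block.

```python
import torch
import torch.nn as nn


class HybridDomainBlock(nn.Module):
    """Hedged sketch of hybrid-domain learning: refine the image in the
    spatial domain, then enforce consistency with the measured samples in
    the k-space (frequency) domain. Not the official MANet block."""

    def __init__(self, channels=32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, image, measured_kspace, mask):
        # image: (B, 1, H, W) real; measured_kspace: complex; mask: (B, 1, H, W) in {0, 1}
        refined = image + self.refine(image)               # image-domain update
        k = torch.fft.fft2(refined.to(torch.complex64))    # to the frequency domain
        k = k * (1 - mask) + measured_kspace * mask        # keep measured k-space samples
        return torch.fft.ifft2(k).real                     # back to the image domain
```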
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Exploring Separable Attention for Multi-Contrast MR Image Super-Resolution [88.16655157395785]
We propose a separable attention network (comprising a priority attention and background separation attention) named SANet.
It can explore the foreground and background areas in the forward and reverse directions with the help of the auxiliary contrast.
It is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the foreground and background regions.
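Under the assumption that separable attention amounts to predicting a foreground map from the auxiliary contrast and modulating the target features separately for foreground and background, a minimal sketch is:

```python
import torch
import torch.nn as nn


class SeparableAttentionSketch(nn.Module):
    """Hedged sketch: the auxiliary contrast predicts a foreground map, and the
    target features are modulated separately for foreground (priority) and
    background regions. Not the official SANet implementation."""

    def __init__(self, channels):
        super().__init__()
        self.fg_predictor = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1), nn.Sigmoid()
        )
        self.fg_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.bg_branch = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, target_feat, aux_feat):
        fg = self.fg_predictor(aux_feat)   # foreground probability from the auxiliary contrast
        bg = 1.0 - fg                      # complementary background map
        return fg * self.fg_branch(target_feat) + bg * self.bg_branch(target_feat)
```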
arXiv Detail & Related papers (2021-09-03T05:53:07Z)
- DDet: Dual-path Dynamic Enhancement Network for Real-World Image Super-Resolution [69.2432352477966]
Real-world image super-resolution (Real-SR) focuses on the relationship between real-world high-resolution (HR) and low-resolution (LR) images.
In this article, we propose a Dual-path Dynamic Enhancement Network (DDet) for Real-SR.
Unlike conventional methods that stack up massive convolutional blocks for feature representation, we introduce a content-aware framework to handle non-inherently aligned image pairs.
arXiv Detail & Related papers (2020-02-25T18:24:51Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
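A hedged sketch of the hybrid-fusion idea, assuming per-modality encoders whose latent features are fused and decoded into the missing modality, is given below; layer choices are illustrative, not Hi-Net's.

```python
import torch
import torch.nn as nn


def make_encoder(channels=32):
    # Small modality-specific encoder (illustrative depth and width).
    return nn.Sequential(
        nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
    )


class HybridFusionSketch(nn.Module):
    """Hedged sketch of hybrid fusion: per-modality encoders, fusion of their
    latent features, and a decoder that synthesizes the missing modality.
    Layer choices are illustrative, not Hi-Net's."""

    def __init__(self, channels=32):
        super().__init__()
        self.enc_a = make_encoder(channels)
        self.enc_b = make_encoder(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, modality_a, modality_b):
        # modality_a/b: (B, 1, H, W) input MR contrasts
        fa, fb = self.enc_a(modality_a), self.enc_b(modality_b)  # modality-specific features
        fused = self.fuse(torch.cat([fa, fb], dim=1))            # combine latent representations
        return self.decoder(fused)                               # synthesized target modality
```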
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.