Multi-Modal MRI Reconstruction with Spatial Alignment Network
- URL: http://arxiv.org/abs/2108.05603v1
- Date: Thu, 12 Aug 2021 08:46:35 GMT
- Title: Multi-Modal MRI Reconstruction with Spatial Alignment Network
- Authors: Kai Xuan, Lei Xiang, Xiaoqian Huang, Lichi Zhang, Shu Liao, Dinggang
Shen, and Qian Wang
- Abstract summary: In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, given the redundancy between different contrasts or modalities, a target MRI modality under-sampled in k-space can be better reconstructed with the help of a fully-sampled sequence.
In this paper, we integrate the spatial alignment network with reconstruction, to improve the quality of the reconstructed target modality.
- Score: 51.74078260367654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In clinical practice, magnetic resonance imaging (MRI) with multiple
contrasts is usually acquired in a single study to assess different properties
of the same region of interest in the human body. The whole acquisition process can
be accelerated by having one or more modalities under-sampled in the k-space.
Recent research demonstrates that, given the redundancy between
different contrasts or modalities, a target MRI modality under-sampled in
k-space can be better reconstructed with the help of a fully-sampled
sequence (i.e., the reference modality). This implies that, in the same study of
the same subject, multiple sequences can be utilized together for
highly efficient multi-modal reconstruction. However, we find that
multi-modal reconstruction can be negatively affected by subtle spatial
misalignment between different sequences, which is actually common in clinical
practice. In this paper, we integrate the spatial alignment network with
reconstruction, to improve the quality of the reconstructed target modality.
Specifically, the spatial alignment network estimates the spatial misalignment
between the fully-sampled reference and the under-sampled target images, and
warps the reference image accordingly. Then, the aligned fully-sampled
reference image joins the under-sampled target image in the reconstruction
network, to produce the high-quality target image. Considering the contrast
difference between the target and the reference, we specifically design a
cross-modality-synthesis-based registration loss, in combination with the
reconstruction loss, to jointly train the spatial alignment network and the
reconstruction network. Our experiments on both clinical MRI and multi-coil
k-space raw data demonstrate the superiority and robustness of our spatial
alignment network. Code is publicly available at
https://github.com/woxuankai/SpatialAlignmentNetwork.
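The pipeline described in the abstract, under-sampling the target in k-space, warping the fully-sampled reference to compensate for spatial misalignment, and training with a combined registration and reconstruction loss, can be illustrated with a toy NumPy sketch. Everything below (the function names, the Cartesian mask, the circular-shift warp, the plain MSE losses, and the 0.1 weight) is an illustrative assumption, not the authors' implementation; their actual code is at the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample_kspace(image, acceleration=4):
    """Simulate k-space under-sampling: keep every `acceleration`-th
    phase-encoding line and zero out the rest."""
    kspace = np.fft.fft2(image)
    mask = np.zeros(image.shape, dtype=bool)
    mask[::acceleration, :] = True  # simple Cartesian mask (assumed)
    return kspace * mask

def zero_filled_recon(kspace):
    """Baseline reconstruction: inverse FFT of the masked k-space."""
    return np.abs(np.fft.ifft2(kspace))

# Toy data: a target image and a spatially misaligned reference of the
# same subject (here, shifted by two rows).
target = rng.random((64, 64))
reference = np.roll(target, shift=2, axis=0)

# Stand-in for the spatial alignment network: we "estimate" the known
# shift and warp the reference back with a circular shift.
aligned_reference = np.roll(reference, shift=-2, axis=0)

# Joint objective: a reconstruction loss on the under-sampled target
# plus a registration loss on the warped reference. In the paper the
# registration loss is computed after cross-modality synthesis; plain
# MSE and the 0.1 weight are assumptions of this sketch.
zf = zero_filled_recon(undersample_kspace(target))
recon_loss = np.mean((zf - target) ** 2)
registration_loss = np.mean((aligned_reference - target) ** 2)
total_loss = recon_loss + 0.1 * registration_loss
```

Note that a correct alignment drives the registration loss toward zero, which is what lets the warped reference contribute useful, spatially consistent detail to the reconstruction network.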
Related papers
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our model's inference is also 1,400x faster than that of diffusion-based methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Attention Incorporated Network for Sharing Low-rank, Image and K-space Information during MR Image Reconstruction to Achieve Single Breath-hold Cardiac Cine Imaging [9.531827741901662]
We propose to embed information from multiple domains, including low-rank, image, and k-space, in a novel deep learning network for MRI reconstruction.
A-LIKNet adopts a parallel-branch structure, enabling independent learning in the k-space and image domain.
arXiv Detail & Related papers (2024-07-03T11:54:43Z)
- DuDoUniNeXt: Dual-domain unified hybrid model for single and multi-contrast undersampled MRI reconstruction [24.937435059755288]
We propose DuDoUniNeXt, a unified dual-domain MRI reconstruction network that can accommodate scenarios involving absent, low-quality, and high-quality reference images.
Experimental results demonstrate that the proposed model surpasses state-of-the-art SC and MC models significantly.
arXiv Detail & Related papers (2024-03-08T12:26:48Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Attention Hybrid Variational Net for Accelerated MRI Reconstruction [7.046523233290946]
The application of compressed sensing (CS)-enabled data reconstruction for accelerating magnetic resonance imaging (MRI) remains a challenging problem.
This is due to the fact that the information lost in k-space from the acceleration mask makes it difficult to reconstruct an image similar to the quality of a fully sampled image.
We propose a deep learning-based attention hybrid variational network that performs learning in both the k-space and image domain.
arXiv Detail & Related papers (2023-06-21T16:19:07Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is a promising way to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis [7.821429746599738]
We show that neural network layers that explicitly combine frequency and image feature representations can be used as a versatile building block for reconstruction from frequency space data.
The proposed joint learning schemes enable both correction of artifacts native to the frequency space and manipulation of image space representations to reconstruct coherent image structures at every layer of the network.
arXiv Detail & Related papers (2020-07-02T23:54:46Z)
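The joint frequency/image-space layer described in the last summary above can be illustrated with a toy sketch: one branch operates directly in image space while a second branch filters the FFT of the input and maps it back, and the two are summed. The pointwise form of the filters and all names here are assumptions for illustration, not the paper's actual layer.

```python
import numpy as np

rng = np.random.default_rng(1)

def joint_layer(x, w_img, w_freq):
    """One joint frequency/image-space layer (illustrative sketch):
    an image-space operation plus a k-space filtering operation whose
    result is mapped back to image space via the inverse FFT."""
    image_branch = w_img * x                                   # image-space path
    freq_branch = np.fft.ifft2(w_freq * np.fft.fft2(x)).real   # k-space path
    return image_branch + freq_branch

x = rng.random((32, 32))
w_img = 0.5                  # assumed image-space weight
w_freq = np.ones((32, 32))   # identity filter in k-space, for illustration
y = joint_layer(x, w_img, w_freq)
```

With the identity k-space filter chosen here, the frequency branch returns the input unchanged, so the layer output reduces to 1.5 times the input; a learned filter would instead suppress or amplify specific frequency components while the image branch reshapes local structures.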
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.