Deep Unfolding Network with Spatial Alignment for multi-modal MRI
reconstruction
- URL: http://arxiv.org/abs/2312.16998v1
- Date: Thu, 28 Dec 2023 13:02:16 GMT
- Title: Deep Unfolding Network with Spatial Alignment for multi-modal MRI
reconstruction
- Authors: Hao Zhang and Qi Wang and Jun Shi and Shihui Ying and Zhijie Wen
- Abstract summary: Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by the long scanning time.
To accelerate the whole acquisition process, MRI reconstruction of one modality from highly undersampled k-space data with another fully-sampled reference modality is an efficient solution.
Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two common limitations.
- Score: 17.41293135114323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic
information, but some modalities are limited by the long scanning time. To
accelerate the whole acquisition process, MRI reconstruction of one modality
from highly undersampled k-space data with another fully-sampled reference
modality is an efficient solution. However, the misalignment between
modalities, which is common in clinical practice, can negatively affect
reconstruction quality. Existing deep learning-based methods that account for
inter-modality misalignment perform better, but still share two common
limitations: (1) the spatial alignment task is not adaptively integrated with
the reconstruction process, resulting in insufficient complementarity between
the two tasks; (2) the entire framework has weak interpretability. In this
paper, we construct a novel Deep Unfolding Network with Spatial Alignment,
termed DUN-SA, to appropriately embed the spatial alignment task into the
reconstruction process. Concretely, we derive a novel joint
alignment-reconstruction model with a specially designed cross-modal spatial
alignment term. By relaxing the model into cross-modal spatial alignment and
multi-modal reconstruction tasks, we propose an effective algorithm to solve
this model alternately. Then, we unfold the iterative steps of the proposed
algorithm and design corresponding network modules to build DUN-SA with
interpretability. Through end-to-end training, we effectively compensate for
spatial misalignment using only the reconstruction loss, and utilize the
progressively aligned reference modality to provide an inter-modality prior
that improves the reconstruction of the target modality. Comprehensive experiments on
three real datasets demonstrate that our method exhibits superior
reconstruction performance compared to state-of-the-art methods.
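As a rough illustration of the alternating structure the abstract describes, the following PyTorch sketch unfolds a fixed number of alignment and reconstruction stages. All module names (AlignStage, ReconStage, DUNSA) and design details are simplifying assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignStage(nn.Module):
    """Predicts a dense flow field and warps the reference modality
    toward the current target estimate (hypothetical design)."""
    def __init__(self, ch=16):
        super().__init__()
        self.flow_net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1))  # 2 channels: (dx, dy) displacements

    def forward(self, ref, target):
        flow = self.flow_net(torch.cat([ref, target], dim=1))
        b, _, h, w = ref.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2).to(ref)
        grid = base + flow.permute(0, 2, 3, 1)   # displace the sampling grid
        return F.grid_sample(ref, grid, align_corners=True)

class ReconStage(nn.Module):
    """CNN refinement conditioned on the aligned reference, followed by
    hard k-space data consistency with the measured samples."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x, ref_aligned, k0, mask):
        x = x + self.net(torch.cat([x, ref_aligned], dim=1))
        k = torch.fft.fft2(x.squeeze(1))
        k = torch.where(mask.bool(), k0, k)      # keep the measured k-space lines
        return torch.fft.ifft2(k).real.unsqueeze(1)

class DUNSA(nn.Module):
    """Unfolded alternation of alignment and reconstruction stages."""
    def __init__(self, n_iters=4):
        super().__init__()
        self.align = nn.ModuleList(AlignStage() for _ in range(n_iters))
        self.recon = nn.ModuleList(ReconStage() for _ in range(n_iters))

    def forward(self, x0, ref, k0, mask):
        x = x0
        for align, recon in zip(self.align, self.recon):
            ref_aligned = align(ref, x)          # progressively re-align the reference
            x = recon(x, ref_aligned, k0, mask)
        return x
```

Each unrolled iteration re-aligns the reference to the current estimate before using it as a prior, which is how the end-to-end training can compensate for misalignment with only a reconstruction loss.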
Related papers
- Dual-Domain Multi-Contrast MRI Reconstruction with Synthesis-based
Fusion Network [8.721677700107639]
Our proposed framework, based on deep learning, facilitates the optimised reconstruction of the under-sampled target contrast.
The method consists of three key steps: 1) Learning to synthesise data resembling the target contrast from the reference contrast; 2) Registering the multi-contrast data to reduce inter-scan motion; and 3) Utilising the registered data for reconstructing the target contrast.
Experiments demonstrate the superiority of our proposed framework over state-of-the-art algorithms for acceleration rates of up to 8-fold.
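A minimal sketch of this three-step flow, with synthesiser, registration, and reconstructor as hypothetical stand-ins for the paper's learned sub-networks:

```python
import torch.nn as nn

class SynthesisFusionPipeline(nn.Module):
    """Hypothetical wrapper around the three listed steps; the three
    sub-networks are placeholders, not the paper's actual modules."""
    def __init__(self, synthesiser, registration, reconstructor):
        super().__init__()
        self.synthesiser = synthesiser      # reference contrast -> pseudo-target
        self.registration = registration    # warps a moving image onto a fixed one
        self.reconstructor = reconstructor  # fuses the registered prior with k-space data

    def forward(self, ref_image, target_zero_filled, k0, mask):
        # 1) synthesise data resembling the target contrast from the reference
        pseudo_target = self.synthesiser(ref_image)
        # 2) register the synthetic image to the target to reduce inter-scan motion
        registered = self.registration(pseudo_target, target_zero_filled)
        # 3) reconstruct the target contrast using the registered data
        return self.reconstructor(registered, k0, mask)
```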
arXiv Detail & Related papers (2023-12-01T15:40:26Z)
- Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction [31.404228406642194]
The key to dynamic or multi-contrast magnetic resonance imaging (MRI) reconstruction lies in exploring inter-frame or inter-contrast information.
We propose a two-stage MRI reconstruction pipeline, which first completes the k-space and then refines the image, to address the limitations of existing methods.
Our proposed method significantly outperforms previous state-of-the-art accelerated MRI reconstruction methods.
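Read together with the title, the two stages amount to completing the k-space and then refining the image; a hedged sketch, with kspace_model and image_model as hypothetical learned components:

```python
import torch

def two_stage_reconstruct(k_under, mask, kspace_model, image_model):
    """Hypothetical two-stage pipeline: complete the k-space with a learned
    model, then refine the resulting image; both models are placeholders."""
    # Stage 1: fill unsampled k-space locations, then restore measured ones
    k_filled = kspace_model(k_under)
    k_filled = torch.where(mask.bool(), k_under, k_filled)
    # Stage 2: refine in the image domain
    image = torch.fft.ifft2(k_filled).abs()
    return image_model(image)
```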
arXiv Detail & Related papers (2023-09-25T02:51:00Z)
- Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as the number of iterations increases.
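For intuition on the OT ingredient, a minimal entropic (Sinkhorn) solver between two intensity histograms is sketched below; this illustrates optimal transport in general, not the paper's spatial-and-modal formulation:

```python
import torch

def sinkhorn_plan(a, b, cost, eps=0.05, n_iters=200):
    """Entropic OT between histograms a (n,) and b (m,) with cost matrix (n, m)."""
    K = torch.exp(-cost / eps)          # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u)             # scale column marginals to b
        u = a / (K @ v)                 # scale row marginals to a
    return u[:, None] * K * v[None, :]  # transport plan, rows sum to a

# e.g. plan = sinkhorn_plan(hist_t1, hist_t2, cost_matrix) would transport
# a T1 intensity histogram toward a T2 histogram.
```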
arXiv Detail & Related papers (2023-05-04T12:20:51Z)
- Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
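A schematic of how the three regularizers combine into one training objective; each term is a placeholder callable, and the equal default weights are an assumption:

```python
def total_regularizer(z_a, z_b, feat_sep, brownian_bridge, geom_consistency,
                      w=(1.0, 1.0, 1.0)):
    """Schematic weighted sum of the three named regularizers; each callable
    is a placeholder for the paper's actual loss, and the weights are assumed."""
    return (w[0] * (feat_sep(z_a) + feat_sep(z_b))   # intra-modality
            + w[1] * brownian_bridge(z_a, z_b)       # inter-modality
            + w[2] * geom_consistency(z_a, z_b))     # intra- and inter-modality
```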
arXiv Detail & Related papers (2023-03-10T14:38:49Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
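One unfolded iteration of such a model-guided scheme typically pairs a gradient step on the data term with a learned refinement; the sketch below assumes a generic observation operator A with adjoint A_t and is not the actual MGDUN module design:

```python
import torch
import torch.nn as nn

class ModelGuidedStep(nn.Module):
    """One unfolded iteration: a gradient step on ||A x - y||^2 followed by
    a learned refinement; A/A_t are placeholder forward/adjoint operators."""
    def __init__(self, denoiser, step_size=0.5):
        super().__init__()
        self.denoiser = denoiser
        self.step = nn.Parameter(torch.tensor(step_size))

    def forward(self, x, y, A, A_t):
        grad = A_t(A(x) - y)                  # gradient of the data-fidelity term
        return self.denoiser(x - self.step * grad)
```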
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is promising for yielding SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
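As a rough sketch of Transformer-based matching, the block below cross-attends from target features to reference features at a single scale; the class name and the single-scale simplification are assumptions, not the paper's modules:

```python
import torch.nn as nn

class CrossContrastMatch(nn.Module):
    """Cross-attention from target features (queries) to reference features
    (keys/values) at one scale; a simplified stand-in for the paper's
    multi-scale matching-and-aggregation modules."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target_feat, ref_feat):
        # both inputs: (batch, tokens, dim), tokens = flattened spatial grid
        matched, _ = self.attn(target_feat, ref_feat, ref_feat)
        return target_feat + matched          # aggregate matched reference context
```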
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues, and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Multi-Modal MRI Reconstruction with Spatial Alignment Network [51.74078260367654]
In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, considering the redundancy between different contrasts or modalities, a target MRI modality under-sampled in the k-space can be better reconstructed with the help of a fully-sampled sequence.
In this paper, we integrate a spatial alignment network with reconstruction to improve the quality of the reconstructed target modality.
arXiv Detail & Related papers (2021-08-12T08:46:35Z)
- One Network to Solve Them All: A Sequential Multi-Task Joint Learning Network Framework for MR Imaging Pipeline [12.684219884940056]
A sequential multi-task joint learning network model is proposed to train a combined end-to-end pipeline.
The proposed framework is verified on the MRB dataset and achieves superior performance over other SOTA methods in terms of both reconstruction and segmentation.
arXiv Detail & Related papers (2021-05-14T05:55:27Z)
- Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted Deep Learning [14.62432715967572]
We develop a re-weighted multi-task deep learning method to learn prior knowledge from existing large datasets.
We then utilize them to assist simultaneous MR reconstruction and segmentation from the under-sampled k-space data.
Results show that the proposed method possesses encouraging capabilities for simultaneous and accurate MR reconstruction and segmentation.
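One common way to realize re-weighted multi-task training is uncertainty-based weighting, sketched below; this is an illustrative scheme, not necessarily the paper's re-weighting rule:

```python
import torch

def reweighted_loss(recon_loss, seg_loss, log_var_r, log_var_s):
    """Uncertainty-style re-weighting of the two task losses; log_var_r and
    log_var_s would be learnable nn.Parameters in practice."""
    return (torch.exp(-log_var_r) * recon_loss + log_var_r
            + torch.exp(-log_var_s) * seg_loss + log_var_s)
```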
arXiv Detail & Related papers (2020-11-27T09:08:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.