DSFormer: A Dual-domain Self-supervised Transformer for Accelerated
Multi-contrast MRI Reconstruction
- URL: http://arxiv.org/abs/2201.10776v1
- Date: Wed, 26 Jan 2022 06:52:24 GMT
- Title: DSFormer: A Dual-domain Self-supervised Transformer for Accelerated
Multi-contrast MRI Reconstruction
- Authors: Bo Zhou, Jo Schlemper, Neel Dey, Seyed Sadegh Mohseni Salehi, Chi Liu,
James S. Duncan, Michal Sofka
- Abstract summary: Multi-contrast MRI (MC-MRI) captures multiple complementary imaging modalities.
Current deep accelerated MRI reconstruction networks focus on exploiting the redundancy between multiple contrasts.
We present a dual-domain self-supervised transformer (DSFormer) for accelerated MC-MRI reconstruction.
- Score: 15.49473622511862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-contrast MRI (MC-MRI) captures multiple complementary imaging
modalities to aid in radiological decision-making. Given the need for lowering
the time cost of multiple acquisitions, current deep accelerated MRI
reconstruction networks focus on exploiting the redundancy between multiple
contrasts. However, existing works are largely supervised with paired data
and/or prohibitively expensive fully-sampled MRI sequences. Further,
reconstruction networks typically rely on convolutional architectures which are
limited in their capacity to model long-range interactions and may lead to
suboptimal recovery of fine anatomical detail. To these ends, we present a
dual-domain self-supervised transformer (DSFormer) for accelerated MC-MRI
reconstruction. DSFormer develops a deep conditional cascade transformer (DCCT)
consisting of several cascaded Swin transformer reconstruction networks
(SwinRN) trained under two deep conditioning strategies to enable MC-MRI
information sharing. We further present a dual-domain (image and k-space)
self-supervised learning strategy for DCCT to alleviate the costs of acquiring
fully sampled training data. DSFormer generates high-fidelity reconstructions
which experimentally outperform current fully-supervised baselines. Moreover,
we find that DSFormer achieves nearly the same performance when trained either
with full supervision or with our proposed dual-domain self-supervision.
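The k-space half of the dual-domain self-supervision described above hinges on splitting the acquired (undersampled) samples into disjoint partitions, so one subset feeds the network and the held-out subset supervises it without any fully sampled reference. A minimal sketch of that partition step follows; the names `split_mask` and `rho` and the random-split scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_mask(mask, rho=0.6):
    """Split an undersampling mask into two disjoint k-space partitions.

    A fraction `rho` of the sampled locations feeds the network input;
    the held-out remainder supervises the k-space loss term.
    """
    sampled = np.argwhere(mask)                  # indices of acquired samples
    to_input = rng.random(len(sampled)) < rho    # random disjoint assignment
    m_in = np.zeros_like(mask)
    m_loss = np.zeros_like(mask)
    m_in[tuple(sampled[to_input].T)] = 1
    m_loss[tuple(sampled[~to_input].T)] = 1
    return m_in, m_loss

# Example: a random ~50% sampling pattern on an 8x8 k-space grid.
mask = (rng.random((8, 8)) < 0.5).astype(int)
m_in, m_loss = split_mask(mask)
```

Because the two partitions are disjoint and together cover exactly the acquired locations, the held-out samples act as ground truth that the network never sees at its input.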
Related papers
- Fill the K-Space and Refine the Image: Prompting for Dynamic and
Multi-Contrast MRI Reconstruction [31.404228406642194]
The key to dynamic or multi-contrast magnetic resonance imaging (MRI) reconstruction lies in exploring inter-frame or inter-contrast information.
We propose a two-stage MRI reconstruction pipeline to address these limitations.
Our proposed method significantly outperforms previous state-of-the-art accelerated MRI reconstruction methods.
arXiv Detail & Related papers (2023-09-25T02:51:00Z)
- Dual-Domain Self-Supervised Learning for Accelerated Non-Cartesian MRI Reconstruction [14.754843942604472]
We present a fully self-supervised approach for accelerated non-Cartesian MRI reconstruction.
In training, the undersampled data are split into disjoint k-space domain partitions.
For the image-level self-supervision, we enforce appearance consistency obtained from the original undersampled data.
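The two supervision signals above can be combined into a single objective: a k-space term evaluated only on the held-out partition, plus an image-domain appearance-consistency term against a reference reconstruction from the original undersampled data. The sketch below is schematic (the symbols, the L2 choice, and the weight `lam` are assumptions, not the authors' exact loss).

```python
import numpy as np

def dual_domain_loss(recon, kspace, m_loss, ref_image, lam=1.0):
    """Schematic dual-domain self-supervised objective:
    a k-space term on the held-out partition `m_loss`, plus an
    image-domain appearance-consistency term against `ref_image`."""
    k_pred = np.fft.fft2(recon, norm="ortho")
    k_term = np.mean(np.abs(m_loss * (k_pred - kspace)) ** 2)
    img_term = np.mean(np.abs(recon - ref_image) ** 2)
    return k_term + lam * img_term

# Sanity check: a perfect reconstruction drives both terms to zero.
img = np.random.default_rng(1).random((8, 8))
full_k = np.fft.fft2(img, norm="ortho")
loss = dual_domain_loss(img, full_k, np.ones((8, 8)), img)
```

Gradients of this loss flow through both domains, which is what lets the network be trained without any fully sampled acquisition.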
arXiv Detail & Related papers (2023-02-18T06:11:49Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance.
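The queries/keys formulation above is standard scaled dot-product cross-attention: features from the under-sampled scan attend over features of the reference scan. A minimal single-head sketch of that idea follows (shapes and names are illustrative, not the TTM code).

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention: under-sampled-scan
    features act as queries; reference-scan features act as keys/values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # softmax numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # rows sum to 1
    return w @ values, w

rng = np.random.default_rng(2)
q = rng.random((4, 8))   # 4 feature vectors from the under-sampled scan
k = rng.random((6, 8))   # 6 feature vectors from the reference scan
v = rng.random((6, 8))
out, attn = cross_attention(q, k, v)
```

Each output row is a convex combination of reference features, which is how texture from the reference acquisition can be transferred into the accelerated reconstruction.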
arXiv Detail & Related papers (2021-11-18T03:06:25Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [0.0]
We introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER).
A zero-shot reconstruction is performed on undersampled test data, with inference carried out by optimizing network parameters.
Experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against several state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-05-15T02:01:21Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.