Task Transformer Network for Joint MRI Reconstruction and
Super-Resolution
- URL: http://arxiv.org/abs/2106.06742v1
- Date: Sat, 12 Jun 2021 10:59:46 GMT
- Title: Task Transformer Network for Joint MRI Reconstruction and
Super-Resolution
- Authors: Chun-Mei Feng, Yunlu Yan, Huazhu Fu, Li Chen, and Yong Xu
- Abstract summary: We propose an end-to-end task transformer network (T$^2$Net) for joint MRI reconstruction and super-resolution.
Our framework combines reconstruction and super-resolution in two sub-branches, whose features are expressed as queries and keys.
Experimental results show that our multi-task model significantly outperforms advanced sequential methods.
- Score: 35.2868027332665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The core problem of Magnetic Resonance Imaging (MRI) is the trade-off between
acceleration and image quality. Image reconstruction and super-resolution are
two crucial techniques in MRI. Current methods are designed to perform these
tasks separately, ignoring the correlations between them. In this work, we
propose an end-to-end task transformer network (T$^2$Net) for joint MRI
reconstruction and super-resolution, which allows representations and feature
transmission to be shared across multiple tasks to achieve higher-quality,
super-resolved and motion-artifact-free images from highly undersampled and
degraded MRI data. Our framework combines reconstruction and super-resolution
in two sub-branches, whose features are expressed as queries and keys.
Specifically, we encourage joint feature learning between the two tasks,
thereby transferring accurate task information. We first use two separate CNN
branches to extract task-specific features. Then, a task transformer module is
designed to embed and synthesize the relevance between the two tasks.
Experimental results show that our multi-task model significantly outperforms
advanced sequential methods, both quantitatively and qualitatively.
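As a concrete illustration of the pipeline described above, the following is a minimal PyTorch sketch: two task-specific CNN branches whose features interact through a cross-task attention module, with one branch supplying queries and the other supplying keys and values. Module names, layer sizes, and the exact attention formulation are illustrative assumptions, not the authors' released T$^2$Net implementation.

```python
# Minimal, illustrative sketch of the idea in the abstract: two task-specific
# CNN branches plus a cross-task attention ("task transformer") module in which
# one branch supplies queries and the other supplies keys/values. All names and
# layer sizes are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBranch(nn.Module):
    """A small CNN branch that extracts task-specific feature maps."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


class TaskAttention(nn.Module):
    """Cross-task attention: queries from one branch, keys/values from the other."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, feat_q: torch.Tensor, feat_kv: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_q.shape
        q = self.q(feat_q).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.k(feat_kv).flatten(2)                  # (B, C, HW)
        v = self.v(feat_kv).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = F.softmax(q @ k / c ** 0.5, dim=-1)      # (B, HW, HW)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat_q + self.out(fused)                 # residual fusion


class JointReconSRNet(nn.Module):
    """Joint reconstruction + super-resolution with shared cross-task features."""

    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.recon_branch = ConvBranch(channels)  # removes undersampling artifacts
        self.sr_branch = ConvBranch(channels)     # recovers high-resolution detail
        self.task_attn = TaskAttention(channels)
        self.recon_head = nn.Conv2d(channels, 1, 3, padding=1)
        self.sr_head = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, zero_filled: torch.Tensor):
        f_rec = self.recon_branch(zero_filled)
        f_sr = self.sr_branch(zero_filled)
        # SR features query the reconstruction features for artifact-free structure.
        f_sr = self.task_attn(f_sr, f_rec)
        return self.recon_head(f_rec), self.sr_head(f_sr)


if __name__ == "__main__":
    net = JointReconSRNet()
    lr_undersampled = torch.randn(1, 1, 32, 32)  # toy zero-filled LR input
    recon, sr = net(lr_undersampled)
    print(recon.shape, sr.shape)                 # (1, 1, 32, 32), (1, 1, 64, 64)
```

In this sketch the super-resolution features query the reconstruction features, mirroring the abstract's point that joint feature learning lets accurate task information flow between the two branches.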
Related papers
- Dual Arbitrary Scale Super-Resolution for Multi-Contrast MRI [23.50915512118989]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
However, radiologists are accustomed to zooming MR images at arbitrary scales rather than using a fixed scale.
We propose an implicit neural representations based dual-arbitrary multi-contrast MRI super-resolution method, called Dual-ArbNet.
arXiv Detail & Related papers (2023-07-05T14:43:26Z)
- Compound Attention and Neighbor Matching Network for Multi-contrast MRI Super-resolution [7.197850827700436]
Multi-contrast super-resolution of MRI can achieve better results than single-image super-resolution.
We propose a novel network architecture with compound-attention and neighbor matching (CANM-Net) for multi-contrast MRI SR.
CANM-Net outperforms state-of-the-art approaches in both retrospective and prospective experiments.
arXiv Detail & Related papers (2023-07-05T09:44:02Z)
- Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as the number of iterations increases.
arXiv Detail & Related papers (2023-05-04T12:20:51Z)
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
- Flexible Alignment Super-Resolution Network for Multi-Contrast MRI [7.727046305845654]
Super-resolution plays a crucial role in preprocessing low-resolution images for more precise medical analysis.
We propose the Flexible Alignment Super-Resolution Network (FASR-Net) for multi-contrast magnetic resonance images Super-Resolution.
arXiv Detail & Related papers (2022-10-07T11:07:20Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by incorporating the MRI observation matrix.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors: a high-frequency structure prior and an inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce these priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse features across contrasts for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction [38.0542877099235]
HUMUS-Net is a hybrid architecture that combines the beneficial implicit bias and efficiency of convolutions with the power of Transformer blocks in an unrolled and multi-scale network.
Our network establishes a new state of the art on the largest publicly available MRI dataset, the fastMRI dataset.
arXiv Detail & Related papers (2022-03-15T19:26:29Z)
- Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution [50.10987776141901]
Recent vision transformers along with self-attention have achieved promising results on various computer vision tasks.
We introduce an effective hybrid architecture for super-resolution (SR) tasks, which leverages local features from CNNs and long-range dependencies captured by transformers.
Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
arXiv Detail & Related papers (2022-03-15T06:52:25Z)