HUMUS-Net: Hybrid unrolled multi-scale network architecture for
accelerated MRI reconstruction
- URL: http://arxiv.org/abs/2203.08213v1
- Date: Tue, 15 Mar 2022 19:26:29 GMT
- Title: HUMUS-Net: Hybrid unrolled multi-scale network architecture for
accelerated MRI reconstruction
- Authors: Zalan Fabian, Mahdi Soltanolkotabi
- Abstract summary: HUMUS-Net is a hybrid architecture that combines the beneficial implicit bias and efficiency of convolutions with the power of Transformer blocks in an unrolled and multi-scale network.
Our network establishes new state of the art on the largest publicly available MRI dataset, the fastMRI dataset.
- Score: 38.0542877099235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In accelerated MRI reconstruction, the anatomy of a patient is recovered from
a set of under-sampled and noisy measurements. Deep learning approaches have
been proven to be successful in solving this ill-posed inverse problem and are
capable of producing very high quality reconstructions. However, current
architectures heavily rely on convolutions, which are content-independent and
have difficulties modeling long-range dependencies in images. Recently,
Transformers, the workhorse of contemporary natural language processing, have
emerged as powerful building blocks for a multitude of vision tasks. These
models split input images into non-overlapping patches, embed the patches into
lower-dimensional tokens and utilize a self-attention mechanism that does not
suffer from the aforementioned weaknesses of convolutional architectures.
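The patch-and-tokenize step described here can be sketched in a few lines of NumPy. The image size, patch size, and random projection below are illustrative stand-ins, not values from the paper (in a real Transformer the projection is learned):

```python
import numpy as np

# Hypothetical sizes for illustration: split an H x W image into
# non-overlapping p x p patches and project each flattened patch
# to a d-dimensional token.
H, W, p, d = 32, 32, 8, 16

rng = np.random.default_rng(0)
image = rng.standard_normal((H, W))
proj = rng.standard_normal((p * p, d))   # learned in practice; random here

# (H, W) -> (H//p, p, W//p, p) -> (num_patches, p*p)
patches = image.reshape(H // p, p, W // p, p).swapaxes(1, 2).reshape(-1, p * p)
tokens = patches @ proj                   # one d-dim token per patch

print(patches.shape)  # (16, 64): 16 patches of 64 pixels each
print(tokens.shape)   # (16, 16): 16 tokens of dimension d
```

Because self-attention compares every token with every other token, halving the patch size quadruples the token count and roughly multiplies the attention cost by sixteen, which is why high-resolution inputs with fine patches are expensive.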
However, Transformers incur extremely high compute and memory cost when 1) the
input image resolution is high and 2) when the image needs to be split into a
large number of patches to preserve fine detail information, both of which are
typical in low-level vision problems such as MRI reconstruction, having a
compounding effect. To tackle these challenges, we propose HUMUS-Net, a hybrid
architecture that combines the beneficial implicit bias and efficiency of
convolutions with the power of Transformer blocks in an unrolled and
multi-scale network. HUMUS-Net extracts high-resolution features via
convolutional blocks and refines low-resolution features via a novel
Transformer-based multi-scale feature extractor. Features from both levels are
then synthesized into a high-resolution output reconstruction. Our network
establishes new state of the art on the largest publicly available MRI dataset,
the fastMRI dataset. We further demonstrate the performance of HUMUS-Net on two
other popular MRI datasets and perform fine-grained ablation studies to
validate our design.
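The unrolled structure the abstract describes (alternating a learned refinement step with enforcing consistency to the measured k-space data) can be sketched as follows. This is a generic single-coil Cartesian sketch with a placeholder denoiser, not the authors' implementation; in HUMUS-Net the denoiser would be the hybrid convolutional/Transformer multi-scale block:

```python
import numpy as np

def data_consistency(x, y, mask):
    """Replace the sampled k-space entries of the current estimate x
    with the measured values y (single-coil, Cartesian sketch)."""
    k = np.fft.fft2(x)
    k[mask] = y[mask]
    return np.fft.ifft2(k)

def denoise(x):
    """Placeholder for the learned regularizer; identity keeps the
    sketch runnable without trained weights."""
    return x

def unrolled_reconstruction(y, mask, num_cascades=8):
    """Alternate learned denoising with measurement consistency,
    one cascade per unrolled iteration."""
    x = np.fft.ifft2(y)          # zero-filled initial estimate
    for _ in range(num_cascades):
        x = denoise(x)
        x = data_consistency(x, y, mask)
    return x
```

Each cascade of a trained unrolled network refines the estimate while the data-consistency step guarantees the output never contradicts the measurements that were actually acquired.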
Related papers
- VmambaIR: Visual State Space Model for Image Restoration [36.11385876754612]
We propose VmambaIR, which introduces State Space Models (SSMs) with linear complexity into comprehensive image restoration tasks.
VmambaIR achieves state-of-the-art (SOTA) performance with much fewer computational resources and parameters.
arXiv Detail & Related papers (2024-03-18T02:38:55Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Transformer and GAN Based Super-Resolution Reconstruction Network for Medical Images [0.0]
Super-resolution reconstruction has become increasingly popular in medical imaging, especially for MRI.
In this paper, we offer a deep learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN)
The integrated system can extract more precise texture information and focus more on important locations through global image matching.
arXiv Detail & Related papers (2022-12-26T09:52:12Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Multimodal-Boost: Multimodal Medical Image Super-Resolution using Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of corresponding image resolution degrades the overall performance of medical image diagnosis.
Deep learning-based single image super-resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z)
- Task Transformer Network for Joint MRI Reconstruction and Super-Resolution [35.2868027332665]
We propose an end-to-end task transformer network (T$^2$Net) for joint MRI reconstruction and super-resolution.
Our framework combines both reconstruction and super-resolution, divided into two sub-branches, whose features are expressed as queries and keys.
Experimental results show that our multi-task model significantly outperforms advanced sequential methods.
arXiv Detail & Related papers (2021-06-12T10:59:46Z) - Adaptive Gradient Balancing for UndersampledMRI Reconstruction and
Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.