IR2QSM: Quantitative Susceptibility Mapping via Deep Neural Networks with Iterative Reverse Concatenations and Recurrent Modules
- URL: http://arxiv.org/abs/2406.12300v1
- Date: Tue, 18 Jun 2024 06:17:45 GMT
- Title: IR2QSM: Quantitative Susceptibility Mapping via Deep Neural Networks with Iterative Reverse Concatenations and Recurrent Modules
- Authors: Min Li, Chen Chen, Zhuang Xiong, Ying Liu, Pengfei Rong, Shanshan Shan, Feng Liu, Hongfu Sun, Yang Gao
- Abstract summary: We propose a novel deep learning-based IR2QSM method for QSM reconstruction.
It is designed by iterating, four times, a U-net enhanced with reverse concatenations and middle recurrent modules.
Simulated and in vivo experiments were conducted to compare IR2QSM with several traditional algorithms and state-of-the-art deep learning methods.
- Score: 14.228884847425011
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantitative susceptibility mapping (QSM) is an MRI phase-based post-processing technique for extracting the distribution of tissue susceptibilities, and it has shown significant potential in studying neurological diseases. However, the ill-conditioned nature of dipole inversion makes QSM reconstruction from the tissue field prone to noise and artifacts. In this work, we propose a novel deep learning-based IR2QSM method for QSM reconstruction. It is designed by iterating, four times, a U-net enhanced with reverse concatenations and middle recurrent modules, which can dramatically improve the efficiency of latent feature utilization. Simulated and in vivo experiments were conducted to compare IR2QSM with several traditional algorithms (MEDI and iLSQR) and state-of-the-art deep learning methods (U-net, xQSM, and LPCNN). The results indicated that IR2QSM obtained QSM images with significantly higher accuracy and fewer artifacts than the other methods. In particular, IR2QSM achieved the best average NRMSE (27.59%) in the simulated experiments, which is 15.48%, 7.86%, 17.24%, 9.26%, and 29.13% lower than iLSQR, MEDI, U-net, xQSM, and LPCNN, respectively, and it produced improved QSM results with fewer artifacts for the in vivo data.
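As background for the ill-conditioned dipole inversion mentioned in the abstract, the sketch below gives a minimal NumPy version of the standard k-space dipole forward model together with a conventional NRMSE metric. It is illustrative only: the function names are ours, and the exact NRMSE convention used in the paper may differ.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0.0, 0.0, 1.0)):
    """Unit dipole kernel D(k) = 1/3 - (k . b0)^2 / |k|^2 on the FFT grid."""
    freqs = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
    b0 = np.asarray(b0_dir, dtype=float)
    b0 = b0 / np.linalg.norm(b0)
    k_dot_b0 = kx * b0[0] + ky * b0[1] + kz * b0[2]
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - (k_dot_b0 ** 2) / k2
    d[k2 == 0] = 0.0  # the DC term is undefined; zero is the usual convention
    return d

def forward_field(chi, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0.0, 0.0, 1.0)):
    """Tissue field generated by a susceptibility map: f = IFFT(D * FFT(chi))."""
    d = dipole_kernel(chi.shape, voxel_size, b0_dir)
    return np.real(np.fft.ifftn(d * np.fft.fftn(chi)))

def nrmse(chi_rec, chi_ref):
    """Normalized RMSE in percent, the accuracy metric quoted in the abstract."""
    return 100.0 * np.linalg.norm(chi_rec - chi_ref) / np.linalg.norm(chi_ref)

# D(k) vanishes on the cone where (k . b0)^2 / |k|^2 = 1/3 (the "magic angle"),
# so naive division of the measured field by D amplifies noise there; this is
# the ill-conditioning that IR2QSM and related learning-based methods target.
```

For a quick check, passing a small cubic inclusion (e.g. chi = np.zeros((64, 64, 64)); chi[28:36, 28:36, 28:36] = 1.0) through forward_field produces the characteristic dipole field pattern that the inverse problem has to undo.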
Related papers
- Affine Transformation Edited and Refined Deep Neural Network for Quantitative Susceptibility Mapping [10.772763441035945]
We propose an end-to-end AFfine Transformation Edited and Refined (AFTER) deep neural network for Quantitative Susceptibility Mapping (QSM)
It is robust against arbitrary acquisition orientation and spatial resolution up to 0.6 mm isotropic at the finest.
arXiv Detail & Related papers (2022-11-25T07:54:26Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account (a generic deep-unfolding sketch follows this list).
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations, while the lower-frequency part is assigned cheap operations to relieve the computation burden (a toy DCT-based routing sketch follows this list).
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- MoG-QSM: Model-based Generative Adversarial Deep Learning Network for Quantitative Susceptibility Mapping [10.898053030099023]
We propose a model-based framework that incorporates the benefits of generative adversarial networks to train a regularization term.
A residual network leveraging a mixture of least-squares (LS) GAN and the L1 cost was trained as the generator to learn the prior information.
MoG-QSM generates highly accurate susceptibility maps from single orientation phase maps.
arXiv Detail & Related papers (2021-01-21T02:52:05Z)
- CycleQSM: Unsupervised QSM Deep Learning using Physics-Informed CycleGAN [23.80331349122883]
We propose a novel unsupervised QSM deep learning method using physics-informed cycleGAN.
In contrast to the conventional cycleGAN, our novel cycleGAN has only one generator and one discriminator thanks to the known dipole kernel (a schematic sketch of this idea follows this list).
Experimental results confirm that the proposed method provides more accurate QSM maps compared to the existing deep learning approaches.
arXiv Detail & Related papers (2020-12-07T16:46:15Z)
- Learned Proximal Networks for Quantitative Susceptibility Mapping [9.061630971752464]
We present a Learned Proximal Convolutional Neural Network (LP-CNN) for solving the ill-posed QSM dipole inversion problem (the generic deep-unfolding sketch after this list illustrates the same idea).
This framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements.
arXiv Detail & Related papers (2020-08-11T22:35:24Z)
- Deep Learning Estimation of Multi-Tissue Constrained Spherical Deconvolution with Limited Single Shell DW-MRI [2.903217519429591]
Deep learning can be used to estimate the information content captured by 8th-order constrained spherical deconvolution (CSD).
We examine two network architectures: a sequential network of fully connected dense layers with a residual block in the middle (ResDNN), and a patch-based convolutional neural network with a residual block (ResCNN).
The fiber orientation distribution function (fODF) can be recovered with high correlation compared to the ground truth of MT-CSD, which was derived from the multi-shell DW-MRI acquisitions.
arXiv Detail & Related papers (2020-02-20T15:59:03Z)
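The MGDUN and LP-CNN entries above are both built around unrolled, model-guided reconstruction (referenced from those entries). The sketch below is a generic, hedged illustration of that deep-unfolding template applied to QSM dipole inversion, not either paper's actual architecture; learned_prox, the step size, and the iteration count are placeholders.

```python
import numpy as np

def unrolled_qsm(field, dipole_k, learned_prox, n_iters=4, step=1.0):
    """Generic deep-unfolding template: alternate a gradient step on the
    physics data term with a learned refinement (proximal) step.

    field        : measured tissue field (3-D array)
    dipole_k     : dipole kernel D on the FFT grid (same shape as `field`)
    learned_prox : callable x -> x, standing in for a trained CNN
    """
    x = np.zeros_like(field)                      # susceptibility estimate
    for _ in range(n_iters):
        # gradient of 0.5 * || IFFT(D * FFT(x)) - field ||^2 w.r.t. x
        residual = np.real(np.fft.ifftn(dipole_k * np.fft.fftn(x))) - field
        grad = np.real(np.fft.ifftn(dipole_k * np.fft.fftn(residual)))
        x = x - step * grad                       # data-consistency update
        x = learned_prox(x)                       # learned regularization step
    return x

# In MGDUN/LP-CNN-style training, `learned_prox` would be a CNN whose weights
# are optimized end to end through all unrolled iterations; for a quick test,
# a simple stand-in such as lambda x: np.clip(x, -1.0, 1.0) can be used.
```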
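The CycleQSM entry points out that the known dipole kernel lets a physics-informed cycleGAN use a single generator and a single discriminator (referenced from that entry). The snippet below is a conceptual sketch of the resulting loss structure under that reading; the generator, discriminator, and loss terms are placeholders, and the LS-GAN form of the adversarial term is an assumption borrowed from common practice rather than taken from the paper.

```python
import numpy as np

def cycle_losses(field, chi_unpaired, generator, discriminator, dipole_k):
    """Loss terms for a physics-informed cycleGAN with a single generator G:
    the fixed dipole forward model plays the role of the reverse generator."""
    def forward(chi):                      # known physics: chi -> tissue field
        return np.real(np.fft.ifftn(dipole_k * np.fft.fftn(chi)))

    chi_hat = generator(field)             # G: field -> susceptibility
    # cycle consistency: field -> G -> physics -> field
    loss_cycle_f = np.mean((forward(chi_hat) - field) ** 2)
    # cycle consistency: chi -> physics -> G -> chi (unpaired chi samples)
    loss_cycle_x = np.mean((generator(forward(chi_unpaired)) - chi_unpaired) ** 2)
    # adversarial term: the discriminator should score generated chi as "real"
    loss_adv = np.mean((discriminator(chi_hat) - 1.0) ** 2)   # LS-GAN style
    return loss_cycle_f, loss_cycle_x, loss_adv
```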
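The frequency-aware dynamic network entry describes splitting the input by its DCT coefficients and spending computation only where high frequencies dominate (referenced from that entry). The toy routing function below illustrates one way such a decision could look; the 4x4 low-frequency corner, the energy threshold, and the branch callables are all assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn

def route_patches(patches, cheap_branch, expensive_branch, hf_threshold=0.2):
    """Send each patch to an expensive or cheap branch depending on how much
    of its DCT energy lies outside the low-frequency corner."""
    outputs = []
    for p in patches:                              # p: 2-D array (H, W)
        coeffs = dctn(p, norm="ortho")
        h, w = coeffs.shape
        low = coeffs[: h // 4, : w // 4]           # low-frequency corner
        total = np.sum(coeffs ** 2) + 1e-12
        hf_ratio = 1.0 - np.sum(low ** 2) / total  # share of high-freq energy
        if hf_ratio > hf_threshold:
            outputs.append(expensive_branch(p))    # detail-rich patch
        else:
            outputs.append(cheap_branch(p))        # smooth patch
    return outputs
```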