Affine Transformation Edited and Refined Deep Neural Network for
Quantitative Susceptibility Mapping
- URL: http://arxiv.org/abs/2211.13942v1
- Date: Fri, 25 Nov 2022 07:54:26 GMT
- Title: Affine Transformation Edited and Refined Deep Neural Network for
Quantitative Susceptibility Mapping
- Authors: Zhuang Xiong, Yang Gao, Feng Liu, Hongfu Sun
- Abstract summary: We propose an end-to-end AFfine Transformation Edited and Refined (AFTER) deep neural network for Quantitative Susceptibility Mapping (QSM).
It is robust against arbitrary acquisition orientation and spatial resolution up to 0.6 mm isotropic at the finest.
- Score: 10.772763441035945
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Deep neural networks have demonstrated great potential in solving dipole
inversion for Quantitative Susceptibility Mapping (QSM). However, the
performance of most existing deep learning methods degrades drastically with
mismatched sequence parameters such as acquisition orientation and spatial
resolution. We propose an end-to-end AFfine Transformation Edited and Refined
(AFTER) deep neural network for QSM, which is robust against arbitrary
acquisition orientation and spatial resolution up to 0.6 mm isotropic at the
finest. The AFTER-QSM neural network starts with a forward affine
transformation layer, followed by an Unet for dipole inversion, then an inverse
affine transformation layer, followed by a Residual Dense Network (RDN) for QSM
refinement. Simulation and in-vivo experiments demonstrated that the proposed
AFTER-QSM network architecture had excellent generalizability. It can
successfully reconstruct susceptibility maps from highly oblique and
anisotropic scans, leading to the best image quality assessments in simulation
tests and suppressed streaking artifacts and noise levels for in-vivo
experiments compared with other methods. Furthermore, ablation studies showed
that the RDN refinement network significantly reduced image blurring and
susceptibility underestimation due to affine transformations. In addition, the
AFTER-QSM network substantially shortened the reconstruction time from minutes
using conventional methods to only a few seconds.
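As a concrete reading of the pipeline described above, the four stages map naturally onto a small PyTorch sketch. This is not the authors' implementation: `dipole_net`, `refine_net`, and `invert_affine` are hypothetical stand-ins for the U-Net, the RDN, and the affine bookkeeping, and the affine matrices are assumed to be given in the normalized coordinates that `torch.nn.functional.affine_grid` expects.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def invert_affine(theta: torch.Tensor) -> torch.Tensor:
    """Invert a batch of (N, 3, 4) affines by appending the homogeneous row,
    inverting, and dropping it again."""
    n = theta.shape[0]
    bottom = theta.new_tensor([0.0, 0.0, 0.0, 1.0]).view(1, 1, 4).expand(n, 1, 4)
    full = torch.cat([theta, bottom], dim=1)           # (N, 4, 4)
    return torch.linalg.inv(full)[:, :3, :]            # back to (N, 3, 4)

class AfterQsmSketch(nn.Module):
    """Hypothetical pipeline: forward affine -> dipole-inversion network ->
    inverse affine -> refinement network."""
    def __init__(self, dipole_net: nn.Module, refine_net: nn.Module):
        super().__init__()
        self.dipole_net = dipole_net    # e.g. a 3D U-Net for dipole inversion
        self.refine_net = refine_net    # e.g. a Residual Dense Network

    def forward(self, field: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        # field: (N, 1, D, H, W) local field map from an oblique/anisotropic scan
        # theta: (N, 3, 4) affine mapping it onto the canonical axial, isotropic grid
        grid = F.affine_grid(theta, field.shape, align_corners=False)
        canonical = F.grid_sample(field, grid, align_corners=False)      # forward affine
        chi = self.dipole_net(canonical)                                  # dipole inversion
        grid_back = F.affine_grid(invert_affine(theta), chi.shape, align_corners=False)
        chi_native = F.grid_sample(chi, grid_back, align_corners=False)  # inverse affine
        return self.refine_net(chi_native)              # refine away interpolation blur
```

The refinement stage matters in this sketch because the two `grid_sample` interpolations blur the susceptibility map, which is consistent with the ablation finding that the RDN reduces blurring and underestimation.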
Related papers
- Deep Learning-based MRI Reconstruction with Artificial Fourier Transform (AFT)-Net [14.146848823672677]
We introduce a unified complex-valued deep learning framework, the Artificial Fourier Transform Network (AFTNet).
AFTNet can be readily used to solve image inverse problems in domain transformation.
We show that AFTNet achieves superior accelerated MRI reconstruction compared to existing approaches.
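The summary above only hints at the mechanism, so purely as an illustration of the general idea of a learnable complex-valued transform initialized at the discrete Fourier transform (not the AFTNet architecture itself), a minimal PyTorch sketch could look like this:

```python
import math
import torch
import torch.nn as nn

class LearnableDFT1d(nn.Module):
    """A trainable complex-valued linear map initialized to the 1-D DFT.
    Illustrative only; not taken from the AFTNet paper or code."""
    def __init__(self, n: int):
        super().__init__()
        k = torch.arange(n, dtype=torch.float32).view(-1, 1)
        j = torch.arange(n, dtype=torch.float32).view(1, -1)
        angle = -2.0 * math.pi * k * j / n
        dft = torch.complex(torch.cos(angle), torch.sin(angle)) / math.sqrt(n)
        self.weight = nn.Parameter(dft)                 # complex64, trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., n) signal; returns its learned "spectrum"
        return x.to(torch.complex64) @ self.weight.t()

# At initialization the layer reproduces the orthonormal FFT exactly.
x = torch.randn(4, 8, dtype=torch.complex64)
assert torch.allclose(LearnableDFT1d(8)(x), torch.fft.fft(x, norm="ortho"), atol=1e-5)
```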
arXiv Detail & Related papers (2023-12-18T02:50:45Z) - Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape
Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, the neural Poisson surface reconstruction not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z) - Reparameterization through Spatial Gradient Scaling [69.27487006953852]
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
We present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional networks.
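The summary describes rescaling gradients spatially without changing the forward computation. A generic way to get that behavior in PyTorch, with a made-up 3x3 scaling mask rather than the paper's derived schedule, is a hook on each convolution's weight gradient:

```python
import torch
import torch.nn as nn

def apply_spatial_gradient_scaling(model: nn.Module, scale: torch.Tensor):
    """Multiply every Conv2d weight gradient by a (kH, kW) spatial mask.
    The mask here is a hypothetical example, not the paper's learned scaling."""
    handles = []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # The hook fires on the weight's gradient during the backward pass.
            handles.append(m.weight.register_hook(lambda g, s=scale: g * s.to(g.device)))
    return handles   # keep these to remove the hooks later if needed

# Example: emphasize the centre of every 3x3 kernel during learning.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1))
mask = torch.tensor([[0.5, 0.5, 0.5],
                     [0.5, 2.0, 0.5],
                     [0.5, 0.5, 0.5]])
apply_spatial_gradient_scaling(net, mask)
net(torch.randn(1, 3, 32, 32)).pow(2).mean().backward()   # grads are now rescaled
```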
arXiv Detail & Related papers (2023-03-05T17:57:33Z) - JSRNN: Joint Sampling and Reconstruction Neural Networks for High
Quality Image Compressed Sensing [8.902545322578925]
The proposed framework comprises two sub-networks: a sampling sub-network and a reconstruction sub-network.
In the reconstruction sub-network, a cascade network combining stacked denoising autoencoder (SDA) and convolutional neural network (CNN) is designed to reconstruct signals.
This framework outperforms many other state-of-the-art methods, especially at low sampling rates.
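The two-stage layout can be sketched generically; the block size, layer widths, and residual refinement below are assumptions for illustration, not the JSRNN configuration:

```python
import torch
import torch.nn as nn

class JointSamplingReconstruction(nn.Module):
    """Illustrative two-stage layout: a learned linear sampling operator
    followed by an autoencoder-style initial reconstruction and a CNN refiner."""
    def __init__(self, block: int = 32, ratio: float = 0.1):
        super().__init__()
        n, m = block * block, max(1, int(ratio * block * block))
        self.block = block
        self.sample = nn.Linear(n, m, bias=False)        # sampling sub-network
        self.sda = nn.Sequential(nn.Linear(m, n // 2), nn.ReLU(),
                                 nn.Linear(n // 2, n))   # SDA-style initial estimate
        self.cnn = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))  # CNN refinement

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, block, block) image patch
        bsz, b = x.shape[0], self.block
        y = self.sample(x.flatten(1))                    # compressed measurements
        coarse = self.sda(y).view(bsz, 1, b, b)          # initial reconstruction
        return coarse + self.cnn(coarse)                 # residual refinement
```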
arXiv Detail & Related papers (2022-11-11T02:20:30Z) - T$^2$LR-Net: An Unrolling Reconstruction Network Learning Transformed
Tensor Low-Rank prior for Dynamic MR Imaging [6.101233798770526]
We introduce a flexible model based on TTNN with the ability to exploit the tensor low-rank prior in a transformed domain.
We also introduce a model-based deep unrolling reconstruction network to learn the transformed tensor low-rank prior.
The proposed framework can provide improved recovery results compared with the state-of-the-art optimization-based and unrolling network-based methods.
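The core operation behind a transformed tensor low-rank prior is singular-value thresholding applied after mapping into a transform domain. A minimal sketch, using the FFT along the frame axis purely as a stand-in for the learned transform:

```python
import torch

def transformed_svt(x: torch.Tensor, tau: float) -> torch.Tensor:
    """Soft singular-value thresholding of a dynamic series in a transform
    domain. x: (frames, H, W); the FFT along the frame axis stands in for
    the paper's learned transform."""
    z = torch.fft.fft(x.to(torch.complex64), dim=0)       # into the transform domain
    z_mat = z.reshape(z.shape[0], -1)                     # frames x pixels matrix
    u, s, vh = torch.linalg.svd(z_mat, full_matrices=False)
    s = torch.clamp(s - tau, min=0.0)                     # soft-threshold the spectrum
    z_low = (u * s.to(u.dtype)) @ vh                      # low-rank approximation
    return torch.fft.ifft(z_low.reshape(z.shape), dim=0).real
```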
arXiv Detail & Related papers (2022-09-08T14:11:02Z) - Backward Gradient Normalization in Deep Neural Networks [68.8204255655161]
We introduce a new technique for gradient normalization during neural network training.
The gradients are rescaled during the backward pass using normalization layers introduced at certain points within the network architecture.
Results on tests with very deep neural networks show that the new technique effectively controls the gradient norm.
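A self-contained way to realize gradient rescaling during the backward pass is an identity-in-forward module whose backward pass renormalizes the gradient; the unit-L2 rule below is a simple stand-in, not the paper's exact normalization:

```python
import torch
import torch.nn as nn

class _NormalizeGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x                                      # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out / (grad_out.norm() + 1e-12)   # rescale to unit L2 norm

class GradNorm(nn.Module):
    """Insert at chosen depths to control the gradient norm flowing backward."""
    def forward(self, x):
        return _NormalizeGrad.apply(x)

net = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), GradNorm(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
net(torch.randn(8, 64)).mean().backward()             # gradient renormalized mid-network
```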
arXiv Detail & Related papers (2021-06-17T13:24:43Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
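Leaving the recurrence aside, the over-/under-complete idea can be illustrated with two parallel branches, one that enlarges the spatial size in its encoder and one that shrinks it; everything below is a schematic assumption rather than the OUCR design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSketch(nn.Module):
    """Schematic over-/under-complete pair (recurrence omitted): one branch
    enlarges the spatial size in its encoder, the other shrinks it."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.over_enc = nn.Conv2d(1, ch, 3, padding=1)
        self.over_dec = nn.Conv2d(ch, ch, 3, padding=1)
        self.under_enc = nn.Conv2d(1, ch, 3, padding=1)
        self.under_dec = nn.Conv2d(ch, ch, 3, padding=1)
        self.fuse = nn.Conv2d(2 * ch, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        # Overcomplete path: upsample, convolve, return to the original size.
        o = F.interpolate(torch.relu(self.over_enc(x)), scale_factor=2)
        o = F.interpolate(torch.relu(self.over_dec(o)), size=(h, w))
        # Undercomplete path: downsample, convolve, return to the original size.
        u = F.max_pool2d(torch.relu(self.under_enc(x)), 2)
        u = F.interpolate(torch.relu(self.under_dec(u)), size=(h, w))
        return self.fuse(torch.cat([o, u], dim=1))
```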
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Learned Proximal Networks for Quantitative Susceptibility Mapping [9.061630971752464]
We present a Learned Proximal Convolutional Neural Network (LP-CNN) for solving the ill-posed QSM dipole inversion problem.
This framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements.
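The general recipe behind a learned proximal scheme alternates a physics-based data-consistency step, here the dipole convolution evaluated in k-space, with a CNN acting as the learned proximal operator. A schematic unrolled loop, with a hypothetical `prox_cnn`, a simplified unweighted data term, and B0 assumed along the first axis:

```python
import torch
import torch.nn as nn

def dipole_kernel(shape, device=None):
    """k-space dipole kernel D = 1/3 - kz^2 / |k|^2, with B0 along the first axis."""
    kz, ky, kx = [torch.fft.fftfreq(n, device=device) for n in shape]
    kz, ky, kx = torch.meshgrid(kz, ky, kx, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    return 1.0 / 3.0 - torch.where(k2 > 0, kz ** 2 / k2, torch.zeros_like(k2))

def unrolled_dipole_inversion(phi, prox_cnn: nn.Module, iters: int = 5, step: float = 1.0):
    """phi: (D, H, W) local field map; prox_cnn is a hypothetical learned
    proximal operator mapping (1, 1, D, H, W) -> (1, 1, D, H, W)."""
    d = dipole_kernel(phi.shape, device=phi.device)
    chi = torch.zeros_like(phi)
    for _ in range(iters):
        # Gradient step on the data term 0.5 * || F^-1 D F chi - phi ||^2.
        resid = torch.fft.ifftn(d * torch.fft.fftn(chi)).real - phi
        chi = chi - step * torch.fft.ifftn(d * torch.fft.fftn(resid)).real
        # Learned proximal / regularization step.
        chi = prox_cnn(chi[None, None])[0, 0]
    return chi
```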
arXiv Detail & Related papers (2020-08-11T22:35:24Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into the conventional SISR algorithm and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z) - Unsupervised Adaptive Neural Network Regularization for Accelerated
Radial Cine MRI [3.6280929178575994]
We propose an iterative reconstruction scheme for 2D radial cine MRI based on ground truth-free unsupervised learning of shallow convolutional neural networks.
The network is trained to approximate patches of the current estimate of the solution during the reconstruction.
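That training loop can be sketched generically: a shallow CNN is fitted to patches of the current estimate, with no ground truth, and its limited capacity supplies the regularization. The patch size, network width, and number of epochs below are illustrative assumptions:

```python
import torch
import torch.nn as nn

def patchwise_cnn_regularizer(estimate: torch.Tensor, patch: int = 32,
                              epochs: int = 50, lr: float = 1e-3) -> torch.Tensor:
    """Fit a shallow CNN to patches of the current (H, W) estimate, with no
    ground truth; the network's limited capacity acts as the regularizer."""
    net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    patches = (estimate.unfold(0, patch, patch)
                       .unfold(1, patch, patch)
                       .reshape(-1, 1, patch, patch))
    for _ in range(epochs):
        opt.zero_grad()
        (net(patches) - patches).pow(2).mean().backward()
        opt.step()
    with torch.no_grad():
        return net(estimate[None, None])[0, 0]     # regularized image
```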
arXiv Detail & Related papers (2020-02-10T14:47:20Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
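As a concrete illustration of fusing two neighboring layers, two linear maps collapse into one by multiplying their weights, which is exact only when no nonlinearity sits between them; the paper's scheme for full networks is more involved:

```python
import numpy as np

def fuse_linear_layers(w1, b1, w2, b2):
    """Collapse y = W2 (W1 x + b1) + b2 into a single layer y = W x + b."""
    return w2 @ w1, w2 @ b1 + b2

# Tiny check that the fused layer matches the two-layer computation.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
w2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
x = rng.normal(size=4)
w, b = fuse_linear_layers(w1, b1, w2, b2)
assert np.allclose(w @ x + b, w2 @ (w1 @ x + b1) + b2)
```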
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.