Deep J-Sense: Accelerated MRI Reconstruction via Unrolled Alternating
Optimization
- URL: http://arxiv.org/abs/2103.02087v1
- Date: Tue, 2 Mar 2021 23:22:22 GMT
- Title: Deep J-Sense: Accelerated MRI Reconstruction via Unrolled Alternating
Optimization
- Authors: Marius Arvinte, Sriram Vishwanath, Ahmed H. Tewfik, and Jonathan I.
Tamir
- Abstract summary: We introduce Deep J-Sense as a deep learning approach that builds on unrolled alternating minimization.
Our algorithm refines both the magnetization (image) kernel and the coil sensitivity maps.
Experimental results on a subset of the knee fastMRI dataset show that this increases reconstruction performance.
- Score: 23.328386107496105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accelerated multi-coil magnetic resonance imaging reconstruction has seen a
substantial recent improvement by combining compressed sensing with deep learning.
However, most of these methods rely on estimates of the coil sensitivity
profiles, or on calibration data for estimating model parameters. Prior work
has shown that these methods degrade in performance when the quality of these
estimates is poor or when the scan parameters differ from the training
conditions. Here we introduce Deep J-Sense as a deep learning approach that
builds on unrolled alternating minimization and increases robustness: our
algorithm refines both the magnetization (image) kernel and the coil
sensitivity maps. Experimental results on a subset of the knee fastMRI dataset
show that this increases reconstruction performance and provides a significant
degree of robustness to varying acceleration factors and calibration region
sizes.
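The abstract gives no implementation details, so the following NumPy sketch only illustrates the generic structure it refers to: alternating minimization that updates both the image and the coil sensitivity maps under a SENSE-style forward model y_c = mask * F(s_c * x). It is not the authors' Deep J-Sense algorithm (which operates on k-space kernels and unrolls the updates into a trained network); the initialization, step sizes, and iteration counts here are assumptions.

```python
import numpy as np

def fft2c(x):
    """Centered, orthonormal 2D FFT over the last two axes."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x, axes=(-2, -1)),
                                       norm="ortho"), axes=(-2, -1))

def ifft2c(k):
    """Centered, orthonormal 2D inverse FFT over the last two axes."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k, axes=(-2, -1)),
                                        norm="ortho"), axes=(-2, -1))

def forward(x, maps, mask):
    """SENSE-style forward model: per-coil k-space = mask * F(map_c * image)."""
    return mask * fft2c(maps * x[None, ...])

def alternating_recon(y, mask, n_outer=10, n_inner=5):
    """Alternate least-squares gradient updates on the image and the coil maps.

    y    : (coils, H, W) complex under-sampled k-space
    mask : (H, W) binary sampling mask
    Returns the estimated image and sensitivity maps.
    """
    n_coils = y.shape[0]
    x = ifft2c(y).sum(axis=0)                   # zero-filled, coil-summed init
    maps = np.ones_like(y) / np.sqrt(n_coils)   # flat initial sensitivities
    eps = 1e-12
    for _ in range(n_outer):
        # Image update with the maps held fixed: minimize ||mask*F(maps*x) - y||^2.
        step = 1.0 / (np.max(np.sum(np.abs(maps) ** 2, axis=0)) + eps)
        for _ in range(n_inner):
            r = forward(x, maps, mask) - y
            x = x - step * np.sum(np.conj(maps) * ifft2c(mask * r), axis=0)
        # Sensitivity-map update with the image held fixed.
        step = 1.0 / (np.max(np.abs(x)) ** 2 + eps)
        for _ in range(n_inner):
            r = forward(x, maps, mask) - y
            maps = maps - step * np.conj(x)[None, ...] * ifft2c(mask * r)
    return x, maps
```

In an unrolled deep-learning variant, each inner update would be followed (or replaced) by a learned regularization step, with the whole chain of a fixed number of iterations trained end to end.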
Related papers
- Robust plug-and-play methods for highly accelerated non-Cartesian MRI reconstruction [2.724485028696543]
We propose a fully unsupervised preprocessing pipeline to generate clean, noiseless MRI signals from multicoil data.
When combined with preconditioning techniques, our approach achieves robust MRI reconstruction for high-quality data.
arXiv Detail & Related papers (2024-11-04T10:27:57Z)
- Deep Multi-contrast Cardiac MRI Reconstruction via vSHARP with Auxiliary Refinement Network [7.043932618116216]
We propose a deep learning-based reconstruction method for 2D dynamic multi-contrast, multi-scheme, and multi-acceleration MRI.
Our approach integrates the state-of-the-art vSHARP model, which utilizes half-quadratic variable splitting and ADMM optimization.
arXiv Detail & Related papers (2024-11-02T15:59:35Z)
- Inference Stage Denoising for Undersampled MRI Reconstruction [13.8086726938161]
Reconstruction of magnetic resonance imaging (MRI) data has been positively affected by deep learning.
A key challenge remains: to improve generalisation to distribution shifts between the training and testing data.
arXiv Detail & Related papers (2024-02-12T12:50:10Z)
- JSMoCo: Joint Coil Sensitivity and Motion Correction in Parallel MRI with a Self-Calibrating Score-Based Diffusion Model [3.3053426917821134]
We propose to jointly estimate the motion parameters and coil sensitivity maps for under-sampled MRI reconstruction.
Our method is capable of reconstructing high-quality MRI images from sparsely-sampled k-space data, even affected by motion.
arXiv Detail & Related papers (2023-10-14T17:11:25Z)
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
- ReconFormer: Accelerated MRI Reconstruction Using Recurrent Transformer [60.27951773998535]
We propose a recurrent transformer model, namely ReconFormer, for MRI reconstruction.
It can iteratively reconstruct high-fidelity magnetic resonance images from highly under-sampled k-space data.
We show that it achieves significant improvements over the state-of-the-art methods with better parameter efficiency.
arXiv Detail & Related papers (2022-01-23T21:58:19Z)
- Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance.
arXiv Detail & Related papers (2021-11-18T03:06:25Z)
- Deep MRI Reconstruction with Radial Subsampling [2.7998963147546148]
Retrospectively applying a subsampling mask to the k-space data is a way of simulating accelerated acquisition of k-space data in a real clinical setting (a minimal masking sketch appears after this list).
We compare and review the effect of applying either rectilinear or radial retrospective subsampling on the quality of reconstructions produced by trained deep neural networks.
arXiv Detail & Related papers (2021-08-17T17:45:51Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Adaptive Gradient Method with Resilience and Momentum [120.83046824742455]
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem).
AdaRem adjusts the parameter-wise learning rate according to whether the direction of a parameter's past changes is aligned with the direction of the current gradient (see the sketch after this list).
Our method outperforms previous adaptive learning-rate algorithms in terms of training speed and test error.
arXiv Detail & Related papers (2020-10-21T14:49:00Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
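As a companion to the "Deep MRI Reconstruction with Radial Subsampling" entry above: retrospective subsampling means masking fully sampled k-space to emulate an accelerated scan. The sketch below builds a random rectilinear (Cartesian phase-encode) mask with a fully sampled calibration region, applies it, and forms a zero-filled root-sum-of-squares image; the mask generator and its parameters are illustrative assumptions rather than that paper's exact rectilinear or radial schemes.

```python
import numpy as np

def rectilinear_mask(h, w, acceleration=4, calib_lines=24, seed=0):
    """Keep roughly one in `acceleration` phase-encode columns at random,
    plus a fully sampled central calibration region (illustrative only)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=bool)
    mask[:, rng.random(w) < 1.0 / acceleration] = True
    centre = w // 2
    mask[:, centre - calib_lines // 2 : centre + calib_lines // 2] = True
    return mask

def retrospective_undersample(kspace, mask):
    """Zero out the k-space samples that would not have been acquired.
    kspace has shape (coils, H, W); mask has shape (H, W)."""
    return kspace * mask[None, ...]

def zero_filled_rss(kspace_us):
    """Root-sum-of-squares combination of the per-coil inverse FFTs."""
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace_us, axes=(-2, -1)), norm="ortho"),
        axes=(-2, -1))
    return np.sqrt(np.sum(np.abs(coil_imgs) ** 2, axis=0))
```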
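The "Adaptive Gradient Method with Resilience and Momentum" entry describes raising or lowering each parameter's learning rate depending on whether the current gradient agrees with that parameter's past update direction. The snippet below is a hypothetical rendering of that one-sentence description, not the published AdaRem algorithm; the smoothing rule, constants, and scaling formula are assumptions.

```python
import numpy as np

def adarem_like_update(param, grad, history, base_lr=0.01, beta=0.9, scale=0.5):
    """One SGD-style step whose per-parameter learning rate is raised when the
    current gradient points the same way as the smoothed history of past
    gradients and lowered when it points the opposite way. Hypothetical
    illustration of the idea summarized above, not the paper's exact rule."""
    history = beta * history + (1.0 - beta) * np.sign(grad)  # smoothed past direction
    agreement = np.sign(grad) * history                      # in [-1, 1]
    lr = base_lr * (1.0 + scale * agreement)                 # parameter-wise rate
    return param - lr * grad, history
```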
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.