Motion-Robust T2* Quantification from Gradient Echo MRI with Physics-Informed Deep Learning
- URL: http://arxiv.org/abs/2502.17209v1
- Date: Mon, 24 Feb 2025 14:41:53 GMT
- Title: Motion-Robust T2* Quantification from Gradient Echo MRI with Physics-Informed Deep Learning
- Authors: Hannah Eichhorn, Veronika Spieker, Kerstin Hammernik, Elisa Saks, Lina Felsner, Kilian Weiss, Christine Preibisch, Julia A. Schnabel
- Abstract summary: Motion correction is crucial to obtain high-quality T2* maps. We extend our previously introduced learning-based physics-informed motion correction method, PHIMO.
- Score: 2.6290476986603073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: T2* quantification from gradient echo magnetic resonance imaging is particularly affected by subject motion due to the high sensitivity to magnetic field inhomogeneities, which are influenced by motion and might cause signal loss. Thus, motion correction is crucial to obtain high-quality T2* maps. Methods: We extend our previously introduced learning-based physics-informed motion correction method, PHIMO, by utilizing acquisition knowledge to enhance the reconstruction performance for challenging motion patterns and increase PHIMO's robustness to varying strengths of magnetic field inhomogeneities across the brain. We perform comprehensive evaluations regarding motion detection accuracy and image quality for data with simulated and real motion. Results: Our extended version of PHIMO outperforms the learning-based baseline methods both qualitatively and quantitatively with respect to line detection and image quality. Moreover, PHIMO performs on-par with a conventional state-of-the-art motion correction method for T2* quantification from gradient echo MRI, which relies on redundant data acquisition. Conclusion: PHIMO's competitive motion correction performance, combined with a reduction in acquisition time by over 40% compared to the state-of-the-art method, makes it a promising solution for motion-robust T2* quantification in research settings and clinical routine.
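The abstract describes detecting and excluding motion-corrupted k-space lines before reconstructing the echoes and fitting T2*. The Python sketch below illustrates only that general idea under simplifying assumptions: a hypothetical exclusion mask, a naive zero-filled inverse FFT in place of PHIMO's learned physics-informed reconstruction, and a log-linear monoexponential fit across echoes. It is not the authors' implementation.

```python
import numpy as np

def exclude_corrupted_lines(kspace, keep_mask):
    """Zero out phase-encoding lines flagged as motion-corrupted.
    kspace: complex array (n_echoes, n_pe, n_ro); keep_mask: bool (n_pe,)."""
    return kspace * keep_mask[None, :, None]

def naive_recon(kspace):
    """Zero-filled inverse FFT per echo (illustrative only; PHIMO itself uses
    a learned, physics-informed reconstruction)."""
    shifted = np.fft.ifftshift(kspace, axes=(-2, -1))
    return np.fft.fftshift(np.fft.ifft2(shifted, axes=(-2, -1)), axes=(-2, -1))

def fit_t2star(magnitude, echo_times):
    """Voxel-wise log-linear fit of S(TE) = S0 * exp(-TE / T2*)."""
    n_echoes = magnitude.shape[0]
    log_s = np.log(np.clip(magnitude, 1e-8, None)).reshape(n_echoes, -1)
    design = np.stack([np.ones(n_echoes), -np.asarray(echo_times)], axis=1)
    coeffs, *_ = np.linalg.lstsq(design, log_s, rcond=None)  # rows: [log S0, R2*]
    r2star = np.clip(coeffs[1], 1e-6, None)
    return (1.0 / r2star).reshape(magnitude.shape[1:])        # T2* map

# Toy usage with random data (shapes only; not physically meaningful)
echoes, n_pe, n_ro = 8, 92, 112
kspace = np.random.randn(echoes, n_pe, n_ro) + 1j * np.random.randn(echoes, n_pe, n_ro)
keep = np.random.rand(n_pe) > 0.1            # hypothetical detected exclusion mask
img = np.abs(naive_recon(exclude_corrupted_lines(kspace, keep)))
t2star_map = fit_t2star(img, echo_times=np.arange(1, echoes + 1) * 5e-3)
```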
Related papers
- DIMA: DIffusing Motion Artifacts for unsupervised correction in brain MRI images [4.117232425638352]
DIMA (DIffusing Motion Artifacts) is a novel framework that leverages diffusion models to enable unsupervised motion artifact correction in brain MRI.
Our two-phase approach first trains a diffusion model on unpaired motion-affected images to learn the distribution of motion artifacts.
This model then generates realistic motion artifacts on clean images, creating paired datasets suitable for supervised training of correction networks.
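As a rough illustration of the second phase described above, the sketch below turns clean images into (corrupted, clean) training pairs. The artifact generator here is only a hypothetical stand-in for the trained diffusion model, and all names are illustrative.

```python
import numpy as np

def placeholder_artifact_generator(clean_img, rng):
    """Stand-in for the trained diffusion model of phase one: adds simple
    ghosting by randomizing the phase of each k-space line (illustration only)."""
    k = np.fft.fft2(clean_img)
    phase_err = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(clean_img.shape[0], 1)))
    return np.abs(np.fft.ifft2(k * phase_err))

def build_paired_dataset(clean_images, rng=None):
    """Generate motion-like artifacts on clean images, yielding (corrupted, clean)
    pairs for supervised training of a correction network."""
    if rng is None:
        rng = np.random.default_rng(0)
    return [(placeholder_artifact_generator(img, rng), img) for img in clean_images]

# Toy usage
clean = [np.random.rand(64, 64) for _ in range(4)]
pairs = build_paired_dataset(clean)
corrupted, target = pairs[0]
```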
arXiv Detail & Related papers (2025-04-09T10:43:38Z)
- Highly efficient non-rigid registration in k-space with application to cardiac Magnetic Resonance Imaging [10.618048010632728]
We propose a novel self-supervised deep learning-based framework, dubbed the Local-All Pass Attention Network (LAPANet) for non-rigid motion estimation.
LAPANet was evaluated on cardiac motion estimation across various sampling trajectories and acceleration rates.
The achieved high temporal resolution (less than 5 ms) for non-rigid motion opens new avenues for motion detection, tracking and correction in dynamic and real-time MRI applications.
arXiv Detail & Related papers (2024-10-24T15:19:59Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN)
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across a range of sampling angles.
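For intuition, here is a loose sketch of the self-supervised splitting idea with rotational augmentation, as suggested by the summary: projection views are split into disjoint subsets, each subset is reconstructed (with a crude placeholder backprojection), and rotated copies enlarge the set of input/target pairs. This is an illustration only, not the RAN2I implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def placeholder_recon(sinogram, angles):
    """Stand-in for filtered backprojection: crude unfiltered backprojection,
    sufficient to illustrate the data flow (not image quality)."""
    size = sinogram.shape[1]
    img = np.zeros((size, size))
    for proj, theta in zip(sinogram, angles):
        smeared = np.tile(proj, (size, 1))                       # smear each view
        img += rotate(smeared, np.degrees(theta), reshape=False, order=1)
    return img / len(angles)

def noise2inverse_pairs(sinogram, angles, n_splits=2, n_rot=4, rng=None):
    """Split views into disjoint subsets, reconstruct each, and use rotated
    copies ('rotational augmentation') to multiply the training pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.permutation(len(angles))
    splits = np.array_split(idx, n_splits)
    recons = [placeholder_recon(sinogram[s], angles[s]) for s in splits]
    pairs = []
    for k in range(n_splits):
        inp = np.mean([recons[j] for j in range(n_splits) if j != k], axis=0)
        tgt = recons[k]
        for angle in np.linspace(0, 270, n_rot):                 # 0, 90, 180, 270 deg
            pairs.append((rotate(inp, angle, reshape=False, order=1),
                          rotate(tgt, angle, reshape=False, order=1)))
    return pairs

# Toy usage: random sinogram with 32 views of 64 detector bins
angles = np.linspace(0, np.pi, 32, endpoint=False)
sino = np.random.rand(32, 64)
pairs = noise2inverse_pairs(sino, angles)
```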
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
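A toy forward model can make the "spectral" lifting concrete: each off-resonance bin contributes its usual Fourier-encoded signal multiplied by an extra phase exp(i 2π f t), so the object is naturally modeled as a stack over frequency bins. The discretization and names below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def off_resonance_signal(image_stack, freqs_hz, readout_times, kx):
    """Toy 1D forward model: image_stack[b, x] is the object component in
    off-resonance bin b; each sample at time t picks up exp(i*2*pi*f_b*t)
    on top of the spatial encoding exp(-i*2*pi*k*x/N)."""
    n_bins, n_x = image_stack.shape
    x = np.arange(n_x) - n_x // 2
    signal = np.zeros(len(readout_times), dtype=complex)
    for i, (t, k) in enumerate(zip(readout_times, kx)):
        encode = np.exp(-2j * np.pi * k * x / n_x)          # spatial encoding
        for b, f in enumerate(freqs_hz):
            signal[i] += np.sum(image_stack[b] * encode) * np.exp(2j * np.pi * f * t)
    return signal

# Toy usage: 3 off-resonance bins, 64 voxels, one readout of 64 samples
stack = np.random.rand(3, 64)
freqs = np.array([-50.0, 0.0, 50.0])                        # Hz
times = np.arange(64) * 1e-5                                # s
kx = np.arange(64) - 32
sig = off_resonance_signal(stack, freqs, times, kx)
```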
arXiv Detail & Related papers (2023-11-22T05:44:51Z) - JSMoCo: Joint Coil Sensitivity and Motion Correction in Parallel MRI
with a Self-Calibrating Score-Based Diffusion Model [3.3053426917821134]
We propose to jointly estimate the motion parameters and coil sensitivity maps for under-sampled MRI reconstruction.
Our method is capable of reconstructing high-quality MRI images from sparsely-sampled k-space data, even affected by motion.
arXiv Detail & Related papers (2023-10-14T17:11:25Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
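A minimal PyTorch sketch of the kind of swap described, assuming a simple convolutional block: batch normalization is replaced by group or layer normalization, which do not rely on batch statistics that shift when artifacts change the input distribution. The block architecture is an assumption, not the paper's model.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, norm="group"):
    """Small convolutional block with a configurable normalization layer."""
    if norm == "batch":
        norm_layer = nn.BatchNorm2d(out_ch)
    elif norm == "group":
        norm_layer = nn.GroupNorm(num_groups=8, num_channels=out_ch)
    elif norm == "layer":
        # LayerNorm-style normalization over all channels and spatial positions,
        # implemented as GroupNorm with a single group
        norm_layer = nn.GroupNorm(num_groups=1, num_channels=out_ch)
    else:
        raise ValueError(norm)
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         norm_layer,
                         nn.ReLU(inplace=True))

# Toy usage: same backbone, different normalization schemes
x = torch.randn(2, 1, 64, 64)
for scheme in ("batch", "group", "layer"):
    y = conv_block(1, 16, norm=scheme)(x)
    print(scheme, tuple(y.shape))
```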
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as iterations increase.
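For reference, entropic optimal transport can be computed with a few Sinkhorn iterations. The sketch below matches two toy intensity histograms and is meant only to illustrate the OT tool itself, not the paper's end-to-end cross-modal framework.

```python
import numpy as np

def sinkhorn(a, b, cost, eps=0.05, n_iter=200):
    """Entropic optimal transport between histograms a and b with cost matrix
    `cost`; returns the transport plan via Sinkhorn-Knopp iterations."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy usage: match two normalized intensity histograms (e.g. T1w vs. T2w slice)
bins = np.linspace(0, 1, 64)
a = np.exp(-((bins - 0.3) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((bins - 0.6) ** 2) / 0.02); b /= b.sum()
cost = (bins[:, None] - bins[None, :]) ** 2
plan = sinkhorn(a, b, cost)
print(plan.sum(), np.allclose(plan.sum(axis=1), a, atol=1e-6))
```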
arXiv Detail & Related papers (2023-05-04T12:20:51Z)
- Estimating Head Motion from MR-Images [0.0]
Head motion is an omnipresent confounder of magnetic resonance image (MRI) analyses.
We introduce a deep learning method to predict in-scanner head motion directly from T1-weighted (T1w), T2-weighted (T2w) and fluid-attenuated inversion recovery (FLAIR) images.
arXiv Detail & Related papers (2023-02-28T11:03:08Z)
- DRIMET: Deep Registration for 3D Incompressible Motion Estimation in Tagged-MRI with Application to the Tongue [11.485843032637439]
Tagged magnetic resonance imaging (MRI) has been used for decades to observe and quantify the detailed motion of deforming tissue.
This paper presents a novel unsupervised phase-based 3D motion estimation technique for tagged MRI.
arXiv Detail & Related papers (2023-01-18T00:16:30Z)
- Data and Physics Driven Learning Models for Fast MRI -- Fundamentals and Methodologies from CNN, GAN to Attention and Transformers [72.047680167969]
This article aims to introduce deep learning-based, data-driven techniques for fast MRI, including convolutional neural network and generative adversarial network based methods.
We will detail the research on coupling physics-driven and data-driven models for MRI acceleration.
Finally, we will demonstrate a few clinical applications and explain the importance of data harmonisation and explainable models for such fast MRI techniques in multicentre and multi-scanner studies.
arXiv Detail & Related papers (2022-04-01T22:48:08Z)
- Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
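One common form of mixed norm regularization for coupling two channels is the l2,1 (joint group sparsity) penalty. The sketch below shows such a penalty and its proximal operator as a generic illustration of the concept, not necessarily the exact norm used in MCAOL.

```python
import numpy as np

def mixed_l21_norm(z_low, z_high):
    """l2,1 mixed norm coupling feature coefficients of the low- and high-energy
    images: l2 across the two energy channels, l1 across coefficients."""
    return np.sum(np.sqrt(z_low ** 2 + z_high ** 2))

def prox_mixed_l21(z_low, z_high, thresh):
    """Proximal operator of the l2,1 norm: group soft-thresholding, shrinking
    both energy channels jointly so shared structure is kept or removed together."""
    mag = np.sqrt(z_low ** 2 + z_high ** 2)
    scale = np.maximum(1.0 - thresh / np.maximum(mag, 1e-12), 0.0)
    return z_low * scale, z_high * scale

# Toy usage on random sparse-feature maps
zl, zh = np.random.randn(32, 32), np.random.randn(32, 32)
print(mixed_l21_norm(zl, zh))
zl_s, zh_s = prox_mixed_l21(zl, zh, thresh=0.5)
```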
arXiv Detail & Related papers (2022-03-10T14:22:54Z)