Motion-compensated MR CINE reconstruction with reconstruction-driven motion estimation
- URL: http://arxiv.org/abs/2302.02504v2
- Date: Fri, 16 Aug 2024 12:24:54 GMT
- Title: Motion-compensated MR CINE reconstruction with reconstruction-driven motion estimation
- Authors: Jiazhen Pan, Wenqi Huang, Daniel Rueckert, Thomas Küstner, Kerstin Hammernik
- Abstract summary: Motion-compensated MR reconstruction (MCMR) is an effective approach to address highly undersampled acquisitions.
We propose a novel perspective for addressing the MCMR problem and a more integrated and efficient solution to the MCMR field.
Our approach is unique in that the motion estimation is directly driven by the ultimate goal, reconstruction, but not by the canonical motion-warping loss.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In cardiac CINE, motion-compensated MR reconstruction (MCMR) is an effective approach to address highly undersampled acquisitions by incorporating motion information between frames. In this work, we propose a novel perspective for addressing the MCMR problem and a more integrated and efficient solution to the MCMR field. Contrary to state-of-the-art (SOTA) MCMR methods which break the original problem into two sub-optimization problems, i.e. motion estimation and reconstruction, we formulate this problem as a single entity with one single optimization. Our approach is unique in that the motion estimation is directly driven by the ultimate goal, reconstruction, but not by the canonical motion-warping loss (similarity measurement between motion-warped images and target images). We align the objectives of motion estimation and reconstruction, eliminating the drawbacks of artifacts-affected motion estimation and therefore error-propagated reconstruction. Further, we can deliver high-quality reconstruction and realistic motion without applying any regularization/smoothness loss terms, circumventing the non-trivial weighting factor tuning. We evaluate our method on two datasets: 1) an in-house acquired 2D CINE dataset for the retrospective study and 2) the public OCMR cardiac dataset for the prospective study. The conducted experiments indicate that the proposed MCMR framework can deliver artifact-free motion estimation and high-quality MR images even for imaging accelerations up to 20x, outperforming SOTA non-MCMR and MCMR methods in both qualitative and quantitative evaluation across all experiments. The code is available at https://github.com/JZPeterPan/MCMR-Recon-Driven-Motion.
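The core idea, scoring motion estimates by the final reconstruction objective rather than by a warping-similarity loss, can be illustrated with a deliberately simplified 1D toy: two undersampled "frames" related by an unknown circular shift, where each candidate shift is ranked by the data-consistency residual of the pooled, motion-compensated samples. This is only a sketch of the principle; the grid search stands in for the paper's learned estimator, and the 1D setup, variable names, and sampling scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy 1D analogy of reconstruction-driven motion estimation (illustrative
# only; the actual method operates on 2D CINE k-space with a learned model).
rng = np.random.default_rng(0)
N = 32
x = rng.normal(size=N)            # ground-truth "image" (1D toy signal)
true_shift = 5
frame2 = np.roll(x, true_shift)   # second frame = motion-shifted image

# Each frame is undersampled: we observe only a random subset of samples.
idx1 = rng.choice(N, size=20, replace=False)
idx2 = rng.choice(N, size=20, replace=False)
y1, y2 = x[idx1], frame2[idx2]

def recon_residual(s):
    """Score a candidate shift s by how consistently the motion-compensated
    frame-2 samples agree with frame 1 wherever both were measured."""
    warped = (idx2 - s) % N       # warp frame-2 sample positions back
    common, i2, i1 = np.intersect1d(warped, idx1, return_indices=True)
    if common.size == 0:
        return np.inf             # no overlap, shift cannot be scored
    return float(np.mean((y2[i2] - y1[i1]) ** 2))

# "Motion estimation driven by reconstruction": choose the motion whose
# pooled reconstruction best explains the measured data, with no separate
# motion-warping loss or smoothness regularizer.
scores = [recon_residual(s) for s in range(N)]
best_shift = int(np.argmin(scores))
print(best_shift)                 # recovers the true shift, 5
```

In this toy the correct shift drives the residual to exactly zero, so no regularization weight needs tuning, mirroring the abstract's claim that aligning the motion objective with the reconstruction objective removes the weighting-factor problem.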
Related papers
- ReCoM: Realistic Co-Speech Motion Generation with Recurrent Embedded Transformer [58.49950218437718]
We present ReCoM, an efficient framework for generating high-fidelity and generalizable human body motions synchronized with speech.
The core innovation lies in the Recurrent Embedded Transformer (RET), which integrates Dynamic Embedding Regularization (DER) into a Vision Transformer (ViT) core architecture.
To enhance model robustness, we incorporate the proposed DER strategy, which equips the model with dual capabilities of noise resistance and cross-domain generalization.
arXiv Detail & Related papers (2025-03-27T16:39:40Z)
- Moner: Motion Correction in Undersampled Radial MRI with Unsupervised Neural Representation [31.431342571006123]
Motion correction (MoCo) in radial MRI is a challenging problem due to the unpredictable nature of the subject's motion.
We propose Moner, an unsupervised MoCo method that jointly solves artifact-free MR images and accurate motion from rigid motion-corrupted k-space data.
arXiv Detail & Related papers (2024-09-25T13:27:29Z)
- Simultaneous Map and Object Reconstruction [66.66729715211642]
We present a method for dynamic surface reconstruction of large-scale urban scenes from LiDAR.
We take inspiration from recent novel view synthesis methods and pose the reconstruction problem as a global optimization.
By careful modeling of continuous-time motion, our reconstructions can compensate for the rolling shutter effects of rotating LiDAR sensors.
arXiv Detail & Related papers (2024-06-19T23:53:31Z)
- Attention-aware non-rigid image registration for accelerated MR imaging [10.47044784972188]
We introduce an attention-aware deep learning-based framework that can perform non-rigid pairwise registration for fully sampled and accelerated MRI.
We extract local visual representations to build similarity maps between the registered image pairs at multiple resolution levels.
We demonstrate that our model derives reliable and consistent motion fields across different sampling trajectories.
arXiv Detail & Related papers (2024-04-26T14:25:07Z)
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
arXiv Detail & Related papers (2023-11-22T05:44:51Z)
- Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction [31.404228406642194]
The key to dynamic or multi-contrast magnetic resonance imaging (MRI) reconstruction lies in exploring inter-frame or inter-contrast information.
We propose a two-stage MRI reconstruction pipeline to address these limitations.
Our proposed method significantly outperforms previous state-of-the-art accelerated MRI reconstruction methods.
arXiv Detail & Related papers (2023-09-25T02:51:00Z)
- Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as the iterations increase.
arXiv Detail & Related papers (2023-05-04T12:20:51Z)
- Learning-based and unrolled motion-compensated reconstruction for cardiac MR CINE imaging [8.095696087978977]
Motion-compensated MR reconstruction (MCMR) is a powerful concept with considerable potential.
We propose a learning-based self-supervised framework for MCMR to efficiently deal with non-rigid motion corruption in cardiac MR imaging.
arXiv Detail & Related papers (2022-09-08T09:34:12Z)
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
- Motion Correction and Volumetric Reconstruction for Fetal Functional Magnetic Resonance Imaging Data [3.690756997172894]
Motion correction is an essential preprocessing step in functional Magnetic Resonance Imaging (fMRI) of the fetal brain.
Current motion correction approaches for fetal fMRI choose a single 3D volume from a specific acquisition timepoint.
We propose a novel framework, which estimates a high-resolution reference volume by using outlier-robust motion correction.
arXiv Detail & Related papers (2022-02-11T19:11:16Z)
- ReconFormer: Accelerated MRI Reconstruction Using Recurrent Transformer [60.27951773998535]
We propose a recurrent transformer model, namely ReconFormer, for MRI reconstruction.
It can iteratively reconstruct high-fidelity magnetic resonance images from highly under-sampled k-space data.
We show that it achieves significant improvements over the state-of-the-art methods with better parameter efficiency.
arXiv Detail & Related papers (2022-01-23T21:58:19Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.