Attention-aware non-rigid image registration for accelerated MR imaging
- URL: http://arxiv.org/abs/2404.17621v1
- Date: Fri, 26 Apr 2024 14:25:07 GMT
- Title: Attention-aware non-rigid image registration for accelerated MR imaging
- Authors: Aya Ghoul, Jiazhen Pan, Andreas Lingg, Jens Kübler, Patrick Krumm, Kerstin Hammernik, Daniel Rueckert, Sergios Gatidis, Thomas Küstner
- Abstract summary: We introduce an attention-aware deep learning-based framework that can perform non-rigid pairwise registration for fully sampled and accelerated MRI.
We extract local visual representations to build similarity maps between the registered image pairs at multiple resolution levels.
We demonstrate that our model derives reliable and consistent motion fields across different sampling trajectories.
- Score: 10.47044784972188
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate motion estimation at high acceleration factors enables rapid motion-compensated reconstruction in Magnetic Resonance Imaging (MRI) without compromising the diagnostic image quality. In this work, we introduce an attention-aware deep learning-based framework that can perform non-rigid pairwise registration for fully sampled and accelerated MRI. We extract local visual representations to build similarity maps between the registered image pairs at multiple resolution levels and additionally leverage long-range contextual information using a transformer-based module to alleviate ambiguities in the presence of artifacts caused by undersampling. We combine local and global dependencies to perform simultaneous coarse and fine motion estimation. The proposed method was evaluated on in-house acquired fully sampled and accelerated data of 101 patients and 62 healthy subjects undergoing cardiac and thoracic MRI. The impact of motion estimation accuracy on the downstream task of motion-compensated reconstruction was analyzed. We demonstrate that our model derives reliable and consistent motion fields across different sampling trajectories (Cartesian and radial) and acceleration factors of up to 16x for cardiac motion and 30x for respiratory motion and achieves superior image quality in motion-compensated reconstruction qualitatively and quantitatively compared to conventional and recent deep learning-based approaches. The code is publicly available at https://github.com/lab-midas/GMARAFT.
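The abstract describes building similarity maps between the registered image pairs at multiple resolution levels, i.e. a local correlation (cost) volume over feature maps. The following is a minimal, illustrative sketch of one such local similarity computation in plain Python; the function name, toy feature-map shapes, and search radius are assumptions for illustration, not the authors' implementation (which is available at the linked GitHub repository).

```python
def local_similarity_map(feat_a, feat_b, radius=1):
    """For each position in feat_a, compute dot-product similarity with
    feat_b positions inside a (2*radius+1)^2 search window, forming a
    local correlation (cost) volume.

    feat_a, feat_b: H x W x C feature maps as nested lists.
    Returns an H x W grid of {(dy, dx): similarity} dicts.
    """
    H, W = len(feat_a), len(feat_a[0])
    sim = [[dict() for _ in range(W)] for _ in range(H)]
    for y in range(H):
        for x in range(W):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        # dot product between the C-dimensional feature vectors
                        sim[y][x][(dy, dx)] = sum(
                            a * b for a, b in zip(feat_a[y][x], feat_b[yy][xx])
                        )
    return sim
```

In a multi-resolution scheme like the one described, such a map would be computed on feature maps at several downsampled scales, so that coarse levels capture large displacements and fine levels refine them; for identical inputs the zero-displacement entry attains the maximum similarity at each position.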
Related papers
- Highly efficient non-rigid registration in k-space with application to cardiac Magnetic Resonance Imaging [10.618048010632728]
We propose a novel self-supervised deep learning-based framework, dubbed the Local-All Pass Attention Network (LAPANet) for non-rigid motion estimation.
LAPANet was evaluated on cardiac motion estimation across various sampling trajectories and acceleration rates.
The achieved high temporal resolution (less than 5 ms) for non-rigid motion opens new avenues for motion detection, tracking and correction in dynamic and real-time MRI applications.
arXiv Detail & Related papers (2024-10-24T15:19:59Z)
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
arXiv Detail & Related papers (2023-11-22T05:44:51Z)
- JSMoCo: Joint Coil Sensitivity and Motion Correction in Parallel MRI with a Self-Calibrating Score-Based Diffusion Model [3.3053426917821134]
We propose to jointly estimate the motion parameters and coil sensitivity maps for under-sampled MRI reconstruction.
Our method is capable of reconstructing high-quality MRI images from sparsely-sampled k-space data, even affected by motion.
arXiv Detail & Related papers (2023-10-14T17:11:25Z)
- Deep Cardiac MRI Reconstruction with ADMM [7.694990352622926]
We present a deep learning (DL)-based method for accelerated cine and multi-contrast reconstruction in the context of cardiac imaging.
Our method optimizes in both the image and k-space domains, allowing for high reconstruction fidelity.
arXiv Detail & Related papers (2023-10-10T13:46:11Z)
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
arXiv Detail & Related papers (2022-03-28T14:03:45Z)
- Motion Correction and Volumetric Reconstruction for Fetal Functional Magnetic Resonance Imaging Data [3.690756997172894]
Motion correction is an essential preprocessing step in functional Magnetic Resonance Imaging (fMRI) of the fetal brain.
Current motion correction approaches for fetal fMRI choose a single 3D volume from a specific acquisition timepoint.
We propose a novel framework, which estimates a high-resolution reference volume by using outlier-robust motion correction.
arXiv Detail & Related papers (2022-02-11T19:11:16Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- LAPNet: Non-rigid Registration derived in k-space for Magnetic Resonance Imaging [28.404584219735074]
Motion correction techniques have been proposed to compensate for physiological motion during thoracic scans.
A particular interest and challenge lie in the derivation of reliable non-rigid motion fields from the undersampled motion-resolved data.
We propose a deep-learning based approach to perform fast and accurate non-rigid registration from the undersampled k-space data.
arXiv Detail & Related papers (2021-07-19T15:39:23Z)
- Towards Ultrafast MRI via Extreme k-Space Undersampling and Superresolution [65.25508348574974]
We go below the MRI acceleration factors reported by all published papers that reference the original fastMRI challenge.
We consider powerful deep learning-based image enhancement methods to compensate for the underresolved images.
The quality of the reconstructed images surpasses that of the other methods, yielding an MSE of 0.00114, a PSNR of 29.6 dB, and an SSIM of 0.956 at a 16x acceleration factor.
arXiv Detail & Related papers (2021-03-04T10:45:01Z)
- Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.