MotionTTT: 2D Test-Time-Training Motion Estimation for 3D Motion Corrected MRI
- URL: http://arxiv.org/abs/2409.09370v1
- Date: Sat, 14 Sep 2024 08:51:33 GMT
- Title: MotionTTT: 2D Test-Time-Training Motion Estimation for 3D Motion Corrected MRI
- Authors: Tobit Klug, Kun Wang, Stefan Ruschke, Reinhard Heckel
- Abstract summary: We propose a deep learning-based test-time-training method for accurate motion estimation.
We show that our method can provably reconstruct motion parameters for a simple signal and neural network model.
- Score: 24.048132427816704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major challenge of the long measurement times in magnetic resonance imaging (MRI), an important medical imaging technology, is that patients may move during data acquisition. This leads to severe motion artifacts in the reconstructed images and volumes. In this paper, we propose a deep learning-based test-time-training method for accurate motion estimation. The key idea is that a neural network trained for motion-free reconstruction has a small loss if there is no motion, so optimizing over motion parameters passed through the reconstruction network enables accurate estimation of motion. The estimated motion parameters enable correcting for the motion and reconstructing accurate motion-corrected images. Our method uses 2D reconstruction networks to estimate rigid motion in 3D, and constitutes the first deep-learning-based method for 3D rigid motion estimation towards 3D-motion-corrected MRI. We show that our method can provably reconstruct motion parameters for a simple signal and neural network model. We demonstrate the effectiveness of our method for both retrospectively simulated motion and prospectively collected real motion-corrupted data.
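The key idea lends itself to a small numerical sketch. The toy 1D example below is not the paper's implementation: the trained reconstruction network is replaced by a total-variation prior (low for motion-free piecewise-constant signals), 3D rigid motion by a 1D fractional shift applied to half of the k-space lines, and gradient-based optimization by a grid search. All names and modeling choices are illustrative assumptions.

```python
import numpy as np

N = 128
x_true = np.zeros(N)
x_true[40:80] = 1.0                      # piecewise-constant test signal

k = np.fft.fftfreq(N) * N                # integer frequency indices
y = np.fft.fft(x_true)

# Simulate motion: the second "shot" (negative frequencies) is acquired
# after the object shifted by t_true samples, i.e. a linear phase in k-space.
t_true = 3.7
shot2 = k < 0
y_corrupt = y.copy()
y_corrupt[shot2] *= np.exp(-2j * np.pi * k[shot2] * t_true / N)

def reconstruct(t):
    """Undo a candidate shift t on the shot-2 lines, then inverse FFT."""
    yc = y_corrupt.copy()
    yc[shot2] *= np.exp(2j * np.pi * k[shot2] * t / N)
    return np.fft.ifft(yc)

def loss(t):
    """Stand-in for the reconstruction-network loss: total variation."""
    x = reconstruct(t)
    return np.abs(np.diff(x)).sum()

# "Test-time training": pick the motion parameter with the smallest loss.
cands = np.arange(-8, 8, 0.01)
t_hat = cands[np.argmin([loss(t) for t in cands])]
print(f"estimated shift: {t_hat:.2f}  (true: {t_true})")
```

A rigid translation multiplies k-space by a linear phase, so correcting with the true shift cancels the corruption exactly and the loss drops to its motion-free minimum; any other candidate leaves ghosting that raises the total variation.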
Related papers
- Highly efficient non-rigid registration in k-space with application to cardiac Magnetic Resonance Imaging [10.618048010632728]
We propose a novel self-supervised deep learning-based framework, dubbed the Local-All Pass Attention Network (LAPANet) for non-rigid motion estimation.
LAPANet was evaluated on cardiac motion estimation across various sampling trajectories and acceleration rates.
The achieved high temporal resolution (less than 5 ms) for non-rigid motion opens new avenues for motion detection, tracking and correction in dynamic and real-time MRI applications.
arXiv Detail & Related papers (2024-10-24T15:19:59Z) - Motion-Informed Deep Learning for Brain MR Image Reconstruction Framework [7.639405634241267]
Motion is estimated to be present in approximately 30% of clinical MRI scans.
Deep learning algorithms have been demonstrated to be effective for both the image reconstruction task and the motion correction task.
We propose a novel method to simultaneously accelerate imaging and correct motion.
arXiv Detail & Related papers (2024-05-28T02:16:35Z) - SISMIK for brain MRI: Deep-learning-based motion estimation and model-based motion correction in k-space [0.0]
We propose a retrospective method for motion estimation and correction for 2D Spin-Echo scans of the brain.
The method leverages the power of deep neural networks to estimate motion parameters in k-space.
It uses a model-based approach to restore degraded images and avoid "hallucinations".
arXiv Detail & Related papers (2023-12-20T17:38:56Z) - Unsupervised 3D Pose Estimation with Non-Rigid Structure-from-Motion Modeling [83.76377808476039]
We propose a new modeling method for human pose deformations and design an accompanying diffusion-based motion prior.
Inspired by the field of non-rigid structure-from-motion, we divide the task of reconstructing 3D human skeletons in motion into the estimation of a 3D reference skeleton.
A mixed spatial-temporal NRSfMformer is used to simultaneously estimate the 3D reference skeleton and the skeleton deformation of each frame from a 2D observation sequence.
arXiv Detail & Related papers (2023-08-18T16:41:57Z) - Data Consistent Deep Rigid MRI Motion Correction [9.551748050454378]
Motion artifacts are a pervasive problem in MRI, leading to misdiagnosis or mischaracterization in population-level imaging studies.
Current retrospective rigid intra-slice motion correction techniques jointly optimize estimates of the image and the motion parameters.
In this paper, we use a deep network to reduce the joint image-motion parameter search to a search over rigid motion parameters alone.
arXiv Detail & Related papers (2023-01-25T00:21:31Z) - MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks [77.56526918859345]
We present a novel framework that brings the 3D motion task from controlled environments to in-the-wild scenarios.
It is capable of retargeting body motion from a character in a 2D monocular video to a 3D character without using any motion capture system or 3D reconstruction procedure.
arXiv Detail & Related papers (2021-12-19T07:52:05Z) - Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z) - Inertial Measurements for Motion Compensation in Weight-bearing Cone-beam CT of the Knee [6.7461735822055715]
Involuntary motion during CT scans of the knee causes artifacts in the reconstructed volumes making them unusable for clinical diagnosis.
We propose to attach an inertial measurement unit (IMU) to the leg of the subject in order to measure the motion during the scan and correct for it.
arXiv Detail & Related papers (2020-07-09T09:26:27Z) - Motion Guided 3D Pose Estimation from Videos [81.14443206968444]
We propose a new loss function, called motion loss, for the problem of monocular 3D human pose estimation from 2D pose.
In computing motion loss, a simple yet effective representation for keypoint motion, called pairwise motion encoding, is introduced.
We design a new graph convolutional network architecture, U-shaped GCN (UGCN), which captures both short-term and long-term motion information.
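The "pairwise motion encoding" can be illustrated with a small sketch. This is one plausible reading of the one-sentence summary above, not the paper's exact definition: encode, for each pair of keypoints, how their relative position changes between consecutive frames, and penalize the difference between predicted and ground-truth encodings. Shapes and names are illustrative assumptions.

```python
import numpy as np

def pairwise_motion_encoding(poses):
    """poses: (T, J, 3) keypoint trajectories -> (T-1, J, J, 3) encoding."""
    rel = poses[:, :, None, :] - poses[:, None, :, :]   # pairwise offsets per frame
    return np.diff(rel, axis=0)                          # their change over time

def motion_loss(pred, target):
    """Mean absolute difference between predicted and true motion encodings."""
    return np.abs(pairwise_motion_encoding(pred)
                  - pairwise_motion_encoding(target)).mean()

rng = np.random.default_rng(0)
gt = rng.standard_normal((8, 17, 3))        # 8 frames, 17 joints
pred = gt + 0.05 * rng.standard_normal(gt.shape)
print(motion_loss(pred, gt))                # small for near-correct motion
print(motion_loss(gt[::-1].copy(), gt))     # larger for reversed motion
```

Because the encoding differences out absolute positions, such a loss penalizes errors in how keypoints move relative to each other rather than where they sit, which is the stated motivation for a motion-aware supervision signal.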
arXiv Detail & Related papers (2020-04-29T06:59:30Z) - A Deep Learning Approach for Motion Forecasting Using 4D OCT Data [69.62333053044712]
We propose 4D-temporal deep learning for end-to-end motion forecasting and estimation using a stream of OCT volumes.
Our best performing 4D method achieves motion forecasting with an overall average correlation of 97.41%, while also improving motion estimation performance by a factor of 2.5 compared to a previous 3D approach.
arXiv Detail & Related papers (2020-04-21T15:59:53Z) - Spatio-Temporal Deep Learning Methods for Motion Estimation Using 4D OCT Image Data [63.73263986460191]
Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions.
We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance.
Using 4D information for the model input improves performance while maintaining reasonable inference times.
arXiv Detail & Related papers (2020-04-21T15:43:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.