Dynamic imaging using motion-compensated smoothness regularization on
manifolds (MoCo-SToRM)
- URL: http://arxiv.org/abs/2111.10887v1
- Date: Sun, 21 Nov 2021 19:52:01 GMT
- Title: Dynamic imaging using motion-compensated smoothness regularization on
manifolds (MoCo-SToRM)
- Authors: Qing Zou, Luis A. Torres, Sean B. Fain, Mathews Jacob
- Abstract summary: We introduce an unsupervised deep manifold learning algorithm for motion-compensated dynamic MRI.
The utility of the algorithm is demonstrated in the context of motion-compensated high-resolution lung MRI.
- Score: 23.093076134206513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce an unsupervised deep manifold learning algorithm for
motion-compensated dynamic MRI. We assume that the motion fields in a
free-breathing lung MRI dataset live on a manifold. The motion field at each
time instant is modeled as the output of a deep generative model, driven by
low-dimensional time-varying latent vectors that capture the temporal
variability. The images at each time instant are modeled as the deformed
version of an image template using the above motion fields. The template, the
parameters of the deep generator, and the latent vectors are learned from the
k-t space data in an unsupervised fashion. The manifold motion model serves as
a regularizer, making the joint estimation of the motion fields and images from
few radial spokes/frame well-posed. The utility of the algorithm is
demonstrated in the context of motion-compensated high-resolution lung MRI.
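A minimal PyTorch sketch of the forward model described in the abstract (not the authors' code): a small generator maps per-frame latent vectors to motion fields, a shared learnable template is warped by those fields, and the template, generator weights, and latents are fit to k-t space data with a temporal smoothness penalty on the latents standing in for the manifold regularizer. The names (MotionGenerator, warp, A) and the masked Cartesian FFT forward operator are illustrative assumptions; the actual method uses a 3D NUFFT for radial free-breathing lung acquisitions.

```python
# Illustrative sketch of the MoCo-SToRM model; sizes and operators are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionGenerator(nn.Module):
    """Deep generative model G_theta: latent vector z_t -> 2D motion field."""
    def __init__(self, latent_dim=2, size=64):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2 * size * size),        # x/y displacement per pixel
        )

    def forward(self, z):                            # z: (T, latent_dim)
        return self.net(z).view(-1, 2, self.size, self.size)  # (T, 2, H, W)

def warp(template, flow):
    """Deform the shared template with the generated motion fields."""
    T, _, H, W = flow.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(T, H, W, 2)
    grid = base + flow.permute(0, 2, 3, 1)           # normalized sampling grid
    return F.grid_sample(template.expand(T, -1, -1, -1), grid, align_corners=True)

# Learnable unknowns: template image, generator weights, per-frame latent vectors.
size, n_frames, latent_dim = 64, 20, 2
template = nn.Parameter(torch.zeros(1, 1, size, size))
latents  = nn.Parameter(0.01 * torch.randn(n_frames, latent_dim))
gen      = MotionGenerator(latent_dim, size)

# Stand-in forward operator and measurements (assumption: masked Cartesian FFT
# instead of the radial NUFFT used for the real acquisition).
mask = (torch.rand(n_frames, 1, size, size) < 0.1).float()
def A(images):
    return torch.fft.fft2(images) * mask
kdata = A(torch.rand(n_frames, 1, size, size))       # placeholder k-t space data

opt = torch.optim.Adam([template, latents, *gen.parameters()], lr=1e-3)
for _ in range(10):                                  # a few illustrative iterations
    opt.zero_grad()
    frames = warp(template, gen(latents))            # I_t = template deformed by phi_t
    data_loss = (A(frames) - kdata).abs().pow(2).mean()
    # Temporal smoothness of the latents: stand-in for the manifold regularizer
    # that makes joint recovery from few spokes/frame well-posed.
    reg = (latents[1:] - latents[:-1]).pow(2).mean()
    (data_loss + 0.1 * reg).backward()
    opt.step()
```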
Related papers
- Highly efficient non-rigid registration in k-space with application to cardiac Magnetic Resonance Imaging [10.618048010632728]
We propose a novel self-supervised deep learning-based framework, dubbed the Local-All Pass Attention Network (LAPANet) for non-rigid motion estimation.
LAPANet was evaluated on cardiac motion estimation across various sampling trajectories and acceleration rates.
The achieved high temporal resolution (less than 5 ms) for non-rigid motion opens new avenues for motion detection, tracking and correction in dynamic and real-time MRI applications.
arXiv Detail & Related papers (2024-10-24T15:19:59Z) - SpaER: Learning Spatio-temporal Equivariant Representations for Fetal Brain Motion Tracking [6.417960463128722]
SpaER is a pioneering method for fetal motion tracking.
We develop an equivariant neural network that efficiently learns rigid motion sequences.
We validate our model using real fetal echo-planar images with simulated and real motions.
arXiv Detail & Related papers (2024-07-29T17:24:52Z) - Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO) to directly model dynamics as trajectories instead of just next-step prediction.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
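A toy sketch of the trajectory-as-a-function-of-time idea this entry describes, not the EGNO architecture itself (which combines equivariant graph layers with temporal operators): instead of predicting only the next step, a network conditioned on the initial state and a set of query times returns the state at every requested time in one shot. The class name and Fourier time features below are illustrative assumptions.

```python
# Toy illustration of modeling dynamics as a whole trajectory over time.
import torch
import torch.nn as nn

class TrajectoryModel(nn.Module):
    def __init__(self, state_dim=6, hidden=128, n_freq=8):
        super().__init__()
        self.freqs = nn.Parameter(torch.randn(n_freq), requires_grad=False)
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x0, times):
        # x0: (B, state_dim) initial state; times: (T,) query times.
        B, T = x0.shape[0], times.shape[0]
        t = times.view(1, T, 1) * self.freqs.view(1, 1, -1)
        t_feat = torch.cat([t.sin(), t.cos()], dim=-1).expand(B, T, -1)
        x = x0.unsqueeze(1).expand(B, T, -1)
        return self.net(torch.cat([x, t_feat], dim=-1))   # (B, T, state_dim)

model = TrajectoryModel()
x0 = torch.randn(4, 6)                    # e.g. stacked positions and velocities
times = torch.linspace(0.0, 1.0, 10)
traj = model(x0, times)                   # all 10 future states predicted at once
```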
arXiv Detail & Related papers (2024-01-19T21:50:32Z) - EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via
Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z) - Generative Image Dynamics [80.70729090482575]
We present an approach to modeling an image-space prior on scene motion.
Our prior is learned from a collection of motion trajectories extracted from real video sequences.
arXiv Detail & Related papers (2023-09-14T17:54:01Z) - SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z) - MomentaMorph: Unsupervised Spatial-Temporal Registration with Momenta,
Shooting, and Correction [12.281250177881445]
We introduce a novel framework for Lagrangian motion estimation in the presence of repetitive patterns and large motion.
The results on a 2D synthetic dataset and a real 3D tMRI dataset demonstrate our method's efficiency.
arXiv Detail & Related papers (2023-08-05T20:32:30Z) - Dynamic imaging using Motion-Compensated SmooThness Regularization on
Manifolds (MoCo-SToRM) [19.70386996879205]
We introduce an unsupervised motion-compensated reconstruction scheme for high-resolution free-breathing pulmonary MRI.
We model the image frames in the time series as the deformed version of the 3D template image volume.
We assume the deformation maps to be points on a smooth manifold in high-dimensional space.
arXiv Detail & Related papers (2021-12-06T22:04:57Z) - MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary
Monocular Cameras [98.40768911788854]
We introduce MoCo-Flow, a representation that models the dynamic scene using a 4D continuous time-variant function.
At the heart of our work lies a novel optimization formulation, which is constrained by a motion consensus regularization on the motion flow.
We extensively evaluate MoCo-Flow on several datasets that contain human motions of varying complexity.
arXiv Detail & Related papers (2021-06-08T16:03:50Z) - Dynamic Mode Decomposition in Adaptive Mesh Refinement and Coarsening
Simulations [58.720142291102135]
Dynamic Mode Decomposition (DMD) is a powerful data-driven method used to extract coherent structures.
This paper proposes a strategy that enables DMD to extract coherent structures from observations with different mesh topologies and dimensions.
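For reference, a minimal NumPy sketch of standard rank-truncated ("exact") DMD, the baseline decomposition this entry builds on; the paper's strategy for handling changing mesh topologies is not reproduced here, and the function name and toy data are illustrative.

```python
# Standard exact DMD on a snapshot matrix X of shape (n_features, n_times).
import numpy as np

def dmd(X, r):
    """Return r DMD modes and their eigenvalues from the snapshot matrix X."""
    X1, X2 = X[:, :-1], X[:, 1:]                  # paired snapshot matrices
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]            # rank-r truncation
    # Low-rank linear operator approximating x_{k+1} = A x_k.
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return modes, eigvals

# Usage: coherent structures of a toy travelling-wave dataset.
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 4 * np.pi, 60)
X = np.sin(x[:, None] - t[None, :]) + 0.5 * np.cos(2 * x[:, None] + 3 * t[None, :])
modes, eigvals = dmd(X, r=4)
```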
arXiv Detail & Related papers (2021-04-28T22:14:25Z) - Learning a Generative Motion Model from Image Sequences based on a
Latent Motion Matrix [8.774604259603302]
We learn a probabilistic motion model from the spatio-temporal registration of a sequence of images.
We show improved registration accuracy and temporally smoother consistency compared to three state-of-the-art registration algorithms.
We also demonstrate the model's applicability for motion analysis, simulation and super-resolution by an improved motion reconstruction from sequences with missing frames.
arXiv Detail & Related papers (2020-11-03T14:44:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.