CPT-Interp: Continuous sPatial and Temporal Motion Modeling for 4D Medical Image Interpolation
- URL: http://arxiv.org/abs/2405.15385v1
- Date: Fri, 24 May 2024 09:35:42 GMT
- Title: CPT-Interp: Continuous sPatial and Temporal Motion Modeling for 4D Medical Image Interpolation
- Authors: Xia Li, Runzhao Yang, Xiangtai Li, Antony Lomax, Ye Zhang, Joachim Buhmann
- Abstract summary: Motion information from 4D medical imaging offers critical insights into dynamic changes in patient anatomy for clinical assessments and radiotherapy planning.
However, inherent physical and technical constraints of imaging hardware often necessitate a compromise between temporal resolution and image quality.
We propose a novel approach for continuously modeling patient anatomic motion using implicit neural representation.
- Score: 22.886841531680567
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motion information from 4D medical imaging offers critical insights into dynamic changes in patient anatomy for clinical assessments and radiotherapy planning, thereby enhancing the capabilities of 3D image analysis. However, inherent physical and technical constraints of imaging hardware often necessitate a compromise between temporal resolution and image quality. Frame interpolation emerges as a pivotal solution to this challenge. Previous methods often suffer from discretization artifacts when they estimate the intermediate motion and execute the forward warping. In this study, we draw inspiration from fluid mechanics to propose a novel approach for continuously modeling patient anatomic motion using implicit neural representation. It ensures both spatial and temporal continuity, effectively bridging the Eulerian and Lagrangian specifications to naturally facilitate continuous frame interpolation. Our experiments across multiple datasets underscore the method's superior accuracy and speed. Furthermore, as a case-specific optimization (training-free) approach, it circumvents the need for extensive datasets and addresses model generalization issues.
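The Eulerian-Lagrangian bridge in the abstract can be illustrated with a toy sketch (not the paper's implementation): an Eulerian velocity field gives the velocity at fixed spatial points, and integrating it in time recovers the Lagrangian trajectory of a material point, which yields positions at arbitrary intermediate times for interpolation. The function names `velocity` and `trajectory` are hypothetical, and the learned implicit neural representation is replaced here by an analytic rigid-rotation field so the result can be checked.

```python
import numpy as np

def velocity(x, t):
    """Eulerian specification: velocity at a fixed spatial point x at time t.
    Hypothetical stand-in field (rigid rotation at angular rate omega);
    in CPT-Interp this role is played by a learned implicit neural
    representation conditioned on (x, t)."""
    omega = np.pi / 2  # quarter turn per unit time
    return omega * np.array([-x[1], x[0]])

def trajectory(x0, t_end, n_steps=1000):
    """Lagrangian specification: follow a material point by integrating the
    Eulerian velocity field with forward Euler steps. The intermediate
    positions provide continuous-in-time interpolation for free."""
    dt = t_end / n_steps
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
        path.append(x.copy())
    return np.stack(path)

# Starting at (1, 0), a quarter turn should land near (0, 1).
path = trajectory([1.0, 0.0], t_end=1.0)
print(path[-1])
```

With 1000 Euler steps the endpoint is within about 1e-2 of the exact rotated point; a higher-order integrator (e.g. RK4) would tighten this, which is the usual design choice when the velocity field is smooth but evaluation is cheap.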
Related papers
- ImageFlowNet: Forecasting Multiscale Trajectories of Disease Progression with Irregularly-Sampled Longitudinal Medical Images [44.107186498384024]
ImageFlowNet is a novel framework that learns latent-space flow fields that evolve multiscale representations in joint embedding spaces.
We provide theoretical insights that support our formulation of ODEs, and motivate our regularizations involving high-level visual features.
arXiv Detail & Related papers (2024-06-20T23:51:32Z) - Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z) - Single-shot Tomography of Discrete Dynamic Objects [1.1407697960152927]
We present a novel method for the reconstruction of high-resolution temporal images in dynamic tomographic imaging.
The implications of this research extend to improved visualization and analysis of dynamic processes in tomographic imaging.
arXiv Detail & Related papers (2023-11-09T10:52:02Z) - INeAT: Iterative Neural Adaptive Tomography [34.84974955073465]
Iterative Neural Adaptive Tomography (INeAT) incorporates posture optimization to counteract the influence of posture perturbations in data.
We demonstrate that INeAT achieves artifact-suppressed and resolution-enhanced reconstruction in scenarios with significant pose disturbances.
arXiv Detail & Related papers (2023-11-03T01:00:36Z) - Imposing Temporal Consistency on Deep Monocular Body Shape and Pose
Estimation [67.23327074124855]
This paper presents an elegant solution for the integration of temporal constraints in the fitting process.
We derive parameters of a sequence of body models, representing shape and motion of a person, including jaw poses, facial expressions, and finger poses.
Our approach enables the derivation of realistic 3D body models from image sequences, including facial expression and articulated hands.
arXiv Detail & Related papers (2022-02-07T11:11:55Z) - Neural Computed Tomography [1.7188280334580197]
Motion during acquisition of a set of projections can lead to significant motion artifacts in computed tomography reconstructions.
We propose a novel reconstruction framework, NeuralCT, to generate time-resolved images free from motion artifacts.
arXiv Detail & Related papers (2022-01-17T18:50:58Z) - STRESS: Super-Resolution for Dynamic Fetal MRI using Self-Supervised Learning [2.5581619987137048]
We propose STRESS, a self-supervised super-resolution framework for dynamic fetal MRI with interleaved slice acquisitions.
Our proposed method simulates an interleaved slice acquisition along the high-resolution axis on the originally acquired data to generate pairs of low- and high-resolution images.
Evaluations on both simulated and in utero data show that our proposed method outperforms other self-supervised super-resolution methods.
arXiv Detail & Related papers (2021-06-23T13:52:11Z) - Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the conditioning of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z) - A Novel Approach for Correcting Multiple Discrete Rigid In-Plane Motions Artefacts in MRI Scans [63.28835187934139]
We propose a novel method for removing motion artefacts using a deep neural network with two input branches.
The proposed method can be applied to artefacts generated by multiple movements of the patient.
arXiv Detail & Related papers (2020-06-24T15:25:11Z) - A Deep Learning Approach for Motion Forecasting Using 4D OCT Data [69.62333053044712]
We propose 4D-temporal deep learning for end-to-end motion forecasting and estimation using a stream of OCT volumes.
Our best performing 4D method achieves motion forecasting with an overall average correlation of 97.41%, while also improving motion estimation performance by a factor of 2.5 compared to a previous 3D approach.
arXiv Detail & Related papers (2020-04-21T15:59:53Z) - Spatio-Temporal Deep Learning Methods for Motion Estimation Using 4D OCT Image Data [63.73263986460191]
Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions.
We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance.
Using 4D information for the model input improves performance while maintaining reasonable inference times.
arXiv Detail & Related papers (2020-04-21T15:43:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.