CPT-Interp: Continuous sPatial and Temporal Motion Modeling for 4D Medical Image Interpolation
- URL: http://arxiv.org/abs/2405.15385v1
- Date: Fri, 24 May 2024 09:35:42 GMT
- Title: CPT-Interp: Continuous sPatial and Temporal Motion Modeling for 4D Medical Image Interpolation
- Authors: Xia Li, Runzhao Yang, Xiangtai Li, Antony Lomax, Ye Zhang, Joachim Buhmann
- Abstract summary: Motion information from 4D medical imaging offers critical insights into dynamic changes in patient anatomy for clinical assessments and radiotherapy planning.
However, inherent physical and technical constraints of imaging hardware often necessitate a compromise between temporal resolution and image quality.
We propose a novel approach for continuously modeling patient anatomic motion using implicit neural representation.
- Score: 22.886841531680567
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motion information from 4D medical imaging offers critical insights into dynamic changes in patient anatomy for clinical assessments and radiotherapy planning and, thereby, enhances the capabilities of 3D image analysis. However, inherent physical and technical constraints of imaging hardware often necessitate a compromise between temporal resolution and image quality. Frame interpolation emerges as a pivotal solution to this challenge. Previous methods often suffer from discretization artifacts when they estimate the intermediate motion and execute the forward warping. In this study, we draw inspiration from fluid mechanics to propose a novel approach for continuously modeling patient anatomic motion using implicit neural representation. It ensures both spatial and temporal continuity, effectively bridging the Eulerian and Lagrangian specifications to naturally facilitate continuous frame interpolation. Our experiments across multiple datasets underscore the method's superior accuracy and speed. Furthermore, as a case-specific optimization (training-free) approach, it circumvents the need for extensive datasets and addresses model generalization issues.
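The Eulerian-to-Lagrangian bridge can be made concrete with a minimal sketch (not the authors' implementation; the network size, the plain Euler integrator, and all names are assumptions for illustration): a coordinate MLP represents the Eulerian velocity field v(x, t), and integrating that velocity over time traces the Lagrangian trajectories used to warp a reference frame to any intermediate time point.

```python
# Minimal illustrative sketch (assumed names and architecture, PyTorch style):
# an implicit neural representation of the Eulerian velocity field v(x, t),
# integrated over time to obtain Lagrangian trajectories for frame warping.
import torch
import torch.nn as nn

class VelocityINR(nn.Module):
    """Coordinate MLP: (x, y, z, t) -> velocity (vx, vy, vz)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords_t: torch.Tensor) -> torch.Tensor:
        return self.net(coords_t)

def integrate_trajectory(model: VelocityINR, x0: torch.Tensor,
                         t0: float, t1: float, steps: int = 16) -> torch.Tensor:
    """Trace material points from t0 to t1 by Euler integration of the
    Eulerian velocity field (the Lagrangian specification of the motion)."""
    x, dt = x0, (t1 - t0) / steps
    for k in range(steps):
        t = torch.full_like(x[..., :1], t0 + k * dt)
        x = x + dt * model(torch.cat([x, t], dim=-1))
    return x
```

In a case-specific (training-free) setting of this kind, such a network would be fit to the frames of a single 4D scan, for example by penalizing the intensity difference between a warped reference frame and each acquired frame, after which the trajectory can be evaluated at any continuous time in between without an external training dataset.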
Related papers
- ST-NeRP: Spatial-Temporal Neural Representation Learning with Prior Embedding for Patient-specific Imaging Study [16.383405461343678]
We propose a strategy of spatial-temporal Neural Representation learning with Prior embedding (ST-NeRP) for patient-specific imaging study.
Our strategy involves leveraging an Implicit Neural Representation (INR) network to encode the image at the reference time point into a prior embedding.
This network is trained using the whole patient-specific image sequence, enabling the prediction of deformation fields at various target time points.
arXiv Detail & Related papers (2024-10-25T03:33:17Z)
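A compact sketch of how such a prior-embedding setup could look (an assumption-laden illustration, not the ST-NeRP code; layer sizes and names are made up): one network stores the reference-time image as the prior, and a second network, trained on the whole patient-specific sequence, predicts a time-conditioned displacement used to resample that reference at other time points.

```python
# Hypothetical ST-NeRP-style sketch (names and architecture are assumptions):
# a reference-time INR acts as the prior; a deformation network conditioned on
# the target time backward-warps query coordinates into the reference image.
import torch
import torch.nn as nn

def mlp(in_dim: int, out_dim: int, hidden: int = 128) -> nn.Sequential:
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.GELU(),
                         nn.Linear(hidden, hidden), nn.GELU(),
                         nn.Linear(hidden, out_dim))

reference_inr = mlp(3, 1)   # (x, y, z) -> intensity of the reference-time image
deformation = mlp(4, 3)     # (x, y, z, t) -> displacement toward time t

def intensity_at(xyz: torch.Tensor, t: float) -> torch.Tensor:
    """Query the reference INR at coordinates displaced toward target time t."""
    t_col = torch.full_like(xyz[..., :1], t)
    return reference_inr(xyz + deformation(torch.cat([xyz, t_col], dim=-1)))
```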
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- TEAM PILOT -- Learned Feasible Extendable Set of Dynamic MRI Acquisition Trajectories [2.7719338074999547]
We introduce a novel deep-compressed sensing approach that uses 3D window attention and flexible, temporally extendable acquisition trajectories.
Our method significantly reduces both training and inference times compared to existing approaches.
Tests with real data show that our approach outperforms current state-of-the-art techniques.
arXiv Detail & Related papers (2024-09-19T13:45:13Z)
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach sequences various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- SpaER: Learning Spatio-temporal Equivariant Representations for Fetal Brain Motion Tracking [6.417960463128722]
SpaER is a pioneering method for fetal motion tracking.
We develop an equivariant neural network that efficiently learns rigid motion sequences.
We validate our model using real fetal echo-planar images with simulated and real motions.
arXiv Detail & Related papers (2024-07-29T17:24:52Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- Single-shot Tomography of Discrete Dynamic Objects [1.1407697960152927]
We present a novel method for the reconstruction of high-resolution temporal images in dynamic tomographic imaging.
The implications of this research extend to improved visualization and analysis of dynamic processes in tomographic imaging.
arXiv Detail & Related papers (2023-11-09T10:52:02Z)
- Imposing Temporal Consistency on Deep Monocular Body Shape and Pose Estimation [67.23327074124855]
This paper presents an elegant solution for the integration of temporal constraints in the fitting process.
We derive parameters of a sequence of body models, representing shape and motion of a person, including jaw poses, facial expressions, and finger poses.
Our approach enables the derivation of realistic 3D body models from image sequences, including facial expression and articulated hands.
arXiv Detail & Related papers (2022-02-07T11:11:55Z)
- Neural Computed Tomography [1.7188280334580197]
Motion during acquisition of a set of projections can lead to significant motion artifacts in computed tomography reconstructions.
We propose a novel reconstruction framework, NeuralCT, to generate time-resolved images free from motion artifacts.
arXiv Detail & Related papers (2022-01-17T18:50:58Z)
- A Deep Learning Approach for Motion Forecasting Using 4D OCT Data [69.62333053044712]
We propose 4D-temporal deep learning for end-to-end motion forecasting and estimation using a stream of OCT volumes.
Our best performing 4D method achieves motion forecasting with an overall average correlation of 97.41%, while also improving motion estimation performance by a factor of 2.5 compared to a previous 3D approach.
arXiv Detail & Related papers (2020-04-21T15:59:53Z)
- Spatio-Temporal Deep Learning Methods for Motion Estimation Using 4D OCT Image Data [63.73263986460191]
Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions.
We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance.
Using 4D information for the model input improves performance while maintaining reasonable inference times.
arXiv Detail & Related papers (2020-04-21T15:43:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.