A Diffusion-Driven Temporal Super-Resolution and Spatial Consistency Enhancement Framework for 4D MRI imaging
- URL: http://arxiv.org/abs/2506.04116v2
- Date: Mon, 09 Jun 2025 01:39:48 GMT
- Title: A Diffusion-Driven Temporal Super-Resolution and Spatial Consistency Enhancement Framework for 4D MRI imaging
- Authors: Xuanru Zhou, Jiarun Liu, Shoujun Yu, Hao Yang, Cheng Li, Tao Tan, Shanshan Wang,
- Abstract summary: In medical imaging, 4D MRI enables dynamic 3D visualization, yet the trade-off between spatial and temporal resolution requires prolonged scan time. Traditional approaches typically rely on registration-based interpolation to generate intermediate frames. We propose TSSC-Net, a novel framework that generates intermediate frames while preserving spatial consistency.
- Score: 9.016385222343715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In medical imaging, 4D MRI enables dynamic 3D visualization, yet the trade-off between spatial and temporal resolution requires prolonged scan time that can compromise temporal fidelity--especially during rapid, large-amplitude motion. Traditional approaches typically rely on registration-based interpolation to generate intermediate frames. However, these methods struggle with large deformations, resulting in misregistration, artifacts, and diminished spatial consistency. To address these challenges, we propose TSSC-Net, a novel framework that generates intermediate frames while preserving spatial consistency. To improve temporal fidelity under fast motion, our diffusion-based temporal super-resolution network generates intermediate frames using the start and end frames as key references, achieving 6x temporal super-resolution in a single inference step. Additionally, we introduce a novel tri-directional Mamba-based module that leverages long-range contextual information to effectively resolve spatial inconsistencies arising from cross-slice misalignment, thereby enhancing volumetric coherence and correcting cross-slice errors. Extensive experiments were performed on the public ACDC cardiac MRI dataset and a real-world dynamic 4D knee joint dataset. The results demonstrate that TSSC-Net can generate high-resolution dynamic MRI from fast-motion data while preserving structural fidelity and spatial consistency.
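The abstract describes a module that takes a start and an end key frame and produces the intermediate frames in a single inference step (6x temporal super-resolution). A minimal sketch of what such an interface might look like is below; the function and parameter names are hypothetical, and a per-voxel linear blend stands in for the paper's trained diffusion network, which is not reproduced here.

```python
import numpy as np

def temporal_super_resolution(frame_start, frame_end, factor=6, model=None):
    """Generate (factor - 1) intermediate frames between two key frames.

    `model`, if provided, is a callable (start, end, t) -> frame standing in
    for a trained conditional denoiser; otherwise a registration-free linear
    blend is used as a baseline. Names here are illustrative, not the
    paper's API.
    """
    frames = [frame_start]
    for k in range(1, factor):
        t = k / factor  # normalized time of the intermediate frame
        if model is not None:
            frames.append(model(frame_start, frame_end, t))
        else:
            # Baseline: per-voxel linear interpolation between key frames.
            frames.append((1 - t) * frame_start + t * frame_end)
    frames.append(frame_end)
    return np.stack(frames)

# Toy 3D volumes (depth, height, width) as stand-ins for MRI frames.
a = np.zeros((4, 8, 8), dtype=np.float32)
b = np.ones((4, 8, 8), dtype=np.float32)
seq = temporal_super_resolution(a, b, factor=6)
print(seq.shape)  # (7, 4, 8, 8): 2 key frames + 5 generated frames
```

The baseline blend illustrates only the input/output contract; the paper's contribution is replacing that blend with a diffusion model that handles large deformations where linear interpolation fails.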
Related papers
- From Coarse to Continuous: Progressive Refinement Implicit Neural Representation for Motion-Robust Anisotropic MRI Reconstruction [15.340881123379567]
In MRI, slice-to-volume reconstruction is critical for recovering consistent 3D brain volumes from 2D slices. We propose a progressive refinement implicit neural representation framework (PR-INR). Our PR-INR unifies motion correction, structural refinement, and volumetric synthesis within a geometry-aware coordinate space.
arXiv Detail & Related papers (2025-06-19T10:58:43Z) - Subspace Implicit Neural Representations for Real-Time Cardiac Cine MR Imaging [9.373081514803303]
We propose a reconstruction framework based on subspace implicit neural representations for real-time cardiac cine MRI of continuously sampled radial data. Our method directly utilizes the continuously sampled radial k-space spokes during training, thereby eliminating the need for binning and non-uniform FFT.
arXiv Detail & Related papers (2024-12-17T10:06:37Z) - Event-boosted Deformable 3D Gaussians for Dynamic Scene Reconstruction [50.873820265165975]
We introduce the first approach combining event cameras, which capture high-temporal-resolution, continuous motion data, with deformable 3D-GS for dynamic scene reconstruction. We propose a GS-Threshold Joint Modeling strategy, creating a mutually reinforcing process that greatly improves both 3D reconstruction and threshold modeling. We contribute the first event-inclusive 4D benchmark with synthetic and real-world dynamic scenes, on which our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-25T08:23:38Z) - CPT-Interp: Continuous sPatial and Temporal Motion Modeling for 4D Medical Image Interpolation [22.886841531680567]
Motion information from 4D medical imaging offers critical insights into dynamic changes in patient anatomy for clinical assessments and radiotherapy planning.
However, inherent physical and technical constraints of imaging hardware often necessitate a compromise between temporal resolution and image quality.
We propose a novel approach for continuously modeling patient anatomic motion using implicit neural representation.
arXiv Detail & Related papers (2024-05-24T09:35:42Z) - Motion2VecSets: 4D Latent Vector Set Diffusion for Non-rigid Shape Reconstruction and Tracking [52.393359791978035]
Motion2VecSets is a 4D diffusion model for dynamic surface reconstruction from point cloud sequences.
We parameterize 4D dynamics with latent sets instead of using global latent codes.
For more temporally-coherent object tracking, we synchronously denoise deformation latent sets and exchange information across multiple frames.
arXiv Detail & Related papers (2024-01-12T15:05:08Z) - Robust Depth Linear Error Decomposition with Double Total Variation and
Nuclear Norm for Dynamic MRI Reconstruction [15.444386058967579]
Dynamic MRI k-space reconstruction based on Compressed Sensing (CS) still faces open problems.
In this paper, we propose a novel robust low-rank dynamic MRI reconstruction optimization model via a highly under-sampled discrete Fourier transform (DFT).
Experiments on dynamic MRI data demonstrate the superior performance of the proposed method in terms of both reconstruction accuracy and time complexity.
arXiv Detail & Related papers (2023-10-23T13:34:59Z) - Alignment-free HDR Deghosting with Semantics Consistent Transformer [76.91669741684173]
High dynamic range imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output.
Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion.
We propose a novel alignment-free network with a Semantics Consistent Transformer (SCTNet) with both spatial and channel attention modules.
arXiv Detail & Related papers (2023-05-29T15:03:23Z) - Local-Global Temporal Difference Learning for Satellite Video Super-Resolution [53.03380679343968]
We propose to exploit the well-defined temporal difference for efficient and effective temporal compensation. To fully utilize the local and global temporal information within frames, we systematically modeled the short-term and long-term temporal discrepancies. Rigorous objective and subjective evaluations conducted across five mainstream video satellites demonstrate that our method performs favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-10T07:04:40Z) - Coarse-Super-Resolution-Fine Network (CoSF-Net): A Unified End-to-End
Neural Network for 4D-MRI with Simultaneous Motion Estimation and
Super-Resolution [21.75329634476446]
We develop a novel deep learning framework called the coarse-super-resolution-fine network (CoSF-Net) to achieve simultaneous motion estimation and super-resolution in a unified model.
Compared with existing networks and three state-of-the-art conventional algorithms, CoSF-Net not only accurately estimated the deformable vector fields between the respiratory phases of 4D-MRI but also simultaneously improved the spatial resolution of 4D-MRI with enhanced anatomic features.
arXiv Detail & Related papers (2022-11-21T01:42:51Z) - DDoS-UNet: Incorporating temporal information using Dynamic Dual-channel
UNet for enhancing super-resolution of dynamic MRI [0.27998963147546135]
Magnetic resonance imaging (MRI) provides high spatial resolution and excellent soft-tissue contrast without using harmful ionising radiation.
MRI with high temporal resolution suffers from limited spatial resolution.
Deep learning based super-resolution approaches have been proposed to mitigate this trade-off.
This research addresses the problem by creating a deep learning model which attempts to learn both spatial and temporal relationships.
arXiv Detail & Related papers (2022-02-10T22:20:58Z) - Decoupling and Recoupling Spatiotemporal Representation for RGB-D-based
Motion Recognition [62.46544616232238]
Previous motion recognition methods have achieved promising performance through the tightly coupled multi-temporal representation.
We propose to decouple and recouple spatiotemporal representation for RGB-D-based motion recognition.
arXiv Detail & Related papers (2021-12-16T18:59:47Z) - 4D Spatio-Temporal Convolutional Networks for Object Position Estimation
in OCT Volumes [69.62333053044712]
3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single OCT images.
We extend 3D CNNs to 4D-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking.
arXiv Detail & Related papers (2020-07-02T12:02:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.