A New k-Space Model for Non-Cartesian Fourier Imaging
- URL: http://arxiv.org/abs/2505.05647v1
- Date: Thu, 08 May 2025 21:06:40 GMT
- Title: A New k-Space Model for Non-Cartesian Fourier Imaging
- Authors: Chin-Cheng Chan, Justin P. Haldar
- Abstract summary: We propose a new model that is more resilient to the limitations (old and new) of the previous approach. Specifically, the new model is based on a Fourier-domain basis expansion rather than the standard image-domain voxel-based approach. Illustrative results are presented in the context of non-Cartesian MRI reconstruction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For the past several decades, it has been popular to reconstruct Fourier imaging data using model-based approaches that can easily incorporate physical constraints and advanced regularization/machine learning priors. The most common modeling approach is to represent the continuous image as a linear combination of shifted "voxel" basis functions. Although well-studied and widely-deployed, this voxel-based model is associated with longstanding limitations, including high computational costs, slow convergence, and a propensity for artifacts. In this work, we reexamine this model from a fresh perspective, identifying new issues that may have been previously overlooked (including undesirable approximation, periodicity, and nullspace characteristics). Our insights motivate us to propose a new model that is more resilient to the limitations (old and new) of the previous approach. Specifically, the new model is based on a Fourier-domain basis expansion rather than the standard image-domain voxel-based approach. Illustrative results, which are presented in the context of non-Cartesian MRI reconstruction, demonstrate that the new model enables improved image quality (reduced artifacts) and/or reduced computational complexity (faster computations and improved convergence).
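The abstract's "voxel-based model" represents the continuous image as a linear combination of shifted basis functions, so each non-Cartesian k-space sample is a non-uniform DFT of the voxel coefficients weighted by the Fourier transform of the shared basis function. The sketch below illustrates that standard forward model directly (not the paper's proposed Fourier-domain model); the function name `voxel_forward_model`, the rect basis choice, and the direct O(MN) evaluation are illustrative assumptions — in practice a NUFFT would be used.

```python
import numpy as np

def voxel_forward_model(coeffs, positions, kpoints, delta=1.0):
    """Direct evaluation of s(k_m) = phi_hat(k_m) * sum_n c_n exp(-i 2*pi k_m . r_n).

    coeffs    : (N,) voxel coefficients c_n
    positions : (N, d) voxel center locations r_n
    kpoints   : (M, d) non-Cartesian k-space sample locations k_m
    delta     : voxel width (rect basis assumed here for illustration)
    """
    # Fourier transform of a rect voxel basis: a product of sincs per dimension
    phi_hat = np.prod(delta * np.sinc(delta * kpoints), axis=1)
    # Non-uniform DFT matrix; O(MN) cost, which is why NUFFTs are preferred
    E = np.exp(-2j * np.pi * kpoints @ positions.T)
    return phi_hat * (E @ coeffs)

# Tiny 1D example: 8 unit-spaced voxels, 16 random (non-Cartesian) k-space samples
rng = np.random.default_rng(0)
pos = np.arange(8, dtype=float)[:, None]
c = rng.standard_normal(8)
k = rng.uniform(-0.5, 0.5, size=(16, 1))
s = voxel_forward_model(c, pos, k)
```

At k = 0 the basis transform is 1 and the model reduces to the sum of the voxel coefficients, which gives a quick sanity check on the implementation.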
Related papers
- NODER: Image Sequence Regression Based on Neural Ordinary Differential Equations [2.711538918087856]
We propose an optimization-based new framework called NODER, which leverages neural ordinary differential equations to capture complex underlying dynamics.
Our model needs only a couple of images in a sequence for prediction, which is practical, especially for clinical situations.
arXiv Detail & Related papers (2024-07-18T07:50:46Z)
- Diffeomorphic Template Registration for Atmospheric Turbulence Mitigation [50.16004183320537]
We describe a method for recovering the irradiance underlying a collection of images corrupted by atmospheric turbulence.
We select one of the images as a reference, and model the deformation in this image by the aggregation of the optical flow from it to other images.
Our method achieves state-of-the-art performance despite its simplicity.
arXiv Detail & Related papers (2024-05-06T17:39:53Z)
- FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis [48.9652334528436]
We introduce an innovative, training-free approach FouriScale from the perspective of frequency domain analysis.
We replace the original convolutional layers in pre-trained diffusion models by incorporating a dilation technique along with a low-pass operation.
Our method successfully balances the structural integrity and fidelity of generated images, achieving arbitrary-size, high-resolution, and high-quality generation.
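The FouriScale summary describes combining dilated convolutions with a low-pass operation in the frequency domain. As a hedged illustration of the low-pass half only — the mask shape, cutoff fraction, and function name `fft_lowpass` are assumptions, not the paper's exact design — a centered frequency mask can be applied via the FFT:

```python
import numpy as np

def fft_lowpass(image, keep_frac=0.25):
    """Zero out frequencies outside a centered rectangle covering keep_frac of each axis."""
    # Shift DC to the center so the retained band is a centered rectangle
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    mask = np.zeros_like(F, dtype=bool)
    ch, cw = int(h * keep_frac / 2), int(w * keep_frac / 2)
    mask[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw] = True
    # Inverse-transform the masked spectrum; the input is real, so keep the real part
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

img = np.random.default_rng(1).standard_normal((32, 32))
low = fft_lowpass(img)
```

A constant image contains only the DC component, so it passes through this filter unchanged, which makes a convenient correctness check.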
arXiv Detail & Related papers (2024-03-19T17:59:33Z)
- Low-resolution Prior Equilibrium Network for CT Reconstruction [3.5639148953570836]
We present a novel deep learning-based CT reconstruction model, where the low-resolution image is introduced to obtain an effective regularization term for improving the network's robustness.
Experimental results on both sparse-view and limited-angle reconstruction problems are provided, demonstrating that our end-to-end low-resolution prior equilibrium model outperforms other state-of-the-art methods in terms of noise reduction, contrast-to-noise ratio, and preservation of edge details.
arXiv Detail & Related papers (2024-01-28T13:59:58Z)
- ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance of each point of a NeRF -- i.e., the locations where it is likely visible -- as a stochastic field.
We show that modeling per-point provenance during NeRF optimization enriches the model with information, leading to improvements in novel view synthesis and uncertainty estimation.
arXiv Detail & Related papers (2024-01-16T06:19:18Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Model-corrected learned primal-dual models for fast limited-view photoacoustic tomography [2.631277214890658]
Learned iterative reconstructions hold promise to accelerate tomographic imaging with empirical robustness to model perturbations.
Computational feasibility can be obtained by using fast approximate models, but this creates a need to compensate for model errors.
We advance the methodological and theoretical basis for model corrections in learned image reconstructions by embedding the model correction in a learned primal-dual framework.
arXiv Detail & Related papers (2023-04-04T17:13:22Z)
- Universal Generative Modeling in Dual-domain for Dynamic MR Imaging [22.915796840971396]
We propose a k-space and image Dual-Domain collaborative Universal Generative Model (DD-UGM) to reconstruct highly under-sampled measurements.
More precisely, we extract prior components from both image and k-space domains via a universal generative model and adaptively handle these prior components for faster processing.
arXiv Detail & Related papers (2022-12-15T03:04:48Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.