WKGM: Weight-K-space Generative Model for Parallel Imaging
Reconstruction
- URL: http://arxiv.org/abs/2205.03883v1
- Date: Sun, 8 May 2022 14:28:20 GMT
- Title: WKGM: Weight-K-space Generative Model for Parallel Imaging
Reconstruction
- Authors: Zongjiang Tu, Die Liu, Xiaoqing Wang, Chen Jiang, Minghui Zhang,
Qiegen Liu, Dong Liang
- Abstract summary: We propose to explore the k-space domain via robust generative modeling for flexible PI reconstruction, coined the weight-k-space generative model (WKGM).
WKGM is a generalized k-space domain model, where the k-space weighting technology and high-dimensional space strategy are efficiently incorporated for score-based generative model training, resulting in good and robust reconstruction.
Experimental results on datasets with varying sampling patterns and acceleration factors demonstrate that WKGM can attain state-of-the-art reconstruction results.
- Score: 15.555999296521476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parallel Imaging (PI) is one of the most important and successful
developments in accelerating magnetic resonance imaging (MRI). Recently, deep
learning PI has emerged as an effective technique to accelerate MRI.
Nevertheless, most approaches have so far been based on the image domain. In this
work, we propose to explore the k-space domain via robust generative modeling
for flexible PI reconstruction, coined weight-k-space generative model (WKGM).
Specifically, WKGM is a generalized k-space domain model, where the k-space
weighting technology and high-dimensional space strategy are efficiently
incorporated for score-based generative model training, resulting in good and
robust reconstruction. In addition, WKGM is flexible and thus can
synergistically combine various traditional k-space PI models, generating
learning-based priors to produce high-fidelity reconstructions. Experimental
results on datasets with varying sampling patterns and acceleration factors
demonstrate that WKGM can attain state-of-the-art reconstruction results under
the well-learned k-space generative prior.
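The k-space weighting idea at the heart of WKGM can be illustrated with a small sketch. Low-frequency k-space coefficients dominate high-frequency ones by orders of magnitude, which makes raw k-space difficult for a score model to learn; multiplying by a |k|-dependent weight flattens this dynamic range, and the weighting is inverted after sampling. The radial power-law weight below is an illustrative assumption, not necessarily the exact function used in the paper.

```python
import numpy as np

def weight_kspace(kspace, p=0.5, eps=1e-6):
    """Apply a radial |k|-dependent weight to balance the large dynamic
    range between low- and high-frequency k-space coefficients before
    training a score-based generative model on the weighted data.
    W(k) = (|k| + eps)**p is an illustrative choice of weighting."""
    ny, nx = kspace.shape[-2:]
    ky = np.fft.fftshift(np.fft.fftfreq(ny))
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    kr = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)  # radial frequency
    w = (kr + eps) ** p  # small at the center, suppresses dominant low frequencies
    return kspace * w, w

def unweight_kspace(weighted, w):
    """Invert the weighting to recover the original k-space data."""
    return weighted / w

# toy example: centered k-space of a random "image"
img = np.random.rand(64, 64)
ksp = np.fft.fftshift(np.fft.fft2(img))
wk, w = weight_kspace(ksp)
rec = unweight_kspace(wk, w)
```

Because the weight is strictly positive, the transform is exactly invertible, so a reconstruction pipeline can train and sample in the weighted domain and then map back to ordinary k-space.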
Related papers
- Scalable Spatio-Temporal SE(3) Diffusion for Long-Horizon Protein Dynamics [51.85385061275941]
Molecular dynamics (MD) simulations remain the gold standard for studying protein dynamics. Recent generative models have shown promise in accelerating simulations, yet they struggle with long-horizon generation. We present STAR-MD, a scalable diffusion model that generates physically plausible protein trajectories over micro-scale timescales.
arXiv Detail & Related papers (2026-02-02T14:13:28Z) - On The Role of K-Space Acquisition in MRI Reconstruction Domain-Generalization [0.0]
We show that the benefits of learned k-space sampling can extend beyond the training domain, enabling superior reconstruction performance under domain shifts. We propose a novel method that enhances domain robustness by introducing acquisition uncertainty during training, stochastically perturbing k-space trajectories to simulate variability across scanners and imaging conditions.
arXiv Detail & Related papers (2025-12-06T18:49:46Z) - K-Syn: K-space Data Synthesis in Ultra Low-data Regimes [0.7817545394809559]
This letter focuses on feature-level modeling in the frequency domain, enabling stable and rich generation even in ultra low-data regimes. We integrate k-space data across time frames with multiple fusion strategies to steer and further optimize the generative trajectory. Experimental results demonstrate that the proposed method possesses strong generative ability in low-data regimes.
arXiv Detail & Related papers (2025-09-04T12:25:05Z) - High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations [51.90920900332569]
Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data. Recent approaches address this by introducing additional features along rigid geometric structures. We propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR).
arXiv Detail & Related papers (2025-06-07T16:45:17Z) - AniGaussian: Animatable Gaussian Avatar with Pose-guided Deformation [51.61117351997808]
We introduce an innovative pose-guided deformation strategy that constrains the dynamic Gaussian avatar with SMPL pose guidance.
We incorporate rigid-based priors from previous works to enhance the dynamic transform capabilities of the Gaussian model.
Through extensive comparisons with existing methods, AniGaussian demonstrates superior performance in both qualitative results and quantitative metrics.
arXiv Detail & Related papers (2025-02-24T06:53:37Z) - Maximizing domain generalization in fetal brain tissue segmentation: the role of synthetic data generation, intensity clustering and real image fine-tuning [1.1443262816483672]
Recent approaches based on domain randomization, like SynthSeg, have shown great potential for single source domain generalization.
We show how to maximize the out-of-domain (OOD) generalization potential of SynthSeg-based methods in fetal brain MRI.
arXiv Detail & Related papers (2024-11-11T10:17:44Z) - Global k-Space Interpolation for Dynamic MRI Reconstruction using Masked
Image Modeling [10.74920257710449]
In dynamic Magnetic Resonance Imaging (MRI), k-space is typically undersampled due to limited scan time.
We propose a novel Transformer-based k-space Global Interpolation Network, termed k-GIN.
Our k-GIN learns global dependencies among low- and high-frequency components of 2D+t k-space and uses them to interpolate unsampled data.
arXiv Detail & Related papers (2023-07-24T10:20:14Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from
Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields, to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z) - Pixelated Reconstruction of Foreground Density and Background Surface
Brightness in Gravitational Lensing Systems using Recurrent Inference
Machines [116.33694183176617]
We use a neural network based on the Recurrent Inference Machine to reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
When compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions.
arXiv Detail & Related papers (2023-01-10T19:00:12Z) - Universal Generative Modeling in Dual-domain for Dynamic MR Imaging [22.915796840971396]
We propose a k-space and image Dual-Domain collaborative Universal Generative Model (DD-UGM) to reconstruct highly under-sampled measurements.
More precisely, we extract prior components from both image and k-space domains via a universal generative model and adaptively handle these prior components for faster processing.
arXiv Detail & Related papers (2022-12-15T03:04:48Z) - Learning Optimal K-space Acquisition and Reconstruction using
Physics-Informed Neural Networks [46.751292014516025]
Deep neural networks have been applied to reconstruct undersampled k-space data and have shown improved reconstruction performance.
This work proposes a novel framework to learn k-space sampling trajectories by considering it as an Ordinary Differential Equation (ODE) problem.
Experiments were conducted on different in-vivo datasets (e.g., brain and knee images) acquired with different sequences.
arXiv Detail & Related papers (2022-04-05T20:28:42Z) - K-space and Image Domain Collaborative Energy based Model for Parallel
MRI Reconstruction [21.317550364310343]
Decreasing magnetic resonance (MR) image acquisition times can potentially make MR examinations more accessible.
We propose a k-space and image domain collaborative generative model to comprehensively estimate the MR data from under-sampled measurement.
Experimental comparisons with state-of-the-art methods demonstrate that the proposed hybrid method has lower reconstruction error and is more stable under different acceleration factors.
arXiv Detail & Related papers (2022-03-21T07:38:59Z) - MRI Reconstruction Using Deep Energy-Based Model [21.748514538109173]
We propose a novel regularization strategy to take advantage of self-adversarial cogitation of the deep energy-based model.
In contrast to other generative models for reconstruction, the proposed method utilizes deep energy-based information as the image prior in reconstruction to improve image quality.
arXiv Detail & Related papers (2021-09-07T05:24:55Z) - Robust Compressed Sensing MRI with Deep Generative Priors [84.69062247243953]
We present the first successful application of the CSGM framework on clinical MRI data.
We train a generative prior on brain scans from the fastMRI dataset, and show that posterior sampling via Langevin dynamics achieves high quality reconstructions.
arXiv Detail & Related papers (2021-08-03T08:52:06Z) - Deep Gaussian Scale Mixture Prior for Spectral Compressive Imaging [48.34565372026196]
We propose a novel HSI reconstruction method based on the Maximum a Posteriori (MAP) estimation framework.
We also propose to estimate the local means of the GSM models by a deep convolutional neural network (DCNN).
arXiv Detail & Related papers (2021-03-12T08:57:06Z)
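Several of the papers above (WKGM, DD-UGM, Robust Compressed Sensing MRI with Deep Generative Priors) share the same reconstruction loop: alternate a score-guided Langevin update with a k-space data-consistency projection onto the sampled locations. The sketch below uses a toy Gaussian-prior score function in place of a trained score network; all function names and step sizes are illustrative assumptions, not the papers' exact algorithms.

```python
import numpy as np

def data_consistency(kspace, measured, mask):
    """Hard data-consistency projection: keep the actual measurements
    at sampled k-space locations, the model's prediction elsewhere."""
    return np.where(mask, measured, kspace)

def langevin_reconstruct(measured, mask, score_fn, n_steps=200, step=1e-2, rng=None):
    """Langevin-style sketch: ascend the learned log-prior with injected
    noise, interleaving a data-consistency projection in k-space.
    `score_fn(x)` stands in for a trained score network."""
    rng = np.random.default_rng(rng)
    # zero-filled initialization from the undersampled measurements
    x = np.real(np.fft.ifft2(np.fft.ifftshift(measured)))
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * score_fn(x) + np.sqrt(2 * step) * noise
        k = np.fft.fftshift(np.fft.fft2(x))
        k = data_consistency(k, measured, mask)
        x = np.real(np.fft.ifft2(np.fft.ifftshift(k)))
    return x

# toy run: standard-Gaussian prior score (-x), 30% random sampling mask
img = np.random.rand(32, 32)
full_k = np.fft.fftshift(np.fft.fft2(img))
mask = np.random.default_rng(0).random((32, 32)) < 0.3
measured = full_k * mask
recon = langevin_reconstruct(measured, mask, score_fn=lambda x: -x,
                             n_steps=50, rng=0)
```

A real pipeline would replace the toy score with a trained network (operating on weighted k-space in WKGM's case, or on both domains in DD-UGM's) and anneal the noise level over the iterations.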
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of these summaries (including all information) and is not responsible for any consequences of their use.