Frequency Error-Guided Under-sampling Optimization for Multi-Contrast MRI Reconstruction
- URL: http://arxiv.org/abs/2601.09316v1
- Date: Wed, 14 Jan 2026 09:40:34 GMT
- Title: Frequency Error-Guided Under-sampling Optimization for Multi-Contrast MRI Reconstruction
- Authors: Xinming Fang, Chaoyan Huang, Juncheng Li, Jun Wang, Jun Shi, Guixu Zhang
- Abstract summary: Multi-contrast MRI reconstruction has emerged as a promising direction by leveraging complementary information from fully-sampled reference scans. Existing approaches suffer from three major limitations: (1) superficial reference fusion strategies, (2) insufficient utilization of the complementary information provided by the reference contrast, and (3) fixed under-sampling patterns. We propose an efficient and interpretable frequency error-guided reconstruction framework to tackle these issues.
- Score: 24.246450246745905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic resonance imaging (MRI) plays a vital role in clinical diagnostics, yet it remains hindered by long acquisition times and motion artifacts. Multi-contrast MRI reconstruction has emerged as a promising direction by leveraging complementary information from fully-sampled reference scans. However, existing approaches suffer from three major limitations: (1) superficial reference fusion strategies, such as simple concatenation, (2) insufficient utilization of the complementary information provided by the reference contrast, and (3) fixed under-sampling patterns. We propose an efficient and interpretable frequency error-guided reconstruction framework to tackle these issues. We first employ a conditional diffusion model to learn a Frequency Error Prior (FEP), which is then incorporated into a unified framework for jointly optimizing both the under-sampling pattern and the reconstruction network. The proposed reconstruction model employs a model-driven deep unfolding framework that jointly exploits frequency- and image-domain information. In addition, a spatial alignment module and a reference feature decomposition strategy are incorporated to improve reconstruction quality and bridge model-based optimization with data-driven learning for improved physical interpretability. Comprehensive validation across multiple imaging modalities, acceleration rates (4-30x), and sampling schemes demonstrates consistent superiority over state-of-the-art methods in both quantitative metrics and visual quality. All codes are available at https://github.com/fangxinming/JUF-MRI.
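As an illustrative sketch of the two building blocks such frameworks rely on — retrospective Cartesian under-sampling of k-space and a hard data-consistency step that re-imposes the acquired measurements on an estimate — the following NumPy snippet may help; all names (`undersample`, `data_consistency`, the box phantom) are invented for this example and are not from the paper's released code:

```python
import numpy as np

def undersample(image, accel=4, center_frac=0.08, seed=0):
    """Simulate Cartesian under-sampling of a 2-D image's k-space.

    Keeps a fully sampled low-frequency band plus random phase-encode
    lines, so that roughly 1/accel of the lines are acquired.
    """
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    mask = np.zeros(ny, dtype=bool)
    n_center = max(1, int(center_frac * ny))
    lo = ny // 2 - n_center // 2
    mask[lo:lo + n_center] = True                  # always keep low frequencies
    n_random = max(0, ny // accel - n_center)
    mask[rng.choice(np.flatnonzero(~mask), n_random, replace=False)] = True
    kspace = np.fft.fftshift(np.fft.fft2(image))   # centered k-space
    return kspace * mask[:, None], mask

def data_consistency(estimate, measured_k, mask):
    """Hard data consistency: overwrite the estimate's k-space rows
    at acquired phase-encode lines with the actual measurements."""
    k_est = np.fft.fftshift(np.fft.fft2(estimate))
    k_est[mask, :] = measured_k[mask, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_est)))

# Toy usage: a square phantom, 4x acceleration, zero-filled baseline.
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
k_meas, mask = undersample(phantom, accel=4)
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_meas)))
refined = data_consistency(zero_filled, k_meas, mask)
```

Deep unfolding networks such as the one described above interleave learned refinement modules with exactly this kind of data-consistency projection, which is what gives them their physical interpretability.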
Related papers
- Resolution-Independent Neural Operators for Multi-Rate Sparse-View CT [67.14700058302016]
Deep learning methods achieve high-fidelity reconstructions but often overfit to a fixed acquisition setup. We propose the Computed Tomography neural Operator (CTO), a unified CT reconstruction framework that extends to continuous function space. CTO enables consistent multi-sampling-rate and cross-resolution performance, with on average >4dB PSNR gain over CNNs.
arXiv Detail & Related papers (2025-12-13T08:31:46Z)
- Conditional Denoising Diffusion Model-Based Robust MR Image Reconstruction from Highly Undersampled Data [11.174208209806073]
Undersampling strategies can accelerate image acquisition, but they often result in image artifacts and degraded quality. Recent diffusion models have shown promise for reconstructing high-fidelity images from undersampled data by learning powerful image priors. We introduce a conditional denoising diffusion framework with iterative data-consistency correction.
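The alternation this describes — apply a prior/denoising step, then project back onto the measured k-space — can be sketched as below. The 3x3 box filter is a deliberately crude stand-in for the learned conditional diffusion denoiser, and every name here is illustrative rather than taken from the paper:

```python
import numpy as np

def dc_project(x, measured_k, mask):
    """Hard data-consistency projection: overwrite acquired k-space rows."""
    k = np.fft.fftshift(np.fft.fft2(x))
    k[mask, :] = measured_k[mask, :]
    return np.real(np.fft.ifft2(np.fft.ifftshift(k)))

def box_denoise(x):
    """Stand-in prior step: 3x3 box filter. A real pipeline would call a
    learned (e.g. diffusion-based) denoiser here instead."""
    pad = np.pad(x, 1, mode="edge")
    return sum(pad[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def reconstruct(measured_k, mask, n_iter=10):
    """Alternate prior step and data consistency, starting zero-filled."""
    x = np.real(np.fft.ifft2(np.fft.ifftshift(measured_k)))
    for _ in range(n_iter):
        x = dc_project(box_denoise(x), measured_k, mask)
    return x
```

The key property is that the data-consistency step keeps every iterate faithful to the acquired measurements, so the prior only fills in the missing k-space lines.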
arXiv Detail & Related papers (2025-10-07T18:01:08Z)
- DuDoUniNeXt: Dual-domain unified hybrid model for single and multi-contrast undersampled MRI reconstruction [24.937435059755288]
We propose DuDoUniNeXt, a unified dual-domain MRI reconstruction network that can accommodate scenarios involving absent, low-quality, and high-quality reference images.
Experimental results demonstrate that the proposed model surpasses state-of-the-art SC and MC models significantly.
arXiv Detail & Related papers (2024-03-08T12:26:48Z)
- Unsupervised Adaptive Implicit Neural Representation Learning for Scan-Specific MRI Reconstruction [8.721677700107639]
We propose an unsupervised, adaptive coarse-to-fine framework that enhances reconstruction quality without being constrained by the sparsity levels or patterns in under-sampling.
We integrate a novel learning strategy that progressively refines the use of acquired k-space signals for self-supervision.
Our method outperforms current state-of-the-art scan-specific MRI reconstruction techniques, for up to 8-fold under-sampling.
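One common way to obtain self-supervision from under-sampled data alone (in the spirit of SSDU-style training; this is a generic sketch, not this paper's code) is to split the acquired k-space lines into a network-input set and a held-out set used only to compute the loss:

```python
import numpy as np

def split_kspace(mask, loss_frac=0.4, seed=0):
    """Split acquired k-space lines into two disjoint sets.

    `input_mask` feeds the reconstruction network; `loss_mask` holds
    out acquired lines for the self-supervised loss, so no fully
    sampled ground truth is needed.
    """
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(mask)
    n_loss = max(1, int(loss_frac * acquired.size))
    loss_idx = rng.choice(acquired, n_loss, replace=False)
    input_mask = mask.copy()
    input_mask[loss_idx] = False
    loss_mask = np.zeros_like(mask)
    loss_mask[loss_idx] = True
    return input_mask, loss_mask
```

Training then compares the network's predicted k-space against the held-out lines only, which is what makes scan-specific learning possible without a reference image.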
arXiv Detail & Related papers (2023-12-01T16:00:16Z)
- Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction [31.404228406642194]
The key to dynamic or multi-contrast magnetic resonance imaging (MRI) reconstruction lies in exploring inter-frame or inter-contrast information.
We propose a two-stage MRI reconstruction pipeline to address these limitations.
Our proposed method significantly outperforms previous state-of-the-art accelerated MRI reconstruction methods.
arXiv Detail & Related papers (2023-09-25T02:51:00Z)
- CAMP-Net: Consistency-Aware Multi-Prior Network for Accelerated MRI Reconstruction [4.967600587813224]
Undersampling k-space data in MRI reduces scan time but poses challenges for image reconstruction.
We propose CAMP-Net, an unrolling-based Consistency-Aware Multi-Prior Network for accelerated MRI reconstruction.
arXiv Detail & Related papers (2023-06-20T02:21:45Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Fast T2w/FLAIR MRI Acquisition by Optimal Sampling of Information Complementary to Pre-acquired T1w MRI [52.656075914042155]
We propose an iterative framework to optimize the under-sampling pattern for MRI acquisition of another modality.
We have demonstrated superior performance of our learned under-sampling patterns on a public dataset.
arXiv Detail & Related papers (2021-11-11T04:04:48Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid-domain learning framework, which allows it to recover the frequency signal in the $k$-space domain and the anatomical content in the image domain simultaneously.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Multi-Modal MRI Reconstruction with Spatial Alignment Network [51.74078260367654]
In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, given the redundancy between different contrasts or modalities, a target MRI modality under-sampled in k-space can be better reconstructed with the help of a fully sampled reference sequence.
In this paper, we integrate the spatial alignment network with reconstruction, to improve the quality of the reconstructed target modality.
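As a toy illustration of why alignment matters before fusing a reference contrast, here is a classical phase-correlation estimator for an integer translation between two images. The paper's spatial alignment network is learned and handles non-rigid motion, so this is only a hedged stand-in for the general idea:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (row, col) shift that maps `moving` back
    onto `ref`, using the phase of the cross-power spectrum."""
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moving)
    cross = f_ref * np.conj(f_mov)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular peak positions into signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Applying the recovered shift to the reference before fusion removes the gross misalignment that would otherwise corrupt the complementary information.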
arXiv Detail & Related papers (2021-08-12T08:46:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.