Decoupling Multi-Contrast Super-Resolution: Pairing Unpaired Synthesis with Implicit Representations
- URL: http://arxiv.org/abs/2505.05855v1
- Date: Fri, 09 May 2025 07:48:52 GMT
- Title: Decoupling Multi-Contrast Super-Resolution: Pairing Unpaired Synthesis with Implicit Representations
- Authors: Hongyu Rui, Yinzhe Wu, Fanwen Wang, Jiahao Huang, Liutao Yang, Zi Wang, Guang Yang
- Abstract summary: Multi-Contrast Super-Resolution (MCSR) techniques can boost the quality of low-resolution MRI contrasts by using high-resolution contrasts as references. Existing MCSR methods often assume fixed resolution settings and require large, perfectly paired training datasets. We propose a novel Modular Multi-Contrast Super-Resolution framework that eliminates the need for paired training data and supports arbitrary upscaling.
- Score: 6.255537948555454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic Resonance Imaging (MRI) is critical for clinical diagnostics but is often limited by long acquisition times and low signal-to-noise ratios, especially in modalities like diffusion and functional MRI. The multi-contrast nature of MRI presents a valuable opportunity for cross-modal enhancement, where high-resolution (HR) modalities can serve as references to boost the quality of their low-resolution (LR) counterparts, motivating the development of Multi-Contrast Super-Resolution (MCSR) techniques. Prior work has shown that leveraging complementary contrasts can improve SR performance; however, effective feature extraction and fusion across modalities with varying resolutions remains a major challenge. Moreover, existing MCSR methods often assume fixed resolution settings and require large, perfectly paired training datasets, conditions rarely met in real-world clinical environments. To address these challenges, we propose a novel Modular Multi-Contrast Super-Resolution framework that eliminates the need for paired training data and supports arbitrary upscaling. Our method decouples the MCSR task into two stages: (1) Unpaired Cross-Modal Synthesis (U-CMS), which translates a high-resolution reference modality into a synthesized version of the target contrast, and (2) Unsupervised Super-Resolution (U-SR), which reconstructs the final output using implicit neural representations (INRs) conditioned on spatial coordinates. This design enables scale-agnostic and anatomically faithful reconstruction by bridging unpaired cross-modal synthesis with unsupervised resolution enhancement. Experiments show that our method achieves superior performance at 4x and 8x upscaling, with improved fidelity and anatomical consistency over existing baselines. Our framework demonstrates strong potential for scalable, subject-specific, and data-efficient MCSR in real-world clinical settings.
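To ground the second stage, here is a minimal sketch of unsupervised SR with an implicit neural representation: an MLP maps continuous spatial coordinates to intensities, is fitted to the LR target, and is then queried on a denser grid. This illustrates the general INR technique only; it is not the authors' code, it omits the positional encoding and reference conditioning a full system would use, and all names are placeholders.

```python
# Illustrative INR sketch (not the paper's implementation).
import torch
import torch.nn as nn

class INR(nn.Module):
    """MLP mapping normalized (y, x) coordinates to image intensity."""
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers, dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):              # coords: (N, 2) in [-1, 1]
        return self.net(coords)

def grid(h, w):
    """Flattened (h*w, 2) grid of normalized coordinates."""
    ys, xs = torch.linspace(-1, 1, h), torch.linspace(-1, 1, w)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([yy, xx], dim=-1).reshape(-1, 2)

lr = torch.rand(32, 32)                     # placeholder LR target contrast
model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coords = grid(32, 32)
for _ in range(200):                        # brief fit, for illustration only
    opt.zero_grad()
    loss = ((model(coords).squeeze(-1) - lr.reshape(-1)) ** 2).mean()
    loss.backward()
    opt.step()

sr4x = model(grid(128, 128)).reshape(128, 128)   # query any grid density
```

Because the representation is continuous, the same fitted model can be sampled at 4x, 8x, or any non-integer factor, which is what makes the design scale-agnostic.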
Related papers
- Nexus-INR: Diverse Knowledge-guided Arbitrary-Scale Multimodal Medical Image Super-Resolution [14.992795611397579]
Arbitrary-resolution super-resolution (ARSR) provides crucial flexibility for medical image analysis by adapting to diverse spatial resolutions. Traditional CNN-based methods are inherently ill-suited for ARSR, as they are typically designed for fixed upsampling factors. We propose Nexus-INR, a Diverse Knowledge-guided ARSR framework, which employs varied information and downstream tasks to achieve high-quality, adaptive-resolution medical image super-resolution.
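The arbitrary-scale property generally comes from querying a continuous representation at any grid density. Below is a hedged sketch in the LIIF style (bilinear sampling of encoder features at continuous coordinates, followed by a pointwise decoder); the features and decoder are stand-ins, not Nexus-INR's actual modules:

```python
# Hypothetical arbitrary-scale query; not Nexus-INR's architecture.
import torch
import torch.nn.functional as F

feat = torch.rand(1, 64, 32, 32)      # stand-in encoder features of the LR input
decode = torch.nn.Linear(64, 1)       # stand-in pointwise decoding MLP

def query(feat, h_out, w_out):
    ys, xs = torch.linspace(-1, 1, h_out), torch.linspace(-1, 1, w_out)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([xx, yy], dim=-1).unsqueeze(0)        # (1, H, W, 2), x-y order
    sampled = F.grid_sample(feat, coords, align_corners=True)  # (1, C, H, W)
    return decode(sampled.permute(0, 2, 3, 1))                 # (1, H, W, 1)

out = query(feat, 80, 112)            # any output size, not just integer factors
```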
arXiv Detail & Related papers (2025-08-05T04:44:35Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our model improves SSIM by 11% and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with 600× faster inference than diffusion methods.
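For reference, PSNR has a standard definition, so the quoted 4 dB gain translates directly into a mean-squared-error ratio; a minimal sketch:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# A +4 dB gain means MSE shrinks to 10**(-0.4), i.e. about 40% of its old value.
```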
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Inter-slice Super-resolution of Magnetic Resonance Images by Pre-training and Self-supervised Fine-tuning [49.197385954021456]
In clinical practice, 2D magnetic resonance (MR) sequences are widely adopted. While individual 2D slices can be stacked to form a 3D volume, the relatively large slice spacing can pose challenges for visualization and subsequent analysis tasks.
To reduce slice spacing, deep-learning-based super-resolution techniques are widely investigated.
Most current solutions require a substantial number of paired high-resolution and low-resolution images for supervised training, which are typically unavailable in real-world scenarios.
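A common workaround, in the spirit of the self-supervised fine-tuning above, is to manufacture training pairs from the available volume itself by subsampling along the slice axis. A hedged sketch of that idea (not necessarily this paper's exact procedure):

```python
import numpy as np

def make_pair(volume, factor=2):
    """Build a self-supervised (input, target) pair by dropping slices.

    volume: (D, H, W) stack of 2D slices. The subsampled stack is the input
    and the original stack the target for an inter-slice SR model.
    """
    return volume[::factor], volume

vol = np.random.rand(64, 128, 128)
lr, target = make_pair(vol)        # model learns to undo 2x slice spacing
```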
arXiv Detail & Related papers (2024-06-10T02:20:26Z)
- Frequency-Assisted Mamba for Remote Sensing Image Super-Resolution [49.902047563260496]
We present the first attempt to integrate the Vision State Space Model (Mamba) into remote sensing image (RSI) super-resolution.
To achieve better SR reconstruction, we build upon Mamba to devise a Frequency-assisted Mamba framework, dubbed FMSR.
Our FMSR features a multi-level fusion architecture equipped with the Frequency Selection Module (FSM), Vision State Space Module (VSSM), and Hybrid Gate Module (HGM).
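The abstract does not specify the internals of these modules. As a rough illustration of what "frequency assistance" can mean, an image can be split into low- and high-frequency parts in the Fourier domain; this sketch is a generic primitive, not the FSM's definition:

```python
import torch

def split_frequency(x, radius=0.25):
    """Split a (B, C, H, W) tensor into low/high-frequency parts via FFT masking."""
    _, _, H, W = x.shape
    fx = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.linspace(-0.5, 0.5, H), torch.linspace(-0.5, 0.5, W), indexing="ij"
    )
    mask = ((yy ** 2 + xx ** 2).sqrt() <= radius).to(x.dtype)   # low-pass disk
    low = torch.fft.ifft2(torch.fft.ifftshift(fx * mask, dim=(-2, -1))).real
    return low, x - low

low, high = split_frequency(torch.rand(1, 1, 64, 64))   # high band carries edge/texture cues
```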
arXiv Detail & Related papers (2024-05-08T11:09:24Z)
- Uni-COAL: A Unified Framework for Cross-Modality Synthesis and Super-Resolution of MR Images [34.79786909094346]
Cross-modality synthesis (CMS), super-resolution (SR), and their combination (CMSR) have been extensively studied for magnetic resonance imaging (MRI).
None of the previous works is capable of performing CMS, SR, and CMSR using a unified network.
We propose a Unified Co-Modulated Alias-free framework (Uni-COAL) to accomplish the aforementioned tasks with a single network.
arXiv Detail & Related papers (2023-11-14T15:05:59Z)
- Compound Attention and Neighbor Matching Network for Multi-contrast MRI Super-resolution [7.197850827700436]
Multi-contrast super-resolution of MRI can achieve better results than single-image super-resolution.
We propose a novel network architecture with compound-attention and neighbor matching (CANM-Net) for multi-contrast MRI SR.
CANM-Net outperforms state-of-the-art approaches in both retrospective and prospective experiments.
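Neighbor matching between contrasts is, at its core, a nearest-neighbor search over local feature patches. A hedged sketch of that primitive (not CANM-Net's actual module):

```python
import torch
import torch.nn.functional as F

def match_neighbors(target, ref, patch=3):
    """Index the most similar reference patch for each target location.

    target, ref: (B, C, H, W) feature maps from the two contrasts.
    Returns per-location flat indices into the reference patches.
    """
    t = F.unfold(target, patch, padding=patch // 2)   # (B, C*p*p, H*W)
    r = F.unfold(ref, patch, padding=patch // 2)
    t, r = F.normalize(t, dim=1), F.normalize(r, dim=1)
    sim = torch.bmm(t.transpose(1, 2), r)             # cosine similarities (B, HW, HW)
    return sim.argmax(dim=-1)

idx = match_neighbors(torch.rand(1, 8, 16, 16), torch.rand(1, 8, 16, 16))
```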
arXiv Detail & Related papers (2023-07-05T09:44:02Z)
- Implicit Diffusion Models for Continuous Super-Resolution [65.45848137914592]
This paper introduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution.
IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework.
The scaling factor regulates the resolution and accordingly modulates the proportion of the LR information and generated features in the final output.
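A hedged sketch of that scale-dependent behavior (the linear weighting and all names are illustrative assumptions, not IDM's mechanism): a function of the scale factor gates how much upsampled LR content versus generated detail reaches the output.

```python
import torch
import torch.nn.functional as F

def blend(lr, generated, scale, max_scale=8.0):
    """Mix bicubic-upsampled LR with generated detail, weighted by scale factor."""
    up = F.interpolate(lr, size=generated.shape[-2:], mode="bicubic",
                       align_corners=False)
    w = min(scale / max_scale, 1.0)    # small scales trust LR; large scales trust generation
    return (1 - w) * up + w * generated

out = blend(torch.rand(1, 1, 32, 32), torch.rand(1, 1, 128, 128), scale=4.0)
```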
arXiv Detail & Related papers (2023-03-29T07:02:20Z)
- DisC-Diff: Disentangled Conditional Diffusion Model for Multi-Contrast MRI Super-Resolution [8.721585866050757]
We propose a conditional diffusion model, DisC-Diff, for multi-contrast brain MRI super-resolution.
DisC-Diff estimates uncertainty in restorations effectively and ensures a stable optimization process.
We validated the effectiveness of DisC-Diff on two datasets: the IXI dataset, which contains 578 normal brains, and a clinical dataset with 316 pathological brains.
arXiv Detail & Related papers (2023-03-24T11:42:45Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative algorithm into a model-guided deep unfolding network by explicitly taking the MRI observation matrix into account.
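Deep unfolding turns each iteration of a model-based solver into a network stage: a data-consistency step against the observation model, followed by a learned prior. A generic hedged sketch; the observation operator and prior below are toy placeholders, not MGDUN's modules:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldingStage(nn.Module):
    """One unfolded iteration: gradient step on ||Ax - y||^2, then a learned prior."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))   # learnable step size
        self.prior = nn.Conv2d(1, 1, 3, padding=1)    # stand-in denoiser

    def forward(self, x, y, A, At):
        x = x - self.step * At(A(x) - y)              # data-consistency step
        return self.prior(x)                          # learned prior/proximal step

# Toy observation model: 2x average pooling as A, nearest upsampling as A^T surrogate.
A = lambda x: F.avg_pool2d(x, 2)
At = lambda r: F.interpolate(r, scale_factor=2, mode="nearest")

y = torch.rand(1, 1, 32, 32)                          # observed LR image
x = At(y)                                             # initialization
for stage in nn.ModuleList(UnfoldingStage() for _ in range(4)):
    x = stage(x, y, A, At)                            # stages trained end-to-end in practice
```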
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction can yield SR images of higher quality than single-contrast methods.
Existing methods lack effective mechanisms to match and fuse features across contrasts for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)