Distillation-Driven Diffusion Model for Multi-Scale MRI Super-Resolution: Make 1.5T MRI Great Again
- URL: http://arxiv.org/abs/2501.18736v1
- Date: Thu, 30 Jan 2025 20:21:11 GMT
- Title: Distillation-Driven Diffusion Model for Multi-Scale MRI Super-Resolution: Make 1.5T MRI Great Again
- Authors: Zhe Wang, Yuhua Ru, Fabian Bauer, Aladine Chetouani, Fang Chen, Liping Zhang, Didier Hans, Rachid Jennane, Mohamed Jarraya, Yung Hsin Chen
- Abstract summary: 7T MRI provides significantly enhanced spatial resolution, enabling finer visualization of anatomical structures.
A novel Super-Resolution (SR) model is proposed to generate 7T-like MRI from standard 1.5T MRI scans.
The student model refines the 7T SR task in progressive steps, leveraging feature maps from the inference phase of the teacher model as guidance.
- Score: 8.193689534916988
- License:
- Abstract: Magnetic Resonance Imaging (MRI) offers critical insights into microstructural details; however, the spatial resolution of standard 1.5T imaging systems is often limited. In contrast, 7T MRI provides significantly enhanced spatial resolution, enabling finer visualization of anatomical structures. Despite this, the high cost and limited availability of 7T MRI hinder its widespread use in clinical settings. To address this challenge, a novel Super-Resolution (SR) model is proposed to generate 7T-like MRI from standard 1.5T MRI scans. Our approach leverages a diffusion-based architecture, incorporating gradient nonlinearity correction and bias field correction data from 7T imaging as guidance. Moreover, to improve deployability, a progressive distillation strategy is introduced. Specifically, the student model refines the 7T SR task in progressive steps, leveraging feature maps from the inference phase of the teacher model as guidance, allowing the student to progressively approach 7T SR performance with a smaller, deployable model size. Experimental results demonstrate that our baseline teacher model achieves state-of-the-art SR performance. The student model, while lightweight, sacrifices minimal performance. Furthermore, the student model can accept MRI inputs at varying resolutions without retraining, further enhancing deployment flexibility. The clinical relevance of our proposed method is validated using clinical data from Massachusetts General Hospital. Our code is available at https://github.com/ZWang78/SR.
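The feature-guided distillation idea can be illustrated with a minimal sketch. The snippet below is not the authors' implementation (their released code is at https://github.com/ZWang78/SR); the module names (`TinyUNet`), channel widths, and loss weights are assumptions made only to show how intermediate feature maps taken from a frozen teacher's inference pass might guide a smaller student through a combined reconstruction and feature-matching objective.

```python
# Hypothetical sketch of feature-map-guided distillation for a diffusion SR student.
# All names, shapes, and loss weights are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Stand-in denoiser; the real teacher/student would be full diffusion U-Nets."""
    def __init__(self, width):
        super().__init__()
        self.enc = nn.Conv2d(1, width, 3, padding=1)
        self.dec = nn.Conv2d(width, 1, 3, padding=1)

    def forward(self, x):
        feat = F.relu(self.enc(x))   # intermediate feature map used as guidance
        return self.dec(feat), feat

teacher = TinyUNet(width=64).eval()   # frozen model assumed already trained for 1.5T -> 7T-like SR
student = TinyUNet(width=16)          # smaller, deployable model
proj = nn.Conv2d(16, 64, 1)           # aligns student feature channels with the teacher's
opt = torch.optim.Adam(list(student.parameters()) + list(proj.parameters()), lr=1e-4)

lr_batch = torch.randn(4, 1, 64, 64)  # toy 1.5T-like inputs
hr_batch = torch.randn(4, 1, 64, 64)  # toy 7T-like targets

with torch.no_grad():
    _, t_feat = teacher(lr_batch)     # teacher feature maps from its inference pass

s_out, s_feat = student(lr_batch)
# Reconstruction loss plus a feature-matching term against the teacher's features.
loss = F.l1_loss(s_out, hr_batch) + 0.1 * F.mse_loss(proj(s_feat), t_feat)
loss.backward()
opt.step()
print(f"distillation step loss: {loss.item():.4f}")
```

Because the sketch is fully convolutional, it also accepts inputs of varying spatial size without retraining, which loosely mirrors the multi-resolution deployment flexibility claimed for the student model.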
Related papers
- LDPM: Towards undersampled MRI reconstruction with MR-VAE and Latent Diffusion Prior [2.3007720628527104]
A Latent Diffusion Prior based undersampled MRI reconstruction (LDPM) method is proposed.
A sketcher module is utilized to provide appropriate control and balance the quality and fidelity of the reconstructed MR images.
A VAE adapted for MRI tasks (MR-VAE) is explored, which can serve as the backbone for future MR-related tasks.
arXiv Detail & Related papers (2024-11-05T09:51:59Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- When Diffusion MRI Meets Diffusion Model: A Novel Deep Generative Model for Diffusion MRI Generation [9.330836344638731]
We propose a novel generative approach to perform dMRI generation using deep diffusion models.
It can generate high-dimensional (4D), high-resolution data while preserving gradient information and brain structure.
Our approach demonstrates highly enhanced performance in generating dMRI images when compared to the current state-of-the-art methods.
arXiv Detail & Related papers (2024-08-23T08:03:15Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Transferring Ultrahigh-Field Representations for Intensity-Guided Brain Segmentation of Low-Field Magnetic Resonance Imaging [51.92395928517429]
The use of 7T MRI is limited by its high cost and lower accessibility compared to low-field (LF) MRI.
This study proposes a deep-learning framework that fuses the input LF magnetic resonance feature representations with the inferred 7T-like feature representations for brain image segmentation tasks.
arXiv Detail & Related papers (2024-02-13T12:21:06Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because these artifacts shift the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- CL-MRI: Self-Supervised Contrastive Learning to Improve the Accuracy of Undersampled MRI Reconstruction [25.078280843551322]
We introduce a self-supervised pretraining procedure using contrastive learning to improve the accuracy of undersampled MRI reconstruction.
Our experiments demonstrate improved reconstruction accuracy across a range of acceleration factors and datasets.
arXiv Detail & Related papers (2023-06-01T10:29:58Z)
- Iterative Data Refinement for Self-Supervised MR Image Reconstruction [18.02961646651716]
We propose a data refinement framework for self-supervised MR image reconstruction.
We first analyze the reason for the performance gap between self-supervised and supervised methods.
Then, we design an effective self-supervised training data refinement method to reduce this data bias.
arXiv Detail & Related papers (2022-11-24T06:57:16Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep learning [47.68307909984442]
Single Image Super-Resolution (SISR) is a technique aimed at obtaining high-resolution (HR) details from a single low-resolution input image.
Deep learning extracts prior knowledge from large datasets and produces superior MRI images from their low-resolution counterparts.
arXiv Detail & Related papers (2021-02-25T14:52:23Z)
- Fine-tuning deep learning model parameters for improved super-resolution of dynamic MRI with prior-knowledge [0.3914676152740142]
This research presents a super-resolution (SR) MRI reconstruction with prior-knowledge-based fine-tuning to maximise spatial information.
A U-Net-based network is trained on a benchmark dataset and fine-tuned using one subject-specific static high-resolution MRI as prior knowledge to obtain high-resolution dynamic images.
arXiv Detail & Related papers (2021-02-04T16:11:53Z)