Quantitative mapping from conventional MRI using self-supervised physics-guided deep learning: applications to a large-scale, clinically heterogeneous dataset
- URL: http://arxiv.org/abs/2601.05063v1
- Date: Thu, 08 Jan 2026 16:08:58 GMT
- Title: Quantitative mapping from conventional MRI using self-supervised physics-guided deep learning: applications to a large-scale, clinically heterogeneous dataset
- Authors: Jelmer van Lune, Stefano Mandija, Oscar van der Heide, Matteo Maspero, Martin B. Schilder, Jan Willem Dankbaar, Cornelis A. T. van den Berg, Alessandro Sbrizzi
- Abstract summary: This study presents a self-supervised physics-guided deep learning framework to infer quantitative T1, T2, and proton-density maps. The framework was trained and evaluated on a large-scale, clinically heterogeneous dataset.
- Score: 32.995373978092665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic resonance imaging (MRI) is a cornerstone of clinical neuroimaging, yet conventional MRIs provide qualitative information heavily dependent on scanner hardware and acquisition settings. While quantitative MRI (qMRI) offers intrinsic tissue parameters, the requirement for specialized acquisition protocols and reconstruction algorithms restricts its availability and impedes large-scale biomarker research. This study presents a self-supervised physics-guided deep learning framework to infer quantitative T1, T2, and proton-density (PD) maps directly from widely available clinical conventional T1-weighted, T2-weighted, and FLAIR MRIs. The framework was trained and evaluated on a large-scale, clinically heterogeneous dataset comprising 4,121 scan sessions acquired at our institution over six years on four different 3 T MRI scanner systems, capturing real-world clinical variability. The framework integrates Bloch-based signal models directly into the training objective. Across more than 600 test sessions, the generated maps exhibited white matter and gray matter values consistent with literature ranges. Additionally, the generated maps showed invariance to scanner hardware and acquisition protocol groups, with inter-group coefficients of variation $\leq$ 1.1%. Subject-specific analyses demonstrated excellent voxel-wise reproducibility across scanner systems and sequence parameters, with Pearson $r$ and concordance correlation coefficients exceeding 0.82 for T1 and T2. Mean relative voxel-wise differences were low across all quantitative parameters, especially for T2 ($<$ 6%). These results indicate that the proposed framework can robustly transform diverse clinical conventional MRI data into quantitative maps, potentially paving the way for large-scale quantitative biomarker research.
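The abstract states that Bloch-based signal models are integrated directly into the training objective: the network predicts T1, T2, and PD maps, the signal model re-synthesises the weighted images from those maps, and the mismatch with the measured images drives the self-supervised loss. The sketch below illustrates this idea under simplifying assumptions (idealised spin-echo and inversion-recovery signal equations, a plain mean-squared-error loss); the function names, protocol dictionary, and loss form are illustrative, not taken from the paper.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    # Idealised spin-echo magnitude signal (perfect 90/180 pulses):
    # S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

def flair_signal(pd, t1, t2, tr, te, ti):
    # Idealised inversion-recovery (FLAIR-like) magnitude signal with
    # a 180-degree inversion pulse and inversion time TI.
    return pd * np.abs(1.0 - 2.0 * np.exp(-ti / t1) + np.exp(-tr / t1)) * np.exp(-te / t2)

def self_supervised_loss(pred_maps, measured, protocol):
    # Cycle-consistency loss: re-synthesise each weighted contrast from the
    # predicted quantitative maps and compare with the measured image.
    pd, t1, t2 = pred_maps
    loss = 0.0
    for name, img in measured.items():
        p = protocol[name]
        if "TI" in p:  # inversion-recovery contrast (e.g. FLAIR)
            synth = flair_signal(pd, t1, t2, p["TR"], p["TE"], p["TI"])
        else:          # spin-echo contrast (e.g. T1w, T2w)
            synth = spin_echo_signal(pd, t1, t2, p["TR"], p["TE"])
        loss += np.mean((synth - img) ** 2)
    return loss
```

In a training loop, `pred_maps` would come from the network's forward pass and the gradient of this loss would flow back through the (differentiable) signal models, so no ground-truth quantitative maps are needed; the per-session acquisition parameters (TR, TE, TI, all in ms here) parameterise the forward model.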
Related papers
- Self-Supervised Weighted Image Guided Quantitative MRI Super-Resolution [0.4757311250629737]
High-resolution (HR) quantitative MRI (qMRI) relaxometry provides objective tissue characterization but remains clinically underutilized due to lengthy acquisition times. We propose a physics-informed, self-supervised framework for qMRI super-resolution that uses routinely acquired HR weighted MRI (wMRI) scans as guidance.
arXiv Detail & Related papers (2025-12-19T14:15:31Z) - Pattern-Aware Diffusion Synthesis of fMRI/dMRI with Tissue and Microstructural Refinement [34.55493442995441]
We propose PDS, a pattern-aware dual-modal 3D diffusion framework for cross-modality learning. We also introduce a tissue refinement network integrated with an efficient microstructure refinement to maintain structural fidelity and fine details. PDS achieves state-of-the-art results, with PSNR/SSIM scores of 29.83 dB/90.84% for fMRI synthesis and 30.00 dB/77.55% for dMRI synthesis.
arXiv Detail & Related papers (2025-11-07T03:51:00Z) - A Physics-Driven Neural Network with Parameter Embedding for Generating Quantitative MR Maps from Weighted Images [25.060349614170953]
We propose a deep learning-based approach that integrates MRI sequence parameters to improve the accuracy and generalizability of quantitative image synthesis from clinical weighted MRI. Our physics-driven neural network embeds MRI sequence parameters directly into the model via parameter embedding.
arXiv Detail & Related papers (2025-08-11T16:01:12Z) - Coordinate-Based Neural Representation Enabling Zero-Shot Learning for 3D Multiparametric Quantitative MRI [4.707353256136099]
We propose SUMMIT, an innovative imaging methodology that includes data acquisition and an unsupervised reconstruction for simultaneous multiparametric qMRI.
The proposed unsupervised approach for qMRI reconstruction also introduces a novel zero-shot learning paradigm for multiparametric imaging applicable to various medical imaging modalities.
arXiv Detail & Related papers (2024-10-02T14:13:06Z) - Recon-all-clinical: Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI [3.639043225506316]
We introduce recon-all-clinical, a novel method for cortical reconstruction, registration, parcellation, and thickness estimation in brain MRI scans.
Our approach employs a hybrid analysis method that combines a convolutional neural network (CNN) trained with domain randomization to predict signed distance functions.
We tested recon-all-clinical on multiple datasets, including over 19,000 clinical scans.
arXiv Detail & Related papers (2024-09-05T19:52:09Z) - CMRxRecon2024: A Multi-Modality, Multi-View K-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI [40.11088079783521]
The CMRxRecon2024 dataset is the largest and most protocol-diverse publicly available cardiac k-space dataset. It is acquired from 330 healthy volunteers, covering commonly used modalities, anatomical views, and acquisition trajectories in clinical cardiac MRI.
arXiv Detail & Related papers (2024-06-27T09:50:20Z) - Learned Local Attention Maps for Synthesising Vessel Segmentations [43.314353195417326]
We present an encoder-decoder model for synthesising segmentations of the main cerebral arteries in the circle of Willis (CoW) from only T2 MRI.
It uses learned local attention maps generated by dilating the segmentation labels, which forces the network to only extract information from the T2 MRI relevant to synthesising the CoW.
arXiv Detail & Related papers (2023-08-24T15:32:27Z) - K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z) - Generalizable synthetic MRI with physics-informed convolutional networks [57.628770497971246]
We develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition.
We investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols.
arXiv Detail & Related papers (2023-05-21T21:16:20Z) - Unsupervised learning of MRI tissue properties using MRI physics models [10.979093424231532]
Estimating tissue properties from a single scan session using a protocol available on all clinical scanners promises to reduce scan time and cost.
We propose an unsupervised deep-learning strategy that employs MRI physics to estimate all three tissue properties from a single multiecho MRI scan session.
We demonstrate improved accuracy and generalizability for tissue property estimation and MRI synthesis.
arXiv Detail & Related papers (2021-07-06T16:07:14Z) - Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.