Unsupervised learning of MRI tissue properties using MRI physics models
- URL: http://arxiv.org/abs/2107.02704v1
- Date: Tue, 6 Jul 2021 16:07:14 GMT
- Title: Unsupervised learning of MRI tissue properties using MRI physics models
- Authors: Divya Varadarajan, Katherine L. Bouman, Andre van der Kouwe, Bruce
Fischl, Adrian V. Dalca
- Abstract summary: Estimating tissue properties from a single scan session using a protocol available on all clinical scanners promises to reduce scan time and cost.
We propose an unsupervised deep-learning strategy that employs MRI physics to estimate all three tissue properties from a single multiecho MRI scan session.
We demonstrate improved accuracy and generalizability for tissue property estimation and MRI synthesis.
- Score: 10.979093424231532
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In neuroimaging, MRI tissue properties characterize underlying neurobiology,
provide quantitative biomarkers for neurological disease detection and
analysis, and can be used to synthesize arbitrary MRI contrasts. Estimating
tissue properties from a single scan session using a protocol available on all
clinical scanners promises to reduce scan time and cost, enable quantitative
analysis in routine clinical scans and provide scan-independent biomarkers of
disease. However, existing methods for estimating tissue properties - most often
$\mathbf{T_1}$ relaxation, $\mathbf{T_2^*}$ relaxation, and proton density
($\mathbf{PD}$) - require data from multiple scan sessions and cannot estimate
all properties from a single clinically available MRI protocol such as the
multiecho MRI scan. In addition, the widespread use of non-standard acquisition
parameters across clinical imaging sites requires estimation methods that can
generalize across varying scanner parameters. Moreover, existing learning
methods are acquisition-protocol specific and cannot estimate properties from
heterogeneous clinical data acquired at different imaging sites. In this work, we propose an
unsupervised deep-learning strategy that employs MRI physics to estimate all
three tissue properties from a single multiecho MRI scan session, and
generalizes across varying acquisition parameters. The proposed strategy
optimizes accurate synthesis of new MRI contrasts from estimated latent tissue
properties, enabling unsupervised training. We also employ random acquisition
parameters during training to achieve acquisition generalization. We provide
the first demonstration of estimating all tissue properties from a single
multiecho scan session. We demonstrate improved accuracy and generalizability
for tissue property estimation and MRI synthesis.
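As an illustration of the strategy described in the abstract, the sketch below pairs a small voxel-wise network with a differentiable spoiled gradient-echo (FLASH) signal model: the network predicts latent PD, T1, and T2* maps, the physics model re-synthesizes the multiecho input from them, and the reconstruction error serves as the unsupervised training loss, with TR, flip angle, and echo times randomized per batch for acquisition generalization. This is a minimal sketch under assumed parameter ranges, network architecture, and loss; it is not the paper's implementation, and all names are illustrative.

```python
# Hedged sketch of unsupervised, physics-based tissue property estimation.
# Assumptions: a spoiled gradient-echo (FLASH) steady-state signal model with
# T2* decay, a toy CNN, and illustrative parameter ranges.
import torch
import torch.nn as nn

def flash_signal(pd, t1, t2s, tr, flip, te):
    """Steady-state spoiled GRE signal with T2* decay at echo time te."""
    e1 = torch.exp(-tr / t1)
    ernst = torch.sin(flip) * (1 - e1) / (1 - torch.cos(flip) * e1)
    return pd * ernst * torch.exp(-te / t2s)

def synthesize(pd, t1, t2s, tr, flip, tes):
    """Re-synthesize a multiecho scan from latent tissue property maps."""
    return torch.cat([flash_signal(pd, t1, t2s, tr, flip, te) for te in tes], dim=1)

class PropertyNet(nn.Module):
    """Toy encoder: multiecho intensities -> positive (PD, T1, T2*) maps."""
    def __init__(self, n_echoes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_echoes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1), nn.Softplus())  # enforce positivity

    def forward(self, x):
        pd, t1, t2s = self.net(x).chunk(3, dim=1)
        return pd, t1 + 0.1, t2s + 0.005  # keep relaxation times away from zero

if __name__ == "__main__":
    torch.manual_seed(0)
    model = PropertyNet(n_echoes=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(100):
        # Simulated tissue maps stand in for real training scans (illustrative ranges, seconds).
        pd_gt = torch.rand(8, 1, 32, 32)
        t1_gt = 0.5 + 2.0 * torch.rand(8, 1, 32, 32)
        t2s_gt = 0.01 + 0.09 * torch.rand(8, 1, 32, 32)

        # Randomized acquisition parameters per batch (assumed ranges).
        tr = 0.02 + 0.03 * torch.rand(1)
        flip = torch.deg2rad(10 + 30 * torch.rand(1))
        tes = [0.003 + 0.006 * i + 0.002 * torch.rand(1) for i in range(4)]

        scan = synthesize(pd_gt, t1_gt, t2s_gt, tr, flip, tes)  # input multiecho scan
        pd, t1, t2s = model(scan)                               # latent tissue properties
        recon = synthesize(pd, t1, t2s, tr, flip, tes)          # physics-based re-synthesis

        loss = nn.functional.mse_loss(recon, scan)              # unsupervised objective
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In practice the inputs would be acquired multiecho images together with their recorded acquisition parameters; the simulated maps above only make the sketch self-contained and runnable.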
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Coordinate-Based Neural Representation Enabling Zero-Shot Learning for 3D Multiparametric Quantitative MRI [4.707353256136099]
We propose SUMMIT, an innovative imaging methodology that includes data acquisition and an unsupervised reconstruction for simultaneous multiparametric qMRI.
The proposed unsupervised approach for qMRI reconstruction also introduces a novel zero-shot learning paradigm for multiparametric imaging applicable to various medical imaging modalities.
arXiv Detail & Related papers (2024-10-02T14:13:06Z)
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Recon-all-clinical: Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI [3.639043225506316]
We introduce recon-all-clinical, a novel method for cortical reconstruction, registration, parcellation, and thickness estimation in brain MRI scans.
Our approach employs a hybrid analysis method built around a convolutional neural network (CNN) trained with domain randomization to predict signed distance functions.
We tested recon-all-clinical on multiple datasets, including over 19,000 clinical scans.
arXiv Detail & Related papers (2024-09-05T19:52:09Z)
- Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs [1.779948689352186]
We analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI.
Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities with and without contrast agents.
The results show how, depending on the input modalities, the models can have very different performances.
arXiv Detail & Related papers (2024-07-15T16:38:59Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (I3CR-WANO) [0.12175619840081271]
We propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data.
Our end-to-end framework i) classifies MRI sequences using an ensemble classifier, ii) preprocesses the data in a reproducible manner, and iii) delineates tumor tissue subtypes.
It is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists.
arXiv Detail & Related papers (2022-10-06T18:23:42Z)
- SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation [5.37260403457093]
We present the Stanford Knee MRI with Multi-Task Evaluation dataset, a collection of quantitative knee MRI (qMRI) scans.
This dataset consists of raw-data measurements of 25,000 slices (155 patients) of anonymized patient MRI scans.
We provide a framework for using qMRI parameter maps, along with image reconstructions and dense image labels, for measuring the quality of qMRI biomarker estimates extracted from MRI reconstruction, segmentation, and detection techniques.
arXiv Detail & Related papers (2022-03-14T02:40:40Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)