Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes
from clinical MRI exams with scans of different orientation, resolution and
contrast
- URL: http://arxiv.org/abs/2012.13340v1
- Date: Thu, 24 Dec 2020 17:29:53 GMT
- Title: Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes
from clinical MRI exams with scans of different orientation, resolution and
contrast
- Authors: Juan Eugenio Iglesias, Benjamin Billot, Yael Balbastre, Azadeh Tabari,
John Conklin, Daniel C. Alexander, Polina Golland, Brian L. Edlow, Bruce
Fischl
- Abstract summary: We present SynthSR, a method to train a CNN that receives one or more thick-slice scans with different contrast, resolution and orientation.
The presented method does not require any preprocessing, e.g., skull stripping or bias field correction.
- Score: 4.987889348212769
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most existing algorithms for automatic 3D morphometry of human brain MRI
scans are designed for data with near-isotropic voxels at approximately 1 mm
resolution, and frequently have contrast constraints as well - typically
requiring T1 scans (e.g., MP-RAGE). This limitation prevents the analysis of
millions of MRI scans acquired with large inter-slice spacing ("thick slice")
in clinical settings every year. The inability to quantitatively analyze these
scans hinders the adoption of quantitative neuroimaging in healthcare, and
precludes research studies that could attain huge sample sizes and hence
greatly improve our understanding of the human brain. Recent advances in CNNs
are producing outstanding results in super-resolution and contrast synthesis of
MRI. However, these approaches are very sensitive to the contrast, resolution
and orientation of the input images, and thus do not generalize to diverse
clinical acquisition protocols - even within sites. Here we present SynthSR, a
method to train a CNN that receives one or more thick-slice scans with
different contrast, resolution and orientation, and produces an isotropic scan
of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not
require any preprocessing, e.g., skull stripping or bias field correction.
Crucially, SynthSR trains on synthetic input images generated from 3D
segmentations, and can thus be used to train CNNs for any combination of
contrasts, resolutions and orientations without high-resolution training data.
We test the images generated with SynthSR in an array of common downstream
analyses, and show that they can be reliably used for subcortical segmentation
and volumetry, image registration (e.g., for tensor-based morphometry), and, if
some image quality requirements are met, even cortical thickness morphometry.
The source code is publicly available at github.com/BBillot/SynthSR.
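
To make the training strategy above concrete, here is a minimal sketch of how a synthetic thick-slice input of arbitrary contrast could be generated from a 1 mm label map and paired with a 1 mm target for regression; the label handling, resolutions, corruption strengths and the helper name `synth_pair` are illustrative assumptions, not the authors' implementation (the real code lives at github.com/BBillot/SynthSR).

```python
# Minimal sketch (not the authors' code) of the SynthSR training idea:
# synthesize a random-contrast, thick-slice input from a 3D label map and
# pair it with the 1 mm target volume the CNN should regress.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def synth_pair(labels, target_1mm, slice_spacing_mm=6.0, axis=2):
    """labels: integer label map at 1 mm; target_1mm: 1 mm MP-RAGE-like volume."""
    # 1) Random contrast: sample one Gaussian intensity distribution per label.
    mu = np.random.uniform(0, 255, size=labels.max() + 1)
    sd = np.random.uniform(0, 25, size=labels.max() + 1)
    synth = mu[labels] + sd[labels] * np.random.randn(*labels.shape)
    # 2) Smooth multiplicative bias field (low-frequency corruption).
    bias = gaussian_filter(np.random.randn(*labels.shape), sigma=40)
    synth *= np.exp(0.3 * bias / (np.abs(bias).max() + 1e-6))
    # 3) Simulate thick slices: blur along one axis, downsample, upsample back.
    sigma = [0.0, 0.0, 0.0]
    sigma[axis] = slice_spacing_mm / 2.355          # FWHM -> Gaussian sigma
    blurred = gaussian_filter(synth, sigma=sigma)
    factor = [1.0, 1.0, 1.0]
    factor[axis] = 1.0 / slice_spacing_mm
    down = zoom(blurred, factor, order=1)
    up = zoom(down, [o / d for o, d in zip(blurred.shape, down.shape)], order=1)
    # 4) A 3D regression CNN (e.g. a U-Net) is then trained to map `up` to target_1mm.
    return up.astype(np.float32), target_1mm.astype(np.float32)
```

Because such pairs are drawn on the fly with random contrasts, spacings and orientations, the same recipe can cover any combination of input protocols without requiring acquired high-resolution training data.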
Related papers
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Recon-all-clinical: Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI [3.639043225506316]
We introduce recon-all-clinical, a novel method for cortical reconstruction, registration, parcellation, and thickness estimation in brain MRI scans.
Our approach employs a hybrid analysis method that combines a convolutional neural network (CNN) trained with domain randomization to predict signed distance functions.
We tested recon-all-clinical on multiple datasets, including over 19,000 clinical scans.
arXiv Detail & Related papers (2024-09-05T19:52:09Z)
- CoNeS: Conditional neural fields with shift modulation for multi-sequence MRI translation [5.662694302758443]
Multi-sequence magnetic resonance imaging (MRI) has found wide applications in both modern clinical studies and deep learning research.
It frequently occurs that one or more MRI sequences are missing, due to differences in image acquisition protocols or patient contraindications to contrast agents.
One promising approach is to leverage generative models to synthesize the missing sequences, which can serve as a surrogate acquisition.
arXiv Detail & Related papers (2023-09-06T19:01:58Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because artifacts shift the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to make model performance robust to varying image artifacts (a minimal normalization-swap sketch appears after this list).
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Cortical analysis of heterogeneous clinical brain MRI scans for large-scale neuroimaging studies [2.930354460501359]
Surface analysis of the cortex is ubiquitous in human neuroimaging with MRI, e.g., for cortical registration, parcellation, or thickness estimation.
Here we present the first method for cortical reconstruction, registration, parcellation, and thickness estimation for clinical brain MRI scans of any resolution and pulse sequence.
arXiv Detail & Related papers (2023-05-02T23:36:06Z)
- Single-subject Multi-contrast MRI Super-resolution via Implicit Neural Representations [9.683341998041634]
Implicit Neural Representations (INRs) are proposed to learn two different contrasts of complementary views as a single continuous spatial function (see the coordinate-MLP sketch after this list).
Our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets.
arXiv Detail & Related papers (2023-03-27T10:18:42Z)
- Robust Segmentation of Brain MRI in the Wild with Hierarchical CNNs and no Retraining [1.0499611180329802]
Retrospective analysis of brain MRI scans acquired in the clinic has the potential to enable neuroimaging studies with sample sizes much larger than those found in research datasets.
Recent advances in convolutional neural networks (CNNs) and domain randomisation for image segmentation may enable morphometry of clinical MRI at scale.
We show that SynthSeg is generally robust, but frequently falters on scans with low signal-to-noise ratio or poor tissue contrast.
We propose SynthSeg+, a novel method that greatly mitigates these problems using a hierarchy of conditional segmentation and denoising CNNs.
arXiv Detail & Related papers (2022-03-03T19:18:28Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep learning [47.68307909984442]
Single Image Super-Resolution (SISR) is a technique aimed to obtain high-resolution (HR) details from one single low-resolution input image.
Deep learning extracts prior knowledge from large datasets and produces superior MRI images from their low-resolution counterparts.
arXiv Detail & Related papers (2021-02-25T14:52:23Z)
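
For the normalization entry above ("On Sensitivity and Robustness of Normalization Schemes..."), the sketch referenced there: a small convolutional block whose normalization layer can be switched between batch, group and layer normalization, which is the kind of swap that paper studies under input distribution shifts. The block layout and channel sizes are assumptions for illustration, not that paper's model.

```python
# Illustrative block (assumed architecture, not the cited paper's model) with a
# configurable normalization layer, so Batch/Group/Layer Normalization can be
# compared under shifted input distributions.
import torch
import torch.nn as nn

def make_norm(kind: str, channels: int) -> nn.Module:
    if kind == "batch":
        return nn.BatchNorm2d(channels)
    if kind == "group":
        return nn.GroupNorm(num_groups=8, num_channels=channels)
    if kind == "layer":
        # One group normalizes over all of (C, H, W): LayerNorm for feature maps.
        return nn.GroupNorm(num_groups=1, num_channels=channels)
    raise ValueError(f"unknown normalization: {kind}")

class ConvBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=32, norm="group"):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            make_norm(norm, out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(2, 1, 64, 64)             # toy batch of single-channel slices
print(ConvBlock(norm="layer")(x).shape)   # torch.Size([2, 32, 64, 64])
```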
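
For the implicit-neural-representation entry ("Single-subject Multi-contrast MRI Super-resolution..."), the coordinate-MLP sketch referenced there: a network that maps a 3D coordinate to one intensity per contrast and, once fitted to the observed voxels, can be sampled on a dense isotropic grid. The Fourier-feature lifting, layer sizes and two-channel output are assumptions for illustration, not the cited paper's architecture.

```python
# Hypothetical coordinate MLP illustrating the implicit-neural-representation
# idea: a continuous mapping from spatial position to two contrast intensities.
import torch
import torch.nn as nn

class TwoContrastINR(nn.Module):
    def __init__(self, hidden=256, n_freq=10):
        super().__init__()
        # Fixed random Fourier features lift (x, y, z) to a richer input space.
        self.register_buffer("B", torch.randn(3, n_freq) * 10.0)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),   # one intensity per contrast
        )

    def forward(self, xyz):             # xyz: (N, 3) coordinates in [-1, 1]
        proj = xyz @ self.B             # (N, n_freq)
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.mlp(feats)          # (N, 2) predicted intensities

# Fitting would minimise an L2 loss between each output channel and the voxels
# observed for the corresponding contrast; querying a dense grid afterwards
# yields the super-resolved volumes.
model = TwoContrastINR()
pred = model(torch.rand(1024, 3) * 2 - 1)   # (1024, 2)
```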