Single-subject Multi-contrast MRI Super-resolution via Implicit Neural
Representations
- URL: http://arxiv.org/abs/2303.15065v3
- Date: Fri, 5 Jan 2024 00:48:20 GMT
- Title: Single-subject Multi-contrast MRI Super-resolution via Implicit Neural
Representations
- Authors: Julian McGinnis, Suprosanna Shit, Hongwei Bran Li, Vasiliki
Sideri-Lampretsa, Robert Graf, Maik Dannecker, Jiazhen Pan, Nil Stolt Ansó,
Mark Mühlau, Jan S. Kirschke, Daniel Rueckert, Benedikt Wiestler
- Abstract summary: An Implicit Neural Representation (INR) is proposed that jointly learns two different contrasts from complementary views as a single continuous spatial function.
Our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets.
- Score: 9.683341998041634
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Clinical routine and retrospective cohorts commonly include multi-parametric
Magnetic Resonance Imaging; however, they are mostly acquired in different
anisotropic 2D views due to signal-to-noise-ratio and scan-time constraints.
Views acquired in this way suffer from poor out-of-plane resolution and affect
downstream volumetric image analysis that typically requires isotropic 3D
scans. Combining different views of multi-contrast scans into high-resolution
isotropic 3D scans is challenging due to the lack of a large training cohort,
which calls for a subject-specific framework. This work proposes a novel
solution to this problem leveraging Implicit Neural Representations (INR). Our
proposed INR jointly learns two different contrasts of complementary views in a
continuous spatial function and benefits from exchanging anatomical information
between them. Trained within minutes on a single commodity GPU, our model
provides realistic super-resolution across different pairs of contrasts in our
experiments with three datasets. Using Mutual Information (MI) as a metric, we
find that our model converges to an optimum MI amongst sequences, achieving
anatomically faithful reconstruction. Code is available at:
https://github.com/jqmcginnis/multi_contrast_inr/
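The abstract describes fitting one implicit network per subject so that two contrasts, each acquired in a complementary anisotropic view, are represented by a single continuous spatial function. The authors' implementation lives in the linked repository; the sketch below is only a generic illustration of that idea (plain ReLU MLP, MSE losses, made-up class and function names), not their code:

```python
import torch
import torch.nn as nn


class JointContrastINR(nn.Module):
    """Coordinate MLP mapping (x, y, z) in [-1, 1]^3 to two contrast
    intensities via a shared trunk and one head per contrast."""

    def __init__(self, hidden: int = 256, n_layers: int = 4):
        super().__init__()
        layers, dim = [], 3
        for _ in range(n_layers):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        self.trunk = nn.Sequential(*layers)
        self.head_a = nn.Linear(hidden, 1)  # e.g. T1w intensity
        self.head_b = nn.Linear(hidden, 1)  # e.g. FLAIR intensity

    def forward(self, coords: torch.Tensor):
        h = self.trunk(coords)
        return self.head_a(h), self.head_b(h)


def fit_single_subject(model, coords_a, vals_a, coords_b, vals_b,
                       steps: int = 2000, lr: float = 1e-4):
    """Each contrast supervises only its own head, at the voxel centres
    where that contrast was actually acquired (anisotropic grids)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred_a, _ = model(coords_a)
        _, pred_b = model(coords_b)
        loss = nn.functional.mse_loss(pred_a, vals_a) + \
               nn.functional.mse_loss(pred_b, vals_b)
        loss.backward()
        opt.step()
    return model

# Querying the fitted model on a dense isotropic grid yields a
# super-resolved volume for either contrast.
```

In practice a positional encoding or periodic activation would typically replace the plain ReLU trunk; the key structural point is the shared trunk with per-contrast heads, which is what allows anatomical information to be exchanged between the two sequences.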
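The abstract uses Mutual Information (MI) between the reconstructed contrasts as the convergence metric. The estimator below is a standard histogram-based MI for two co-registered volumes, included as an illustrative assumption rather than the paper's exact implementation:

```python
import numpy as np


def mutual_information(vol_a: np.ndarray, vol_b: np.ndarray, bins: int = 64) -> float:
    """Histogram-based MI (in nats) between two co-registered volumes."""
    joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()                 # joint intensity distribution
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of vol_a
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of vol_b
    nz = p_xy > 0                              # avoid log(0)
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))
```

Tracking this quantity over training iterations is one way to check that the joint representation settles at a stable MI between the sequences, which the abstract reports as evidence of anatomically faithful reconstruction.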
Related papers
- CycleINR: Cycle Implicit Neural Representation for Arbitrary-Scale Volumetric Super-Resolution of Medical Data [19.085329423308938]
CycleINR is a novel enhanced Implicit Neural Representation model for 3D medical data super-resolution.
We introduce a new metric, Slice-wise Noise Level Inconsistency (SNLI), to quantitatively assess inter-slice noise level inconsistency.
arXiv Detail & Related papers (2024-04-07T08:48:01Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Dual Arbitrary Scale Super-Resolution for Multi-Contrast MRI [23.50915512118989]
Multi-contrast Super-Resolution (SR) reconstruction promises to yield SR images of higher quality.
However, radiologists are accustomed to zooming in on MR images at arbitrary scales rather than using a fixed scale.
We propose an implicit neural representations based dual-arbitrary multi-contrast MRI super-resolution method, called Dual-ArbNet.
arXiv Detail & Related papers (2023-07-05T14:43:26Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts; a minimal sketch of such a swap appears after this list.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- DisC-Diff: Disentangled Conditional Diffusion Model for Multi-Contrast MRI Super-Resolution [8.721585866050757]
We propose a conditional diffusion model, DisC-Diff, for multi-contrast brain MRI super-resolution.
DisC-Diff estimates uncertainty in restorations effectively and ensures a stable optimization process.
We validated the effectiveness of DisC-Diff on two datasets: the IXI dataset, which contains 578 normal brains, and a clinical dataset with 316 pathological brains.
arXiv Detail & Related papers (2023-03-24T11:42:45Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Exploring Separable Attention for Multi-Contrast MR Image Super-Resolution [88.16655157395785]
We propose a separable attention network (comprising a priority attention and background separation attention) named SANet.
It can explore the foreground and background areas in the forward and reverse directions with the help of the auxiliary contrast.
It is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the foreground and background regions.
arXiv Detail & Related papers (2021-09-03T05:53:07Z)
- Joint Semi-supervised 3D Super-Resolution and Segmentation with Mixed Adversarial Gaussian Domain Adaptation [13.477290490742224]
Super-resolution in medical imaging aims to increase the resolution of images but is conventionally trained on features from low resolution datasets.
Here we propose a semi-supervised multi-task generative adversarial network (Gemini-GAN) that performs joint super-resolution of the images and their labels.
Our proposed approach is extensively evaluated on two transnational multi-ethnic populations of 1,331 and 205 adults respectively.
arXiv Detail & Related papers (2021-07-16T15:42:39Z)
- Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast [4.987889348212769]
We present SynthSR, a method to train a CNN that receives one or more thick-slice scans with different contrast, resolution and orientation.
The presented method does not require any preprocessing, e.g., skull stripping or bias field correction.
arXiv Detail & Related papers (2020-12-24T17:29:53Z)
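The normalization-robustness entry above ("On Sensitivity and Robustness of Normalization Schemes...") proposes Group Normalization and Layer Normalization as alternatives to batch statistics under input-distribution shift. The following is a minimal, hypothetical sketch of such a swap in a 2D convolutional block, not taken from that paper's code:

```python
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int, norm: str = "group") -> nn.Sequential:
    """Conv -> Norm -> ReLU, with the normalization layer made configurable."""
    if norm == "batch":
        norm_layer = nn.BatchNorm2d(out_ch)
    elif norm == "group":
        # out_ch must be divisible by num_groups
        norm_layer = nn.GroupNorm(num_groups=8, num_channels=out_ch)
    else:
        # GroupNorm with a single group normalizes over (C, H, W),
        # i.e. a LayerNorm-style alternative for conv features.
        norm_layer = nn.GroupNorm(num_groups=1, num_channels=out_ch)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        norm_layer,
        nn.ReLU(inplace=True),
    )
```

Because GroupNorm and LayerNorm do not depend on batch statistics, their behaviour is identical at training and test time, which is the property that entry argues helps against varying image artifacts.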
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.