Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs
- URL: http://arxiv.org/abs/2407.10888v1
- Date: Mon, 15 Jul 2024 16:38:59 GMT
- Title: Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs
- Authors: Leonardo Crespi, Samuele Camnasio, Damiano Dei, Nicola Lambri, Pietro Mancosu, Marta Scorsetti, Daniele Loiacono
- Abstract summary: We analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI.
Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities with and without contrast agents.
The results show how, depending on the input modalities, the models can have very different performances.
- Score: 1.779948689352186
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many clinical settings, the use of both Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) is necessary to pursue a thorough understanding of the patient's anatomy and to plan a suitable therapeutic strategy; this is often the case in MRI-based radiotherapy, where CT is always necessary to prepare the dose delivery, as it provides the essential information about the radiation absorption properties of the tissues. Sometimes, MRI is preferred to contour the target volumes. However, this approach is often not the most efficient, as it is more expensive, time-consuming and, most importantly, stressful for the patients. To overcome this issue, in this work we analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI, leveraging the power of Generative Adversarial Networks (GANs) and, in particular, the CycleGAN architecture, which works in an unsupervised manner and does not require paired images (none were available). Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities, with and without contrast agents. To overcome the lack of a ground truth, distribution-based metrics were used to assess the models' performance quantitatively, together with a qualitative evaluation in which physicians were asked to differentiate between real and synthetic images, to gauge how realistic the generated images were. The results show how, depending on the input modalities, the models can have very different performances; however, the models with the best quantitative results, according to the distribution-based metrics used, can generate images that are very difficult to distinguish from the real ones, even for physicians, demonstrating the approach's potential.
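As a reference for the approach described above, the following is a minimal sketch of the standard CycleGAN objective that unpaired MRI-to-CT translation builds on. The generator/discriminator modules, the least-squares adversarial loss, and the cycle weight lambda_cyc = 10 are assumptions taken from the original CycleGAN formulation, not the configurations evaluated in this paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of the CycleGAN objective for unpaired MRI -> CT
# translation. G_mr2ct / G_ct2mr and the discriminators are placeholder
# modules; the paper's actual network configurations are not reproduced.
adv_loss = nn.MSELoss()   # least-squares GAN loss, common in CycleGAN
cyc_loss = nn.L1Loss()    # cycle-consistency loss
lambda_cyc = 10.0         # assumed weighting, per the original CycleGAN paper

def generator_step(G_mr2ct, G_ct2mr, D_ct, D_mr, real_mr, real_ct):
    """One generator update on an unpaired (MRI, CT) batch."""
    fake_ct = G_mr2ct(real_mr)   # MRI -> synthetic CT
    fake_mr = G_ct2mr(real_ct)   # CT  -> synthetic MRI

    # Adversarial terms: fool each discriminator into scoring fakes as real.
    pred_ct = D_ct(fake_ct)
    pred_mr = D_mr(fake_mr)
    loss_adv = adv_loss(pred_ct, torch.ones_like(pred_ct)) \
             + adv_loss(pred_mr, torch.ones_like(pred_mr))

    # Cycle terms: translating there and back must reconstruct the input,
    # which is what lets training proceed without paired scans.
    loss_cyc = cyc_loss(G_ct2mr(fake_ct), real_mr) \
             + cyc_loss(G_mr2ct(fake_mr), real_ct)

    return loss_adv + lambda_cyc * loss_cyc
```

The cycle-consistency term is what removes the need for paired scans: a slice translated to CT and back must reconstruct the original MRI, constraining the mapping even though no ground-truth CT exists for any given MRI.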
Related papers
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Inference is also 1,400x faster than with diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
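For context on the compressed-sensing entry above, this is a hedged sketch of the retrospective k-space undersampling forward model that such reconstruction networks invert; the Cartesian 4x mask is purely illustrative and is not the undersampling pattern studied in that paper.

```python
import torch

def undersample(image, mask):
    """Retrospective Cartesian undersampling: the forward model that
    compressed-sensing MRI reconstruction inverts.

    image: real or complex tensor (H, W); mask: binary tensor (H, W)
    defined in centered (fftshift-ed) k-space.
    """
    kspace = torch.fft.fftshift(torch.fft.fft2(image))   # full k-space
    kspace_us = kspace * mask                             # drop unmeasured lines
    zero_filled = torch.fft.ifft2(torch.fft.ifftshift(kspace_us))
    return kspace_us, zero_filled.abs()                   # aliased starting point

# Illustrative 4x line mask: keep every 4th phase-encode line.
H, W = 256, 256
mask = torch.zeros(H, W)
mask[::4, :] = 1.0
```

The zero-filled inverse FFT gives the aliased image that a reconstruction model (a neural operator, in the entry above) maps back to a high-fidelity image.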
- Enhanced Synthetic MRI Generation from CT Scans Using CycleGAN with Feature Extraction [3.2088888904556123]
We propose an approach for enhanced monomodal registration using synthetic MRI images from CT scans.
Our methodology shows promising results, outperforming several state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T16:39:56Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to make model performance robust to varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
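A minimal sketch of the normalization swap described in the entry above, assuming a PyTorch CNN; the block structure and group count are illustrative, not the configuration from that paper.

```python
import torch.nn as nn

# Batch statistics drift when test-time artifacts shift the input
# distribution; GroupNorm normalizes per sample, so it is insensitive
# to such train/test distribution mismatch.
def conv_block(in_ch, out_ch, groups=8):
    # out_ch must be divisible by `groups`
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.GroupNorm(groups, out_ch),  # drop-in replacement for nn.BatchNorm2d(out_ch)
        nn.ReLU(inplace=True),
    )
```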
- Conversion Between CT and MRI Images Using Diffusion and Score-Matching Models [7.745729132928934]
We propose to use an emerging class of deep learning methods, diffusion and score-matching models.
Our results show that the diffusion and score-matching models generate better synthetic CT images than the CNN and GAN models.
Our study suggests that diffusion and score-matching models are powerful tools for generating high-quality images conditioned on an image obtained with a complementary imaging modality.
arXiv Detail & Related papers (2022-09-24T23:50:54Z)
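To make the diffusion/score-matching entry above concrete, here is a hedged sketch of a generic denoising score-matching loss; the noise schedule, the loss weighting across noise levels, and the conditioning on the complementary modality used in the actual paper are all omitted.

```python
import torch

def dsm_loss(score_net, x, sigma):
    """Denoising score matching: train score_net to estimate the score of
    the Gaussian-perturbed data, i.e. to point noisy samples back toward
    clean ones. Weightings across noise levels vary by formulation."""
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    target = -noise / sigma          # score of the perturbation kernel
    pred = score_net(x_noisy, sigma)
    return ((pred - target) ** 2).mean()
```

At sampling time, the learned score drives an iterative denoising process (e.g. annealed Langevin dynamics), optionally conditioned on the source-modality image to produce the synthetic target image.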
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
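The deep unfolding idea in the MGDUN entry above can be sketched generically: each network stage alternates a data-consistency gradient step on the observation model y = A x with a small learned network acting as an image prior. The operators A, At and the per-stage CNNs below are placeholders, not MGDUN's actual design.

```python
import torch.nn as nn

class UnfoldingNet(nn.Module):
    """Generic deep unfolding: K iterations of gradient descent on
    ||A x - y||^2 interleaved with learned proximal/denoising steps."""
    def __init__(self, A, At, prox_nets, step=0.1):
        super().__init__()
        self.A, self.At = A, At                # forward operator and its adjoint
        self.prox = nn.ModuleList(prox_nets)   # one small CNN per unfolded stage
        self.step = step                       # gradient step size

    def forward(self, y, x0):
        x = x0
        for prox in self.prox:
            x = x - self.step * self.At(self.A(x) - y)  # data consistency
            x = prox(x)                                 # learned image prior
        return x
```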
- MIST GAN: Modality Imputation Using Style Transfer for MRI [0.49172272348627766]
We formulate generating the missing MR modality from existing MR modalities as an imputation problem using style transfer.
With a multiple-to-one mapping, we design a network that accommodates domain-specific styles when generating the target image.
Our model is tested on the BraTS'18 dataset and the results are observed to be on par with the state-of-the-art in terms of visual metrics.
arXiv Detail & Related papers (2022-02-21T17:50:40Z)
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with a feature disentanglement strategy.
The proposed approach decomposes each input modality into modality-invariant space with shared information and modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
arXiv Detail & Related papers (2021-05-06T17:22:22Z)
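A minimal sketch of the shared/specific decomposition described in the entry above; the encoder and decoder modules and the simple mean fusion are placeholders standing in for the paper's architecture and its local adaptive fusion (LAF) module.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Split one input modality into a modality-invariant (shared) code
    and a modality-specific code (placeholder sub-networks)."""
    def __init__(self, enc_shared, enc_specific):
        super().__init__()
        self.enc_shared = enc_shared
        self.enc_specific = enc_specific

    def forward(self, x):
        return self.enc_shared(x), self.enc_specific(x)

def synthesize(decoder, shared_codes, pseudo_specific):
    # Fuse the shared codes from the available modalities, append a
    # pseudo-specific code for the missing target modality, and decode.
    fused = torch.stack(shared_codes).mean(dim=0)
    return decoder(torch.cat([fused, pseudo_specific], dim=1))
```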
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
- Structurally aware bidirectional unpaired image to image translation between CT and MR [0.14788776577018314]
Deep learning techniques enable image-to-image translation between multiple imaging modalities.
These techniques can support surgical planning under CT with feedback from MRI information.
arXiv Detail & Related papers (2020-06-05T11:21:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.