Structurally aware bidirectional unpaired image to image translation
between CT and MR
- URL: http://arxiv.org/abs/2006.03374v1
- Date: Fri, 5 Jun 2020 11:21:56 GMT
- Title: Structurally aware bidirectional unpaired image to image translation
between CT and MR
- Authors: Vismay Agrawal, Avinash Kori, Vikas Kumar Anand, and Ganapathy
Krishnamurthi
- Abstract summary: Deep learning techniques can enable image-to-image translation between multiple imaging modalities.
These techniques can support surgical planning under CT with feedback from MRI information.
- Score: 0.14788776577018314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic Resonance (MR) Imaging and Computed Tomography (CT) are the primary
diagnostic imaging modalities frequently used for surgical planning and analysis. A general
problem with medical imaging is that the acquisition process is expensive and time-consuming.
Deep learning techniques such as generative adversarial networks (GANs) enable image-to-image
translation between multiple imaging modalities, which in turn saves time and cost. These
techniques can support surgical planning under CT with feedback from MRI information. While
previous studies have shown paired and unpaired image synthesis from MR to CT, image synthesis
from CT to MR remains a challenge, since it involves the addition of extra tissue information.
In this manuscript, we implement two variations of Generative Adversarial Networks that exploit
cycle consistency and structural similarity between the CT and MR image modalities on a pelvis
dataset, thus facilitating a bidirectional exchange of content and style between these
modalities. The proposed GANs translate the input medical images by different mechanisms;
hence the generated images not only appear realistic but also perform well across various
comparison metrics, and they have been cross-verified by a radiologist. This verification
showed that, although the generated MR and CT images differ slightly from their true
counterparts, they can be used for medical purposes.
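To make the stated objective concrete, below is a minimal PyTorch sketch of a cycle-consistency loss augmented with a structural-similarity (SSIM) term, in the spirit of the approach described in the abstract. It is not the authors' released code: the generator and discriminator handles (G_ct2mr, G_mr2ct, D_mr, D_ct), the loss weights, and the choice to compute SSIM between each input and its translation are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of an adversarial +
# cycle-consistency + SSIM objective for bidirectional CT <-> MR translation.
import torch
import torch.nn.functional as F


def gaussian_window(size: int = 11, sigma: float = 1.5) -> torch.Tensor:
    """2D Gaussian window used to compute local SSIM statistics."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)[None, None]  # shape (1, 1, size, size)


def ssim(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Mean SSIM between two batches of single-channel images in [0, 1]."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    w = gaussian_window().to(x.device)
    mu_x = F.conv2d(x, w, padding=5)
    mu_y = F.conv2d(y, w, padding=5)
    var_x = F.conv2d(x * x, w, padding=5) - mu_x ** 2
    var_y = F.conv2d(y * y, w, padding=5) - mu_y ** 2
    cov_xy = F.conv2d(x * y, w, padding=5) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()


def generator_loss(G_ct2mr, G_mr2ct, D_mr, D_ct, real_ct, real_mr,
                   lam_cyc: float = 10.0, lam_ssim: float = 1.0) -> torch.Tensor:
    """Adversarial + cycle-consistency + SSIM terms for both directions."""
    fake_mr = G_ct2mr(real_ct)   # CT -> MR translation
    fake_ct = G_mr2ct(real_mr)   # MR -> CT translation
    cyc_ct = G_mr2ct(fake_mr)    # CT -> MR -> CT round trip
    cyc_mr = G_ct2mr(fake_ct)    # MR -> CT -> MR round trip

    # Least-squares adversarial terms (one common GAN formulation).
    adv = ((D_mr(fake_mr) - 1) ** 2).mean() + ((D_ct(fake_ct) - 1) ** 2).mean()
    # L1 cycle consistency: each image should survive a round trip.
    cyc = F.l1_loss(cyc_ct, real_ct) + F.l1_loss(cyc_mr, real_mr)
    # Structural term: the translation should preserve the source anatomy.
    struct = (1.0 - ssim(fake_mr, real_ct)) + (1.0 - ssim(fake_ct, real_mr))
    return adv + lam_cyc * cyc + lam_ssim * struct
```

The SSIM term is what distinguishes this sketch from a plain CycleGAN objective: the L1 cycle loss only constrains the round trip, while the structural term directly penalizes anatomical drift between each input and its single-pass translation.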
Related papers
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs [1.779948689352186]
We analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI.
Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities with and without contrast agents.
The results show how, depending on the input modalities, the models can perform very differently.
arXiv Detail & Related papers (2024-07-15T16:38:59Z)
- Enhancing CT Image synthesis from multi-modal MRI data based on a multi-task neural network framework [16.864720020158906]
We propose a versatile multi-task neural network framework, based on an enhanced Transformer U-Net architecture.
We decompose the traditional problem of synthesizing CT images into distinct subtasks.
To enhance the framework's versatility in handling multi-modal data, we expand the model with multiple image channels.
arXiv Detail & Related papers (2023-12-13T18:22:38Z)
- Enhanced Synthetic MRI Generation from CT Scans Using CycleGAN with Feature Extraction [3.2088888904556123]
We propose an approach for enhanced monomodal registration using synthetic MRI images from CT scans.
Our methodology shows promising results, outperforming several state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T16:39:56Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Context-Aware Transformers For Spinal Cancer Detection and Radiological Grading [70.04389979779195]
This paper proposes a novel transformer-based model architecture for medical imaging problems involving analysis of vertebrae.
It considers two applications of such models in MR images: (a) detection of spinal metastases and the related conditions of vertebral fractures and metastatic cord compression.
We show that by considering the context of vertebral bodies in the image, SCT improves the accuracy for several gradings compared to a previously published model.
arXiv Detail & Related papers (2022-06-27T10:31:03Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Bridging the gap between paired and unpaired medical image translation [12.28777883776042]
We introduce modified pix2pix models for the tasks CT→MR and MR→CT, trained with unpaired CT and MR data, and with MRCAT pairs generated from the MR scans.
The proposed modifications use the paired MR and MRCAT images to ensure good alignment between input and translated images, while the unpaired CT images ensure that the MR→CT model produces realistic-looking CT and the CT→MR model works well with real CT as input.
arXiv Detail & Related papers (2021-10-15T23:15:12Z)
- Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without groundtruth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.