DC-cycleGAN: Bidirectional CT-to-MR Synthesis from Unpaired Data
- URL: http://arxiv.org/abs/2211.01293v1
- Date: Wed, 2 Nov 2022 17:16:28 GMT
- Title: DC-cycleGAN: Bidirectional CT-to-MR Synthesis from Unpaired Data
- Authors: Jiayuan Wang, Q. M. Jonathan Wu, Farhad Pourpanah
- Abstract summary: We propose a bidirectional learning model, denoted dual contrast cycleGAN (DC-cycleGAN), to synthesize medical images from unpaired data.
The experimental results indicate that DC-cycleGAN is able to produce promising results as compared with other cycleGAN-based medical image synthesis methods.
- Score: 22.751911825379626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic resonance (MR) and computed tomography (CT) images are two
typical types of medical images that provide mutually complementary information
for accurate clinical diagnosis and treatment. However, obtaining both images
may be limited by considerations such as cost, radiation dose and missing
modalities. Recently, medical image synthesis has attracted growing research
interest as a way to cope with this limitation. In this paper, we propose a
bidirectional learning model, denoted dual contrast cycleGAN (DC-cycleGAN), to
synthesize medical images from unpaired data. Specifically, a dual contrast
loss is introduced into the discriminators to indirectly build constraints
between MR and CT images by taking advantage of samples from the source domain
as negative samples, enforcing the synthetic images to fall far away from the
source domain. In addition, cross entropy and the structural similarity index
(SSIM) are integrated into the cycleGAN so that both the luminance and the
structure of samples are considered when synthesizing images. The experimental
results indicate that DC-cycleGAN produces promising results compared with
other cycleGAN-based medical image synthesis methods such as cycleGAN, RegGAN,
DualGAN and NiceGAN. The code will be available at
https://github.com/JiayuanWang-JW/DC-cycleGAN.
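The two ingredients named in the abstract can be illustrated with a minimal numpy sketch: a global SSIM term (luminance and structure) and a cross-entropy discriminator loss that additionally treats real source-domain samples as negatives (the dual contrast idea). This is a hedged illustration, not the paper's implementation; the function names, the single-window SSIM, and the equal weighting of the three loss terms are assumptions for clarity.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Global SSIM between two images scaled to [0, 1]
    (one window over the whole image, for illustration)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def dual_contrast_disc_loss(d_real, d_fake, d_source, eps=1e-7):
    """Binary cross-entropy discriminator loss in which real
    source-domain samples are used as extra negatives, so the
    discriminator also pushes synthetic images away from the
    source domain (a sketch of the dual contrast loss)."""
    d_real, d_fake, d_source = (
        np.clip(t, eps, 1 - eps) for t in (d_real, d_fake, d_source)
    )
    real_term = -np.log(d_real).mean()        # real targets -> 1
    fake_term = -np.log(1 - d_fake).mean()    # synthetic images -> 0
    source_term = -np.log(1 - d_source).mean()  # source-domain images -> 0
    return real_term + fake_term + source_term
```

A confident discriminator (scores near 1 on real targets, near 0 on both synthetic and source-domain images) yields a small loss, while uniform 0.5 scores yield a larger one; `ssim(x, x)` evaluates to 1 for any image.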
Related papers
- Deformation-aware GAN for Medical Image Synthesis with Substantially Misaligned Pairs [0.0]
We propose a novel Deformation-aware GAN (DA-GAN) to dynamically correct the misalignment during the image synthesis based on inverse consistency.
Experimental results show that DA-GAN achieved superior performance on a public dataset with simulated misalignments and a real-world lung MRI-CT dataset with respiratory motion misalignment.
arXiv Detail & Related papers (2024-08-18T10:29:35Z) - Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z) - An Attentive-based Generative Model for Medical Image Synthesis [18.94900480135376]
We propose an attention-based dual contrast generative model, called ADC-cycleGAN, which can synthesize medical images from unpaired data with multiple slices.
The model integrates a dual contrast loss term with the CycleGAN loss to ensure that the synthesized images are distinguishable from the source domain.
Experimental results demonstrate that the proposed ADC-cycleGAN model produces comparable samples to other state-of-the-art generative models.
arXiv Detail & Related papers (2023-06-02T14:17:37Z) - Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as iterations increase.
arXiv Detail & Related papers (2023-05-04T12:20:51Z) - High-fidelity Direct Contrast Synthesis from Magnetic Resonance
Fingerprinting [28.702553164811473]
We propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation.
In-vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually as well as by quantitative metrics.
arXiv Detail & Related papers (2022-12-21T07:11:39Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for
Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for
Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z) - Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR
Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z) - Structurally aware bidirectional unpaired image to image translation
between CT and MR [0.14788776577018314]
Deep learning techniques can help us to leverage the possibility of an image to image translation between multiple imaging modalities.
These techniques will help conduct surgical planning under CT guidance with feedback from MRI information.
arXiv Detail & Related papers (2020-06-05T11:21:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences.