Bridging the gap between paired and unpaired medical image translation
- URL: http://arxiv.org/abs/2110.08407v1
- Date: Fri, 15 Oct 2021 23:15:12 GMT
- Title: Bridging the gap between paired and unpaired medical image translation
- Authors: Pauliina Paavilainen, Saad Ullah Akram, Juho Kannala
- Abstract summary: We introduce modified pix2pix models for the tasks CT$\rightarrow$MR and MR$\rightarrow$CT, trained with unpaired CT and MR data, and MRCAT pairs generated from the MR scans.
The proposed modifications utilize the paired MR and MRCAT images to ensure good alignment between input and translated images, while unpaired CT images ensure that the MR$\rightarrow$CT model produces realistic-looking CT and that the CT$\rightarrow$MR model works well with real CT as input.
- Score: 12.28777883776042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image translation has the potential to reduce the imaging workload,
by removing the need to capture some sequences, and to reduce the annotation
burden for developing machine learning methods. GANs have been used
successfully to translate images from one domain to another, such as MR to CT.
At present, paired data (registered MR and CT images) or extra supervision
(e.g. segmentation masks) is needed to learn good translation models.
Registering multiple modalities or annotating structures within each of them is
a tedious and laborious task. Thus, there is a need to develop improved
translation methods for unpaired data. Here, we introduce modified pix2pix
models for tasks CT$\rightarrow$MR and MR$\rightarrow$CT, trained with unpaired
CT and MR data, and MRCAT pairs generated from the MR scans. The proposed
modifications utilize the paired MR and MRCAT images to ensure good alignment
between input and translated images, and unpaired CT images ensure that the
MR$\rightarrow$CT model produces realistic-looking CT and the CT$\rightarrow$MR
model works well with real CT as input. The proposed pix2pix variants
outperform baseline pix2pix, pix2pixHD and CycleGAN in terms of FID and KID,
and generate more realistic-looking CT and MR translations.
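The modified pix2pix objective described above can be sketched as two terms: an adversarial loss computed against unpaired real CT images, which keeps translations realistic, and a pixel-wise L1 loss against the paired MRCAT target, which keeps translations aligned with their inputs. The following is a minimal illustrative sketch, not the paper's implementation; the function names, the least-squares GAN form of the adversarial term, and the weighting `lambda_l1=100.0` are assumptions borrowed from standard pix2pix practice.

```python
import numpy as np

def l1_alignment_loss(translated, mrcat_target):
    """Pixel-wise L1 between a translated image and its paired MRCAT target."""
    return np.mean(np.abs(translated - mrcat_target))

def lsgan_generator_loss(disc_scores_on_fake):
    """Least-squares GAN generator loss: push the discriminator's
    scores on translated (fake) patches toward 1 (i.e. 'real')."""
    return np.mean((disc_scores_on_fake - 1.0) ** 2)

def combined_generator_loss(translated, mrcat_target, disc_scores_on_fake,
                            lambda_l1=100.0):
    """pix2pix-style weighting: adversarial realism on unpaired CT
    plus a heavily weighted L1 alignment term on the MRCAT pair."""
    return (lsgan_generator_loss(disc_scores_on_fake)
            + lambda_l1 * l1_alignment_loss(translated, mrcat_target))

# Toy 2x2 "images" and discriminator scores, purely for illustration.
fake_ct = np.array([[0.2, 0.4], [0.6, 0.8]])
mrcat = np.array([[0.2, 0.5], [0.6, 0.7]])
d_scores = np.array([0.9, 0.8])
loss = combined_generator_loss(fake_ct, mrcat, d_scores)
```

The high L1 weight reflects the usual pix2pix trade-off: the adversarial term alone would allow realistic but misaligned outputs, while the L1 term alone would produce aligned but blurry ones.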
Related papers
- Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation [8.094450260464354]
This retrospective study involved translating T1w and T2w MR image series into CT images in a total of n=263 pairs of CT/MR series.
Two landmarks per vertebra registration enabled paired image-to-image translation from MR to CT and outperformed all unpaired approaches.
arXiv Detail & Related papers (2023-08-18T07:07:15Z)
- Recurrence With Correlation Network for Medical Image Registration [66.63200823918429]
We present Recurrence with Correlation Network (RWCNet), a medical image registration network with multi-scale features and a cost volume layer.
We demonstrate that these architectural features improve medical image registration accuracy in two image registration datasets.
arXiv Detail & Related papers (2023-02-05T02:41:46Z)
- Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis [52.41439725865149]
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones.
Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model.
We propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis.
arXiv Detail & Related papers (2022-12-02T11:40:40Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT) [2.7298989068857487]
Self-supervised learning has demonstrated success in medical image segmentation using convolutional networks.
We show our approach is more accurate and requires fewer fine-tuning datasets than other pretext tasks.
arXiv Detail & Related papers (2022-05-20T17:55:14Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
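The cycle-consistency idea behind CyTran-style unpaired translation can be sketched as follows: map a contrast CT scan to a non-contrast one and back, then penalize the round-trip reconstruction error so the mapping preserves anatomy without requiring paired data. The generators below are trivial placeholders for illustration only, not the paper's transformer model.

```python
import numpy as np

def cycle_consistency_loss(x, g_forward, g_backward):
    """L1 distance between x and its round-trip reconstruction
    g_backward(g_forward(x)); zero when the cycle is perfect."""
    return np.mean(np.abs(g_backward(g_forward(x)) - x))

# Toy 4x4 "scan" and placeholder generators (stand-ins for the
# contrast -> non-contrast mapping and its reverse).
scan = np.linspace(0.0, 1.0, 16).reshape(4, 4)
shift = lambda img: img + 0.1
unshift = lambda img: img - 0.1
loss = cycle_consistency_loss(scan, shift, unshift)
```

In an actual training loop this term is applied in both directions (contrast→non-contrast→contrast and vice versa) alongside adversarial losses on each domain.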
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Unpaired cross-modality educed distillation (CMEDL) applied to CT lung tumor segmentation [4.409836695738518]
We develop a new cross-modality educed distillation (CMEDL) approach, using unpaired CT and MRI scans.
Our framework uses an end-to-end trained unpaired I2I translation, teacher, and student segmentation networks.
arXiv Detail & Related papers (2021-07-16T15:58:15Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to model correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Adversarial Uni- and Multi-modal Stream Networks for Multimodal Image Registration [20.637787406888478]
Deformable image registration between Computed Tomography (CT) and Magnetic Resonance (MR) images is essential for many image-guided therapies.
In this paper, we propose a novel translation-based unsupervised deformable image registration method.
Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.
arXiv Detail & Related papers (2020-07-06T14:44:06Z)
- Structurally aware bidirectional unpaired image to image translation between CT and MR [0.14788776577018314]
Deep learning techniques enable image-to-image translation between multiple imaging modalities.
These techniques can support surgical planning under CT with the feedback of MRI information.
arXiv Detail & Related papers (2020-06-05T11:21:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.