Flow-based Deformation Guidance for Unpaired Multi-Contrast MRI
Image-to-Image Translation
- URL: http://arxiv.org/abs/2012.01777v1
- Date: Thu, 3 Dec 2020 09:10:22 GMT
- Title: Flow-based Deformation Guidance for Unpaired Multi-Contrast MRI
Image-to-Image Translation
- Authors: Toan Duc Bui, Manh Nguyen, Ngan Le, Khoa Luu
- Abstract summary: In this paper, we introduce a novel approach to unpaired image-to-image translation based on an invertible architecture.
We utilize the temporal information between consecutive slices to provide additional constraints on the optimization when transforming one domain to another in unpaired medical images.
- Score: 7.8333615755210175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image synthesis from corrupted contrasts increases the diversity of
diagnostic information available for many neurological diseases. Recently,
image-to-image translation has attracted significant interest within medical
research, from the successful use of the Generative Adversarial Network (GAN)
to the introduction of cyclic constraints extended to multiple domains.
However, in current approaches there is no guarantee that the mapping between
the two image domains is unique or one-to-one. In this paper, we introduce a
novel approach to unpaired image-to-image translation based on an invertible
architecture. The invertible property of the flow-based architecture assures
cycle-consistency of image-to-image translation without additional loss
functions. We utilize the temporal information between consecutive slices to
provide additional constraints on the optimization for transforming one domain
to another in unpaired volumetric medical images. To capture temporal
structure in the medical images, we estimate the displacement between
consecutive slices using a deformation field. In our approach, the deformation
field is used as guidance to keep the translated slices realistic and
consistent across the translation. Experimental results show that the images
synthesized with our approach achieve competitive performance in terms of mean
squared error, peak signal-to-noise ratio, and structural similarity index
compared with existing deep learning-based methods on three standard datasets,
i.e., HCP, MRBrainS13, and BraTS 2019.
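The abstract describes two mechanisms: an invertible (flow-based) mapping whose analytic inverse gives cycle-consistency without an extra loss, and a deformation field between consecutive slices that keeps the translated volume temporally consistent. The following PyTorch sketch illustrates both ideas under stated assumptions; it is not the authors' implementation, and names such as AffineCoupling, warp, deformation_guidance_loss, and the (dx, dy) displacement-field convention are illustrative choices only.

```python
# Minimal sketch (not the paper's code): an affine coupling layer is exactly
# invertible, so translating A -> B and applying the inverse recovers the
# input without a cycle-consistency loss. The guidance loss warps the
# translated previous slice with a given displacement field and penalizes
# disagreement with the translated current slice.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffineCoupling(nn.Module):
    """RealNVP-style coupling; assumes an even channel count (e.g. after a squeeze step)."""

    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),       # predicts scale and shift
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(torch.tanh(log_s)) + t       # invertible affine transform
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-torch.tanh(log_s))    # exact analytic inverse
        return torch.cat([y1, x2], dim=1)


def warp(image, flow):
    """Warp image (N, C, H, W) with a dense displacement field flow (N, 2, H, W), ordered (dx, dy)."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(image.device)   # base pixel coordinates
    coords = grid.unsqueeze(0) + flow                              # displaced coordinates
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0                  # normalize to [-1, 1]
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack([coords_x, coords_y], dim=-1)        # (N, H, W, 2) for grid_sample
    return F.grid_sample(image, sample_grid, align_corners=True)


def deformation_guidance_loss(translate, slice_prev, slice_curr, flow_prev_to_curr):
    """Translated current slice should agree with the warped translation of the previous slice."""
    warped_prev = warp(translate(slice_prev), flow_prev_to_curr)
    return F.l1_loss(translate(slice_curr), warped_prev)
```

A full flow-based translator would stack several such coupling layers with channel permutations and squeeze operations, and the displacement field would typically come from a registration step between adjacent slices; the loss above only sketches how such a field could guide the translated slices to stay consistent across the volume.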
Related papers
- Anatomical Conditioning for Contrastive Unpaired Image-to-Image Translation of Optical Coherence Tomography Images [0.0]
We study the problem employing an optical coherence tomography (OCT) dataset of Spectralis-OCT and Home-OCT images.
I2I translation is challenging because the images are unpaired.
Our approach increases the similarity between the style-translated images and the target distribution.
arXiv Detail & Related papers (2024-04-08T11:20:28Z) - Learning to Exploit Temporal Structure for Biomedical Vision-Language
Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z) - Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z) - Unsupervised Medical Image Translation with Adversarial Diffusion Models [0.2770822269241974]
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols.
Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation.
arXiv Detail & Related papers (2022-07-17T15:53:24Z) - Unsupervised Multi-Modal Medical Image Registration via
Discriminator-Free Image-to-Image Translation [4.43142018105102]
We propose a novel translation-based unsupervised deformable image registration approach to convert the multi-modal registration problem to a mono-modal one.
Our approach incorporates a discriminator-free translation network to facilitate the training of the registration network and a patchwise contrastive loss to encourage the translation network to preserve object shapes.
arXiv Detail & Related papers (2022-04-28T17:18:21Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for
Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, CyTran for short.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z) - Smoothing the Disentangled Latent Style Space for Unsupervised
Image-to-Image Translation [56.55178339375146]
Image-to-Image (I2I) multi-domain translation models are usually also evaluated using the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
arXiv Detail & Related papers (2021-06-16T17:58:21Z) - The Spatially-Correlative Loss for Various Image Translation Tasks [69.62228639870114]
We propose a novel spatially-correlative loss that is simple, efficient and yet effective for preserving scene structure consistency.
Previous methods attempt this by using pixel-level cycle-consistency or feature-level matching losses.
We show distinct improvement over baseline models in all three modes of unpaired I2I translation: single-modal, multi-modal, and even single-image translation.
arXiv Detail & Related papers (2021-04-02T02:13:30Z) - Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain
Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z) - Segmentation-Renormalized Deep Feature Modulation for Unpaired Image
Harmonization [0.43012765978447565]
Cycle-consistent Generative Adversarial Networks have been used to harmonize image sets between a source and target domain.
These methods are prone to instability, contrast inversion, intractable manipulation of pathology, and steganographic mappings which limit their reliable adoption in real-world medical imaging.
We propose a segmentation-renormalized image translation framework to reduce inter-scanner heterogeneity while preserving anatomical layout.
arXiv Detail & Related papers (2021-02-11T23:53:51Z) - Image-to-image Mapping with Many Domains by Sparse Attribute Transfer [71.28847881318013]
Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
Current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
arXiv Detail & Related papers (2020-06-23T19:52:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.