MI^2GAN: Generative Adversarial Network for Medical Image Domain
Adaptation using Mutual Information Constraint
- URL: http://arxiv.org/abs/2007.11180v2
- Date: Thu, 30 Jul 2020 07:57:03 GMT
- Title: MI^2GAN: Generative Adversarial Network for Medical Image Domain
Adaptation using Mutual Information Constraint
- Authors: Xinpeng Xie, Jiawei Chen, Yuexiang Li, Linlin Shen, Kai Ma and Yefeng
Zheng
- Abstract summary: We propose a novel GAN to preserve image contents during cross-domain I2I translation.
In particular, we disentangle the content features from domain information for both the source and translated images.
The proposed MI$^2$GAN is evaluated on two tasks: polyp segmentation in colonoscopic images and segmentation of the optic disc and cup in fundus images.
- Score: 47.07869311690419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain shift between medical images acquired at different centres
remains an open problem for the community, as it degrades the generalization
performance of deep learning models. Generative adversarial networks (GANs),
which synthesize plausible images, are a potential solution to this problem.
However, existing GAN-based approaches are prone to failing to preserve image
objects during image-to-image (I2I) translation, which limits their
practicality for domain adaptation tasks. In this paper, we propose a novel
GAN (namely MI$^2$GAN) that maintains image contents during cross-domain I2I
translation. In particular, we disentangle the content features from the
domain information for both the source and translated images, and then
maximize the mutual information between the disentangled content features to
preserve the image objects. The proposed MI$^2$GAN is evaluated on two tasks:
polyp segmentation in colonoscopic images and segmentation of the optic disc
and cup in fundus images. The experimental results demonstrate that MI$^2$GAN
not only generates realistic translated images but also significantly improves
the generalization performance of widely used deep learning networks (e.g.,
U-Net).
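The abstract gives no implementation, but its central constraint (maximizing the mutual information between the disentangled content features of the source and translated images) can be illustrated with an InfoNCE-style lower bound, a common neural MI estimator. The PyTorch sketch below is a minimal illustration under that assumption; the bilinear critic, feature dimension, and loss weighting are hypothetical and not taken from MI$^2$GAN.

```python
import torch
import torch.nn.functional as F


class BilinearCritic(torch.nn.Module):
    """Scores every (source, translated) feature pairing: f(x, y) = x^T W y.
    A bilinear critic is one common choice; purely illustrative here."""

    def __init__(self, dim):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(dim, dim) * dim ** -0.5)

    def forward(self, x, y):
        return x @ self.W @ y.t()  # (B, B) score matrix


def infonce_mi_lower_bound(content_src, content_trans, critic):
    """InfoNCE lower bound on the mutual information between two batches of
    content features (B x D). Pairs sharing a batch index are positives;
    all other pairings in the batch serve as negatives."""
    scores = critic(content_src, content_trans)              # (B, B)
    labels = torch.arange(scores.size(0), device=scores.device)
    return -F.cross_entropy(scores, labels)                  # higher = more MI


# Toy usage with hypothetical 256-dim content features from two encoders.
critic = BilinearCritic(256)
c_src, c_trans = torch.randn(8, 256), torch.randn(8, 256)
mi_bound = infonce_mi_lower_bound(c_src, c_trans, critic)
# total_loss = gan_loss + cycle_loss - lam * mi_bound  # lam: hypothetical weight
```

In training, one would typically subtract a weighted version of this bound from the usual adversarial and cycle-consistency losses, so that minimizing the total loss pushes the mutual information between content features up.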
Related papers
- I2I-Galip: Unsupervised Medical Image Translation Using Generative Adversarial CLIP [30.506544165999564]
Unpaired image-to-image translation is a challenging task due to the absence of paired examples.
We propose a new image-to-image translation framework named Image-to-Image-Generative-Adversarial-CLIP (I2I-Galip).
arXiv Detail & Related papers (2024-09-19T01:44:50Z)
- The Dawn of KAN in Image-to-Image (I2I) Translation: Integrating Kolmogorov-Arnold Networks with GANs for Unpaired I2I Translation [0.0]
The Kolmogorov-Arnold Network (KAN) can effectively replace the multi-layer perceptron (MLP) in generative AI.
This work suggests KAN could be a valuable component in the broader generative AI domain (a simplified sketch of such a layer follows this list).
arXiv Detail & Related papers (2024-08-15T15:26:12Z)
- SCONE-GAN: Semantic Contrastive learning-based Generative Adversarial Network for an end-to-end image translation [18.93434486338439]
SCONE-GAN is shown to be effective for learning to generate realistic and diverse scenery images.
For more realistic and diverse image generation, we introduce a style reference image.
We validate the proposed algorithm for image-to-image translation and stylizing outdoor images.
arXiv Detail & Related papers (2023-11-07T10:29:16Z)
- Guided Image-to-Image Translation by Discriminator-Generator Communication [71.86347329356244]
The goal of image-to-image (I2I) translation is to transfer an image from a source domain to a target domain.
One major branch of this research formulates I2I translation with generative adversarial networks (GANs).
arXiv Detail & Related papers (2023-03-07T02:29:36Z)
- Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z)
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z)
- Multimodal Image-to-Image Translation via Mutual Information Estimation and Maximization [16.54980086211836]
Multimodal image-to-image translation (I2IT) aims to learn a conditional distribution that explores multiple possible images in the target domain given an input image in the source domain.
Conditional generative adversarial networks (cGANs) are often adopted for modeling such a conditional distribution.
We propose a method that explicitly estimates and maximizes the mutual information between the latent code and the output image in cGANs.
arXiv Detail & Related papers (2020-08-08T14:09:23Z)
- GMM-UNIT: Unsupervised Multi-Domain and Multi-Modal Image-to-Image Translation via Attribute Gaussian Mixture Modeling [66.50914391679375]
Unsupervised image-to-image translation (UNIT) aims at learning a mapping between several visual domains by using unpaired training images.
Recent studies have shown remarkable success for multiple domains, but they suffer from two main limitations.
We propose a method named GMM-UNIT, based on a content-attribute disentangled representation whose attribute space is fitted with a Gaussian mixture model (GMM).
arXiv Detail & Related papers (2020-03-15T10:18:56Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields (a minimal sketch of such a block also follows this list).
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
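The KAN entry above claims that MLP blocks in generative models can be swapped for Kolmogorov-Arnold layers. As a rough illustration of what such a layer computes, here is a simplified sketch that places a learnable 1-D function on every input-output edge, using a Chebyshev-polynomial basis in place of the B-splines of the original KAN formulation; the degree, initialization, and tanh squashing are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ChebyKANLayer(nn.Module):
    """Simplified KAN-style layer: each input-output edge carries its own
    learnable 1-D function, parameterized here by Chebyshev polynomials."""

    def __init__(self, in_dim, out_dim, degree=4):
        super().__init__()
        self.degree = degree
        # one coefficient per (output, input, basis-term) triple
        self.coeffs = nn.Parameter(
            torch.randn(out_dim, in_dim, degree + 1)
            / (in_dim * (degree + 1)) ** 0.5
        )

    def forward(self, x):
        x = torch.tanh(x)  # squash into [-1, 1], the Chebyshev domain
        # build T_0..T_degree via the recurrence T_k = 2x * T_{k-1} - T_{k-2}
        basis = [torch.ones_like(x), x]
        for _ in range(2, self.degree + 1):
            basis.append(2 * x * basis[-1] - basis[-2])
        basis = torch.stack(basis, dim=-1)        # (B, in_dim, degree + 1)
        # sum edge functions over inputs and basis terms for each output
        return torch.einsum("bik,oik->bo", basis, self.coeffs)


# Shape check: a drop-in replacement for nn.Linear(32, 16) in an MLP block.
layer = ChebyKANLayer(32, 16)
print(layer(torch.randn(8, 32)).shape)  # torch.Size([8, 16])
```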
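Similarly, to make the "dense combinations of dilated convolutions" in the inpainting entry concrete, here is a minimal PyTorch sketch of one such block: parallel 3x3 convolutions at several dilation rates, fused by a 1x1 convolution. The specific rates, fusion scheme, and residual connection are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class DenseDilatedBlock(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 convolution.
    Mixing several dilation rates enlarges the effective receptive field
    without downsampling."""

    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.act(branch(x)) for branch in self.branches], dim=1)
        return x + self.fuse(feats)  # residual connection preserves input detail


# Shape check: padding == dilation keeps the spatial size for 3x3 kernels.
block = DenseDilatedBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```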