Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential
Generative Adversarial Networks
- URL: http://arxiv.org/abs/2308.14066v2
- Date: Tue, 29 Aug 2023 04:59:41 GMT
- Authors: Xin Yang, Yi Lin, Zhiwei Wang, Xin Li, Kwang-Ting Cheng
- Abstract summary: We propose a bi-modality medical image synthesis approach based on a sequential generative adversarial network (GAN) and semi-supervised learning.
Our approach consists of two generative modules that synthesize images of the two modalities in a sequential order.
Visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods.
- Score: 35.358653509217994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a bi-modality medical image synthesis approach
based on a sequential generative adversarial network (GAN) and semi-supervised
learning. Our approach consists of two generative modules that synthesize
images of the two modalities in a sequential order. A method for measuring the
synthesis complexity is proposed to automatically determine the synthesis order
in our sequential GAN. Images of the modality with a lower complexity are
synthesized first, and the counterparts with a higher complexity are generated
later. Our sequential GAN is trained end-to-end in a semi-supervised manner. In
supervised training, the joint distribution of bi-modality images is learned
from real paired images of the two modalities by explicitly minimizing the
reconstruction losses between the real and synthetic images. To avoid
overfitting limited training images, in unsupervised training, the marginal
distribution of each modality is learned based on unpaired images by minimizing
the Wasserstein distance between the distributions of real and fake images. We
comprehensively evaluate the proposed model on two synthesis tasks using three
types of evaluation metrics and user studies. Visual and quantitative results
demonstrate the superiority of our method over state-of-the-art methods, as
well as reasonable visual quality and clinical significance. Code is made
publicly available at
https://github.com/hustlinyi/Multimodal-Medical-Image-Synthesis.
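The training scheme in the abstract combines a supervised reconstruction loss on paired images with an unsupervised Wasserstein term on unpaired images, after ordering the two modalities by synthesis complexity. The sketch below is an illustrative reading of the abstract only, not the authors' released code (see the GitHub link above for that); every name here (`synthesis_order`, `recon_loss`, `wasserstein_estimate`, `semi_supervised_loss`, `lambda_unsup`) is hypothetical.

```python
# Illustrative sketch of the semi-supervised objective described in the
# abstract. All function names and the lambda_unsup weight are assumptions,
# not the authors' implementation.

def synthesis_order(complexity_a, complexity_b):
    """Synthesize the lower-complexity modality first, per the abstract."""
    return ("A", "B") if complexity_a <= complexity_b else ("B", "A")

def recon_loss(real, fake):
    """Supervised term: mean L1 reconstruction error on paired images."""
    return sum(abs(r - f) for r, f in zip(real, fake)) / len(real)

def wasserstein_estimate(critic_real, critic_fake):
    """Unsupervised term: WGAN-style critic estimate of the Wasserstein
    distance between the real and fake marginal distributions."""
    return (sum(critic_real) / len(critic_real)
            - sum(critic_fake) / len(critic_fake))

def semi_supervised_loss(real, fake, critic_real, critic_fake,
                         lambda_unsup=0.1):
    """Paired reconstruction plus weighted unpaired marginal matching."""
    return recon_loss(real, fake) + lambda_unsup * wasserstein_estimate(
        critic_real, critic_fake)
```

In the paper's end-to-end training, the supervised term would be computed on real paired images of the two modalities and the unsupervised term on unpaired images of each modality separately; the scalar sketch above only shows how the two terms combine.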
Related papers
- SemFlow: Binding Semantic Segmentation and Image Synthesis via Rectified Flow [94.90853153808987]
Semantic segmentation and semantic image synthesis are representative tasks in visual perception and generation.
We propose a unified framework (SemFlow) and model them as a pair of reverse problems.
Experiments show that our SemFlow achieves competitive results on semantic segmentation and semantic image synthesis tasks.
arXiv Detail & Related papers (2024-05-30T17:34:40Z)
- SynthMix: Mixing up Aligned Synthesis for Medical Cross-Modality Domain Adaptation [17.10686650166592]
We propose SynthMix, an add-on module with a natural yet effective training policy.
Following the adversarial philosophy of GAN, we designed a mix-up synthesis scheme termed SynthMix.
It coherently mixed up aligned images of real and synthetic samples to stimulate the generation of fine-grained features.
arXiv Detail & Related papers (2023-05-07T01:37:46Z)
- Pathology Synthesis of 3D-Consistent Cardiac MR Images using 2D VAEs and GANs [0.5039813366558306]
We propose a method for generating labeled data for supervised deep-learning (DL) training.
The image synthesis consists of label deformation and label-to-image translation tasks.
We demonstrate that such an approach could provide a solution to diversify and enrich an available database of cardiac MR images.
arXiv Detail & Related papers (2022-09-09T10:17:49Z)
- Unsupervised Medical Image Translation with Adversarial Diffusion Models [0.2770822269241974]
Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols.
Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation.
arXiv Detail & Related papers (2022-07-17T15:53:24Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- A Deep Learning Generative Model Approach for Image Synthesis of Plant Leaves [62.997667081978825]
We generate via advanced Deep Learning (DL) techniques artificial leaf images in an automatized way.
We aim to dispose of a source of training samples for AI applications for modern crop management.
arXiv Detail & Related papers (2021-11-05T10:53:35Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Multi-Modality Generative Adversarial Networks with Tumor Consistency Loss for Brain MR Image Synthesis [30.64847799586407]
We propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1 and T1ce) from one MR modality T2 simultaneously.
The experimental results show that the quality of the synthesized images is better than the one synthesized by the baseline model, pix2pix.
arXiv Detail & Related papers (2020-05-02T21:33:15Z)
- Multimodal Image Synthesis with Conditional Implicit Maximum Likelihood Estimation [54.17177006826262]
We develop a new generic conditional image synthesis method based on Implicit Maximum Likelihood Estimation (IMLE)
We demonstrate improved multimodal image synthesis performance on two tasks, single image super-resolution and image synthesis from scene layouts.
arXiv Detail & Related papers (2020-04-07T03:06:55Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.