Generative Adversarial Networks for Brain Images Synthesis: A Review
- URL: http://arxiv.org/abs/2305.15421v1
- Date: Tue, 16 May 2023 17:28:06 GMT
- Title: Generative Adversarial Networks for Brain Images Synthesis: A Review
- Authors: Firoozeh Shomal Zadeh, Sevda Molani, Maysam Orouskhani, Marziyeh
Rezaei, Mehrzad Shafiei, Hossein Abbasi
- Abstract summary: In medical imaging, image synthesis is the process of estimating one image (sequence, modality) from another image (sequence, modality).
The generative adversarial network (GAN) is one of the most popular generative deep learning methods.
We summarize recent developments of GANs for cross-modality brain image synthesis, including CT to PET, CT to MRI, MRI to PET, and vice versa.
- Score: 2.609784101826762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In medical imaging, image synthesis is the process of estimating one
image (sequence, modality) from another image (sequence, modality). Since images with
different modalities provide diverse biomarkers and capture various features,
multi-modality imaging is crucial in medicine. Because multi-modal screening is
expensive and time-consuming for radiologists to acquire and report, image
synthesis methods can artificially generate missing modalities.
Deep learning models can automatically capture and extract high-dimensional
features. In particular, the generative adversarial network (GAN), one of the most
popular generative deep learning methods, uses a convolutional network as the
generator, while a discriminator network judges the estimated images as real or
fake. This review surveys brain image synthesis via GANs. We summarize recent
developments of GANs for cross-modality brain image synthesis, including CT to
PET, CT to MRI, MRI to PET, and vice versa.
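The adversarial setup described in the abstract (a generator producing images, a discriminator classifying them as real or fake) can be sketched with the standard GAN loss terms. The following is a toy NumPy illustration with stand-in "networks" (a logistic score in place of a discriminator, random arrays in place of images); it shows the loss structure only, not any specific architecture from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(image, w):
    """Toy discriminator: logistic score that the image is real."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, image.ravel())))

def gan_losses(real, fake, w):
    """Binary cross-entropy terms of the GAN objective.

    The discriminator maximizes log D(real) + log(1 - D(fake));
    the generator here uses the common non-saturating loss -log D(fake).
    """
    d_real = discriminator(real, w)
    d_fake = discriminator(fake, w)
    d_loss = -(np.log(d_real) + np.log(1.0 - d_fake))
    g_loss = -np.log(d_fake)
    return d_loss, g_loss

real = rng.normal(size=(8, 8))      # stand-in for a real brain image slice
fake = rng.normal(size=(8, 8))      # stand-in for a synthesized slice
w = rng.normal(scale=0.1, size=64)  # toy discriminator weights

d_loss, g_loss = gan_losses(real, fake, w)
print(d_loss, g_loss)
```

In a real cross-modality setup both losses would drive gradient updates of the generator and discriminator networks in alternation; here they are only evaluated once to make the objective concrete.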
Related papers
- Synthetic Brain Images: Bridging the Gap in Brain Mapping With Generative Adversarial Model [0.0]
This work investigates the use of Deep Convolutional Generative Adversarial Networks (DCGAN) for producing high-fidelity and realistic MRI image slices.
While the discriminator network discerns between created and real slices, the generator network learns to synthesise realistic MRI image slices.
The generator refines its capacity to generate slices that closely mimic real MRI data through an adversarial training approach.
arXiv Detail & Related papers (2024-04-11T05:06:51Z)
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394]
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
arXiv Detail & Related papers (2024-02-01T06:34:35Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- MRIS: A Multi-modal Retrieval Approach for Image Synthesis on Diverse Modalities [19.31577453889188]
We develop an approach based on multi-modal metric learning to synthesize images of diverse modalities.
We test our approach by synthesizing cartilage thickness maps obtained from 3D magnetic resonance (MR) images using 2D radiographs.
arXiv Detail & Related papers (2023-03-17T20:58:55Z)
- Subject-Specific Lesion Generation and Pseudo-Healthy Synthesis for Multiple Sclerosis Brain Images [1.7328025136996081]
We present a novel generative method for modelling the local lesion characteristics.
It can generate synthetic lesions on healthy images and synthesize subject-specific pseudo-healthy images from pathological images.
The proposed method can be used as a data augmentation module to generate synthetic images for training brain image segmentation networks.
arXiv Detail & Related papers (2022-08-03T15:12:55Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal in the $k$-space domain while restoring detail in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- ResViT: Residual vision transformers for multi-modal medical image synthesis [0.0]
We propose a novel generative adversarial approach for medical image synthesis, ResViT, to combine local precision of convolution operators with contextual sensitivity of vision transformers.
Our results indicate the superiority of ResViT against competing methods in terms of qualitative observations and quantitative metrics.
arXiv Detail & Related papers (2021-06-30T12:57:37Z)
- Multimodal Face Synthesis from Visual Attributes [85.87796260802223]
We propose a novel generative adversarial network that simultaneously synthesizes identity preserving multimodal face images.
Multimodal stretch-in modules are introduced in the discriminator, which discriminates between real and fake images.
arXiv Detail & Related papers (2021-04-09T13:47:23Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
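Several of the multi-modal methods above (e.g. Hi-Net) encode each modality with its own network and fuse the latent representations before decoding the target modality. The sketch below illustrates only that fusion idea in NumPy; the linear "encoders", shapes, and variable names are illustrative assumptions, not any paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(x, w):
    # Modality-specific "encoder": a linear map plus a nonlinearity.
    return np.tanh(w @ x.ravel())

def fuse_and_decode(latents, w_dec, out_shape):
    # Combine the per-modality latent codes, then decode the target modality.
    z = np.concatenate(latents)
    return (w_dec @ z).reshape(out_shape)

t1 = rng.normal(size=(4, 4))               # stand-in T1-weighted patch
t2 = rng.normal(size=(4, 4))               # stand-in T2-weighted patch
w1 = rng.normal(scale=0.1, size=(8, 16))   # toy T1 encoder weights
w2 = rng.normal(scale=0.1, size=(8, 16))   # toy T2 encoder weights
w_dec = rng.normal(scale=0.1, size=(16, 16))

z1, z2 = encode(t1, w1), encode(t2, w2)
synthesized = fuse_and_decode([z1, z2], w_dec, (4, 4))
print(synthesized.shape)
```

The design point is that fusion happens in latent space, so each encoder can specialize in its own modality while the decoder sees a joint representation.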
This list is automatically generated from the titles and abstracts of the papers in this site.