Pathology Synthesis of 3D-Consistent Cardiac MR Images using 2D VAEs and GANs
- URL: http://arxiv.org/abs/2209.04223v2
- Date: Tue, 30 May 2023 14:37:11 GMT
- Title: Pathology Synthesis of 3D-Consistent Cardiac MR Images using 2D VAEs and GANs
- Authors: Sina Amirrajab, Cristian Lorenz, Juergen Weese, Josien Pluim, Marcel Breeuwer
- Abstract summary: We propose a method for generating labeled data for supervised deep-learning (DL) training.
The image synthesis consists of label deformation and label-to-image translation tasks.
We demonstrate that such an approach could provide a solution to diversify and enrich an available database of cardiac MR images.
- Score: 0.5039813366558306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a method for synthesizing cardiac magnetic resonance (MR) images
with plausible heart pathologies and realistic appearances for the purpose of
generating labeled data for supervised deep-learning (DL) training. The image
synthesis consists of label deformation and label-to-image
translation tasks. The former is achieved via latent space interpolation in a
VAE model, while the latter is accomplished via a label-conditional GAN model.
We devise three approaches for label manipulation in the latent space of the
trained VAE model; i) \textbf{intra-subject synthesis} aiming to interpolate
the intermediate slices of a subject to increase the through-plane resolution,
ii) \textbf{inter-subject synthesis} aiming to interpolate the geometry and
appearance of intermediate images between two dissimilar subjects acquired with
different scanner vendors, and iii) \textbf{pathology synthesis} aiming to
synthesize a series of pseudo-pathological synthetic subjects with
characteristics of a desired heart disease. Furthermore, we propose to model
the relationship between 2D slices in the latent space of the VAE prior to
reconstruction, so that stacking the 2D slice-by-slice generations yields
3D-consistent subjects. We demonstrate that such an approach could provide
a solution to diversify and enrich an available database of cardiac MR images
and to pave the way for the development of generalizable DL-based image
analysis algorithms. We quantitatively evaluate the quality of the synthesized
data in a data-augmentation scenario, assessing the generalization and
robustness of image segmentation to multi-vendor and multi-disease data. Our code is
available at https://github.com/sinaamirrajab/CardiacPathologySynthesis.
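As an illustration of the pipeline described above, the following is a minimal sketch of the inter-subject latent interpolation step. The functions `vae_encode`, `vae_decode`, and `gan_translate` are hypothetical stand-ins for the trained 2D VAE and label-conditional GAN, not the authors' implementation (which is in the repository above), and the sketch omits the cross-slice correlation modelling used for 3D consistency.

```python
import numpy as np

# Hypothetical stand-ins so the sketch runs; in the actual method these would be
# a trained 2D VAE over label maps and a label-conditional GAN generator.
def vae_encode(label_slice):
    # dummy "encoder": flatten the label map and truncate to a 64-dim latent
    return label_slice.reshape(-1)[:64].astype(float)

def vae_decode(z):
    # dummy "decoder": place the latent back into a 64x64 label map
    out = np.zeros(64 * 64)
    out[: z.size] = z
    return out.reshape(64, 64)

def gan_translate(label_slice):
    # dummy "label-to-image translation": add noise to mimic image synthesis
    return label_slice + 0.1 * np.random.randn(*label_slice.shape)

def interpolate_subjects(slices_a, slices_b, alphas):
    """Blend per-slice latents of two subjects and decode intermediate volumes."""
    z_a = np.stack([vae_encode(s) for s in slices_a])  # (n_slices, latent_dim)
    z_b = np.stack([vae_encode(s) for s in slices_b])
    volumes = []
    for alpha in alphas:
        z = (1.0 - alpha) * z_a + alpha * z_b              # latent-space interpolation
        labels = [vae_decode(zi) for zi in z]              # deformed label maps
        images = [gan_translate(lbl) for lbl in labels]    # label-to-image translation
        volumes.append(np.stack(images))                   # stack slices into a volume
    return volumes

# Toy usage: two 10-slice "subjects", five intermediate synthetic subjects.
subject_a = [np.random.randint(0, 4, (64, 64)) for _ in range(10)]
subject_b = [np.random.randint(0, 4, (64, 64)) for _ in range(10)]
synthetic = interpolate_subjects(subject_a, subject_b, np.linspace(0.2, 0.8, 5))
print(len(synthetic), synthetic[0].shape)  # 5 (10, 64, 64)
```

The same blending of latents, applied between a normal subject and latents associated with a disease characteristic, is the idea behind the pathology synthesis variant; only the choice of endpoints changes.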
Related papers
- cWDM: Conditional Wavelet Diffusion Models for Cross-Modality 3D Medical Image Synthesis [1.767791678320834]
This paper contributes to the "BraTS 2024 Brain MR Image Synthesis Challenge".
It presents a conditional Wavelet Diffusion Model for solving a paired image-to-image translation task on high-resolution volumes.
arXiv Detail & Related papers (2024-11-26T08:17:57Z) - Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, referred to as the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z) - A Data Augmentation Pipeline to Generate Synthetic Labeled Datasets of
3D Echocardiography Images using a GAN [6.0419497882916655]
We propose an image generation pipeline to synthesize 3D echocardiographic images with corresponding ground truth labels.
The proposed method utilizes detailed anatomical segmentations of the heart as ground truth label sources.
arXiv Detail & Related papers (2024-03-08T15:26:27Z) - S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial for enhancing holistic cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes with pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z) - Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential
Generative Adversarial Networks [35.358653509217994]
We propose a bi-modality medical image synthesis approach based on a sequential generative adversarial network (GAN) and semi-supervised learning.
Our approach consists of two generative modules that synthesize images of the two modalities in a sequential order.
Visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods.
arXiv Detail & Related papers (2023-08-27T10:39:33Z) - Make-A-Volume: Leveraging Latent Diffusion Models for Cross-Modality 3D
Brain MRI Synthesis [35.45013834475523]
Cross-modality medical image synthesis is a critical topic and has the potential to facilitate numerous applications in the medical imaging field.
Most current medical image synthesis methods rely on generative adversarial networks and suffer from notorious mode collapse and unstable training.
We introduce a new paradigm for volumetric medical data synthesis by leveraging 2D backbones and present a diffusion-based framework, Make-A-Volume.
arXiv Detail & Related papers (2023-07-19T16:01:09Z) - SIAN: Style-Guided Instance-Adaptive Normalization for Multi-Organ
Histopathology Image Synthesis [63.845552349914186]
We propose a style-guided instance-adaptive normalization (SIAN) to synthesize realistic color distributions and textures for different organs.
The four phases work together and are integrated into a generative network to embed image semantics, style, and instance-level boundaries.
arXiv Detail & Related papers (2022-09-02T16:45:46Z) - Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z) - Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z) - 4D Semantic Cardiac Magnetic Resonance Image Synthesis on XCAT
Anatomical Model [0.7959841510571622]
We propose a hybrid controllable image generation method to synthesize 3D+t labeled Cardiac Magnetic Resonance (CMR) images.
Our method takes the mechanistic 4D eXtended CArdiac Torso (XCAT) heart model as the anatomical ground truth.
We employ the state-of-the-art SPatially Adaptive De-normalization (SPADE) technique for conditional image synthesis.
arXiv Detail & Related papers (2020-02-17T17:25:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.