A 3D generative model of pathological multi-modal MR images and
segmentations
- URL: http://arxiv.org/abs/2311.04552v1
- Date: Wed, 8 Nov 2023 09:36:37 GMT
- Title: A 3D generative model of pathological multi-modal MR images and
segmentations
- Authors: Virginia Fernandez, Walter Hugo Lopez Pinaya, Pedro Borges, Mark S.
Graham, Tom Vercauteren, M. Jorge Cardoso
- Abstract summary: We propose brainSPADE3D, a 3D generative model for brain MRI and associated segmentations.
The proposed joint imaging-segmentation generative model is shown to generate high-fidelity synthetic images and associated segmentations.
We demonstrate how the model can alleviate issues with segmentation model performance when unexpected pathologies are present in the data.
- Score: 3.4806591877889375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative modelling and synthetic data can serve as a surrogate for real
medical imaging datasets, whose scarcity and sharing restrictions can hinder the
delivery of accurate deep learning models for healthcare applications. In recent
years, there has been an increased interest in using these models for data
augmentation and synthetic data sharing, using architectures such as generative
adversarial networks (GANs) or diffusion models (DMs). Nonetheless, the
application of synthetic data to tasks such as 3D magnetic resonance imaging
(MRI) segmentation remains limited due to the lack of labels associated with
the generated images. Moreover, many of the proposed generative MRI models lack
the ability to generate arbitrary modalities due to the absence of explicit
contrast conditioning. These limitations prevent the user from adjusting the
contrast and content of the images and obtaining more generalisable data for
training task-specific models. In this work, we propose brainSPADE3D, a 3D
generative model for brain MRI and associated segmentations, where the user can
condition on specific pathological phenotypes and contrasts. The proposed joint
imaging-segmentation generative model is shown to generate high-fidelity
synthetic images and associated segmentations, with the ability to combine
pathologies. We demonstrate how the model can alleviate issues with
segmentation model performance when unexpected pathologies are present in the
data.
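The abstract describes conditioning the generator on pathological phenotypes and MR contrasts, with the ability to combine pathologies. As an illustration only (the label sets, names, and conditioning mechanism below are hypothetical assumptions, not taken from the paper), such joint conditioning can be sketched as a multi-hot pathology vector concatenated with a one-hot contrast vector:

```python
import numpy as np

# Hypothetical label sets; the actual phenotypes and contrasts used by
# brainSPADE3D are not specified in this listing.
PATHOLOGIES = ["healthy", "tumour", "wmh", "stroke"]
CONTRASTS = ["T1", "T2", "FLAIR"]

def make_condition(pathologies, contrast):
    """Build a multi-hot pathology vector concatenated with a one-hot
    contrast vector, mimicking joint conditioning on pathological
    phenotype and MR contrast."""
    p = np.zeros(len(PATHOLOGIES), dtype=np.float32)
    for name in pathologies:
        p[PATHOLOGIES.index(name)] = 1.0  # multi-hot: pathologies can combine
    c = np.zeros(len(CONTRASTS), dtype=np.float32)
    c[CONTRASTS.index(contrast)] = 1.0    # one-hot: a single target contrast
    return np.concatenate([p, c])

# Combining pathologies, as the abstract highlights:
cond = make_condition(["tumour", "wmh"], "FLAIR")
```

A conditional generator would consume such a vector alongside the latent sample; the real model's conditioning interface may differ entirely.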
Related papers
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- MRGen: Diffusion-based Controllable Data Engine for MRI Segmentation towards Unannotated Modalities [59.61465292965639]
This paper investigates a new paradigm for leveraging generative models in medical applications.
We propose a diffusion-based data engine, termed MRGen, which enables generation conditioned on text prompts and masks.
arXiv Detail & Related papers (2024-12-04T16:34:22Z)
- 3D MRI Synthesis with Slice-Based Latent Diffusion Models: Improving Tumor Segmentation Tasks in Data-Scarce Regimes [2.8498944632323755]
We propose a novel slice-based latent diffusion architecture to address the complexities of volumetric data generation.
This approach extends the joint distribution modelling of medical images and their associated masks, allowing simultaneous generation of both under data-scarce regimes.
Our architecture can be conditioned by tumor characteristics, including size, shape, and relative position, thereby providing a diverse range of tumor variations.
arXiv Detail & Related papers (2024-06-08T09:53:45Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because artifacts change the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
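The robustness argument hinges on where normalization statistics come from: batch normalization reuses training-time batch statistics at test time, whereas group and layer normalization compute statistics from each test sample itself, so per-sample intensity shifts are normalized away. A minimal numpy sketch of group normalization (illustrative only, not this paper's implementation):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Group Normalization over a single sample of shape (C, H, W).
    Statistics are computed per sample and per group, so they track
    the test input itself rather than training-time batch statistics."""
    c, h, w = x.shape
    g = x.reshape(num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(1, 2, 3), keepdims=True)
    var = g.var(axis=(1, 2, 3), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(c, h, w)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))
shifted = 2.0 * x + 5.0  # a global intensity/scale shift, e.g. an artifact

y = group_norm(x, num_groups=4)
y_shifted = group_norm(shifted, num_groups=4)
# Per-sample statistics cancel the affine artifact:
print(np.allclose(y, y_shifted, atol=1e-4))  # True
```

Batch normalization with frozen training statistics would pass the shift straight through, which is the sensitivity the paper's summary points to.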
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Multitask Brain Tumor Inpainting with Diffusion Models: A Methodological Report [0.0]
Inpainting algorithms are a subset of DL generative models that can alter one or more regions of an input image.
The performance of these algorithms is frequently suboptimal due to their limited output variety.
Denoising diffusion probabilistic models (DDPMs) are a recently introduced family of generative networks that can generate results of comparable quality to GANs.
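For context, the DDPM forward (noising) process mentioned above has a simple closed form; a minimal numpy sketch of the standard formulation (not this paper's specific multitask inpainting setup):

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,
    where abar_t is the cumulative product of (1 - beta_s)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(42)
betas = np.linspace(1e-4, 0.02, 1000)    # standard linear schedule
x0 = rng.normal(size=(16, 16))           # stand-in for an image patch
x_T = ddpm_forward(x0, 999, betas, rng)  # at the final step, nearly pure noise
```

The reverse (denoising) network is then trained to predict the injected noise from x_t and t, which is what lets DDPMs match GAN-quality samples with greater output variety.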
arXiv Detail & Related papers (2022-10-21T17:13:14Z)
- Can segmentation models be trained with fully synthetically generated data? [0.39577682622066246]
BrainSPADE is a model which combines a synthetic diffusion-based label generator with a semantic image generator.
Our model can produce fully synthetic brain labels on-demand, with or without pathology of interest, and then generate a corresponding MRI image of an arbitrary guided style.
Experiments show that brainSPADE synthetic data can be used to train segmentation models with performance comparable to that of models trained on real data.
arXiv Detail & Related papers (2022-09-17T05:24:04Z)
- Fast Unsupervised Brain Anomaly Detection and Segmentation with Diffusion Models [1.6352599467675781]
We propose a method based on diffusion models to detect and segment anomalies in brain imaging.
Our diffusion models achieve competitive performance compared with autoregressive approaches across a series of experiments with 2D CT and MRI data.
arXiv Detail & Related papers (2022-06-07T17:30:43Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive evidence lower bounds (ELBOs) for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.