Brain Imaging Generation with Latent Diffusion Models
- URL: http://arxiv.org/abs/2209.07162v1
- Date: Thu, 15 Sep 2022 09:16:21 GMT
- Title: Brain Imaging Generation with Latent Diffusion Models
- Authors: Walter H. L. Pinaya, Petru-Daniel Tudosiu, Jessica Dafflon, Pedro F da
Costa, Virginia Fernandez, Parashkev Nachev, Sebastien Ourselin, M. Jorge
Cardoso
- Abstract summary: In this study, we explore using Latent Diffusion Models to generate synthetic images from high-resolution 3D brain images.
We found that our models created realistic data, and we could use the conditioning variables to control the data generation effectively.
- Score: 2.200720122706913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have brought remarkable breakthroughs in medical image
analysis. However, due to their data-hungry nature, the modest dataset sizes in
medical imaging projects might be hindering their full potential. Generating
synthetic data provides a promising alternative, making it possible to
complement training datasets and to conduct medical image research at a larger
scale. Diffusion models have recently caught the attention of the computer
vision community by producing photorealistic synthetic images. In this study, we
explore using Latent Diffusion Models to generate synthetic images from
high-resolution 3D brain images. We used T1w MRI images from the UK Biobank
dataset (N=31,740) to train our models to learn the probability distribution
of brain images, conditioned on covariates such as age, sex, and
brain structure volumes. We found that our models created realistic data, and
we could use the conditioning variables to control the data generation
effectively. In addition, we created a synthetic dataset with 100,000 brain
images and made it openly available to the scientific community.
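To make the conditioning idea concrete, the sketch below shows plain DDPM ancestral sampling in a small latent space with a toy denoiser that takes a covariate vector as input. This is a minimal illustration, not the authors' implementation: the latent shape, the covariate ordering (age, sex, ventricular volume, brain volume), the per-channel conditioning bias, and the network itself are assumptions made for readability. In an actual latent diffusion model, a trained 3D denoising network operates on latents produced by a separately trained autoencoder, and the sampled latent is decoded back to a brain volume.

```python
# Minimal sketch (not the paper's code): covariate-conditioned DDPM sampling
# in a small latent space. All shapes and names are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_SHAPE = (3, 20, 28, 20)  # assumed 3D latent size, not taken from the paper
T = 1000                        # number of diffusion timesteps

class ToyConditionalDenoiser(nn.Module):
    """Toy stand-in for the denoising network: predicts the noise added to a latent."""
    def __init__(self, n_covariates: int = 4, hidden: int = 64):
        super().__init__()
        # Embed (timestep, covariates) into a per-channel bias for the latent.
        self.cond = nn.Sequential(
            nn.Linear(n_covariates + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, LATENT_SHAPE[0]),
        )
        self.net = nn.Conv3d(LATENT_SHAPE[0], LATENT_SHAPE[0], kernel_size=3, padding=1)

    def forward(self, z, t, covariates):
        emb = self.cond(torch.cat([t[:, None].float() / T, covariates], dim=1))
        return self.net(z + emb[:, :, None, None, None])

@torch.no_grad()
def sample(model, covariates, betas):
    """DDPM ancestral sampling in latent space, conditioned on a covariate vector."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    z = torch.randn(covariates.shape[0], *LATENT_SHAPE)
    for t in reversed(range(T)):
        t_batch = torch.full((covariates.shape[0],), t, dtype=torch.long)
        eps = model(z, t_batch, covariates)                     # predicted noise
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bar[t])
        z = (z - coef * eps) / torch.sqrt(alphas[t])            # posterior mean
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)  # add sampling noise
    return z  # in a real LDM, decode z with the autoencoder's decoder

# Request two synthetic latents with chosen (normalised) covariates.
# The ordering [age, sex, ventricular volume, brain volume] is an assumption.
betas = torch.linspace(1e-4, 0.02, T)
model = ToyConditionalDenoiser()
latents = sample(model, torch.tensor([[0.6, 1.0, 0.4, 0.5],
                                      [0.3, 0.0, 0.2, 0.7]]), betas)
print(latents.shape)  # torch.Size([2, 3, 20, 28, 20])
```

Because the toy network here is untrained, the output is noise; the point is only to show how the covariates enter the reverse-diffusion loop and how a target profile (e.g. an older subject with larger ventricles) would be specified at sampling time.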
Related papers
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human visual system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Augmenting medical image classifiers with synthetic data from latent diffusion models [12.077733447347592]
We show that latent diffusion models can scalably generate images of skin disease.
We generate and analyze a new dataset of 458,920 synthetic images produced using several generation strategies.
arXiv Detail & Related papers (2023-08-23T22:34:49Z)
- Brain Tumor Synthetic Data Generation with Adaptive StyleGANs [6.244557340851846]
We present a method to generate brain tumor MRI images using generative adversarial networks.
Results demonstrate that the proposed method can learn the distributions of brain tumors.
The approach can address limited data availability by generating realistic-looking brain MRIs with tumors.
arXiv Detail & Related papers (2022-12-04T09:01:33Z)
- Medical Diffusion -- Denoising Diffusion Probabilistic Models for 3D Medical Image Generation [0.6486409713123691]
We show that diffusion probabilistic models can synthesize high quality medical imaging data.
We provide quantitative measurements of their performance through a reader study with two medical experts.
We demonstrate that synthetic images can be used for self-supervised pre-training and improve the performance of breast segmentation models when data is scarce.
arXiv Detail & Related papers (2022-11-07T08:37:48Z)
- Spot the fake lungs: Generating Synthetic Medical Images using Neural Diffusion Models [1.0957528713294873]
We use a pre-trained DALLE2 model to generate lung X-ray and CT images from an input text prompt.
We train a stable diffusion model with 3165 X-Ray images and generate synthetic images.
Results demonstrate that images generated with the diffusion model can capture characteristics that are otherwise highly specific to certain medical conditions.
arXiv Detail & Related papers (2022-11-02T06:02:55Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for applying synthetic data to recognition tasks more effectively.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- Morphology-preserving Autoregressive 3D Generative Modelling of the Brain [2.6498965891119397]
This work proposes a generative model that can be scaled to produce correct, high-resolution, and realistic images of the human brain.
The ability to generate a potentially unlimited amount of data not only enables large-scale studies of human anatomy and pathology without jeopardizing patient privacy, but also significantly advances research in anomaly detection, modality synthesis, learning under limited data, and fair and ethical AI.
arXiv Detail & Related papers (2022-09-07T14:17:42Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide voxel-wise 3D CNN interpretations, comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.