Morphology-preserving Autoregressive 3D Generative Modelling of the
Brain
- URL: http://arxiv.org/abs/2209.03177v1
- Date: Wed, 7 Sep 2022 14:17:42 GMT
- Title: Morphology-preserving Autoregressive 3D Generative Modelling of the
Brain
- Authors: Petru-Daniel Tudosiu, Walter Hugo Lopez Pinaya, Mark S. Graham, Pedro
Borges, Virginia Fernandez, Dai Yang, Jeremy Appleyard, Guido Novati, Disha
Mehra, Mike Vella, Parashkev Nachev, Sebastien Ourselin and Jorge Cardoso
- Abstract summary: This work proposes a generative model that can be scaled to produce anatomically correct, high-resolution, and realistic images of the human brain.
The ability to generate a potentially unlimited amount of data not only enables large-scale studies of human anatomy and pathology without jeopardizing patient privacy, but also significantly advances research in the field of anomaly detection, modality synthesis, learning under limited data, and fair and ethical AI.
- Score: 2.6498965891119397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human anatomy, morphology, and associated diseases can be studied using
medical imaging data. However, access to medical imaging data is restricted by
governance and privacy concerns, data ownership, and the cost of acquisition,
thus limiting our ability to understand the human body. A possible solution to
this issue is the creation of a model able to learn and then generate synthetic
images of the human body conditioned on specific characteristics of relevance
(e.g., age, sex, and disease status). Deep generative models, in the form of
neural networks, have been recently used to create synthetic 2D images of
natural scenes. Still, the ability to produce high-resolution 3D volumetric
imaging data with correct anatomical morphology has been hampered by data
scarcity and algorithmic and computational limitations. This work proposes a
generative model that can be scaled to produce anatomically correct,
high-resolution, and realistic images of the human brain, with the necessary
quality to allow further downstream analyses. The ability to generate a
potentially unlimited amount of data not only enables large-scale studies of
human anatomy and pathology without jeopardizing patient privacy, but also
significantly advances research in the field of anomaly detection, modality
synthesis, learning under limited data, and fair and ethical AI. Code and
trained models are available at: https://github.com/AmigoLab/SynthAnatomy.
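The abstract describes a conditional 3D generative model, and the title points to an autoregressive formulation. As a rough, non-authoritative sketch only (not the API of the linked AmigoLab/SynthAnatomy repository), the snippet below illustrates one common way such a pipeline is assembled: a VQ-style autoencoder compresses a brain volume into a grid of discrete codes, and a transformer models those codes autoregressively while conditioned on covariates such as age, sex, and disease status. All class names, layer sizes, and the covariate encoding are hypothetical placeholders.

```python
# Hypothetical two-stage conditional 3D generative pipeline (illustrative only):
# stage 1 compresses a brain volume into discrete codes (VQ-VAE style),
# stage 2 models those codes autoregressively, conditioned on covariates.
import torch
import torch.nn as nn

class ToyVQEncoderDecoder(nn.Module):
    """Small stand-in for a 3D VQ-VAE: encode -> nearest codebook entry -> decode."""
    def __init__(self, codebook_size=256, latent_dim=32):
        super().__init__()
        self.enc = nn.Conv3d(1, latent_dim, kernel_size=4, stride=4)      # downsample volume
        self.dec = nn.ConvTranspose3d(latent_dim, 1, kernel_size=4, stride=4)
        self.codebook = nn.Embedding(codebook_size, latent_dim)

    def encode_to_indices(self, volume):
        z = self.enc(volume)                                   # (B, C, D, H, W)
        flat = z.permute(0, 2, 3, 4, 1).reshape(-1, z.shape[1])
        dists = torch.cdist(flat, self.codebook.weight)        # distance to each code
        idx = dists.argmin(dim=1)
        return idx.view(volume.shape[0], -1), z.shape          # token sequence per volume

    def decode_from_indices(self, indices, z_shape):
        z = self.codebook(indices).view(z_shape[0], z_shape[2], z_shape[3], z_shape[4], -1)
        return self.dec(z.permute(0, 4, 1, 2, 3))

class ToyConditionalPrior(nn.Module):
    """Autoregressive transformer over code indices, prefixed with a covariate embedding."""
    def __init__(self, codebook_size=256, d_model=64, n_cond=3):
        super().__init__()
        self.tok = nn.Embedding(codebook_size, d_model)
        self.cond = nn.Linear(n_cond, d_model)                 # embeds (age, sex, disease)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, codebook_size)

    def forward(self, indices, covariates):
        x = torch.cat([self.cond(covariates).unsqueeze(1), self.tok(indices)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.shape[1])  # causal mask
        h = self.backbone(x, mask=mask)
        return self.head(h[:, :-1])                            # next-token logits

if __name__ == "__main__":
    vq, prior = ToyVQEncoderDecoder(), ToyConditionalPrior()
    volume = torch.randn(1, 1, 32, 32, 32)                     # toy 3D "brain" volume
    covs = torch.tensor([[63.0, 1.0, 0.0]])                    # hypothetical age/sex/disease
    idx, z_shape = vq.encode_to_indices(volume)
    logits = prior(idx, covs)                                  # train with cross-entropy vs. idx
    recon = vq.decode_from_indices(idx, z_shape)
    print(idx.shape, logits.shape, recon.shape)
```

In a pipeline of this kind, the prior is trained with a cross-entropy loss on the code indices and sampled token by token at generation time, with the decoder mapping the sampled codes back to a volume.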
Related papers
- HINT: Learning Complete Human Neural Representations from Limited Viewpoints [69.76947323932107]
We propose a NeRF-based algorithm able to learn a detailed and complete human model from limited viewing angles.
As a result, our method can reconstruct complete humans even from a few viewing angles, improving PSNR by more than 15%.
arXiv Detail & Related papers (2024-05-30T05:43:09Z)
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human vision system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- A 3D generative model of pathological multi-modal MR images and segmentations [3.4806591877889375]
We propose brainSPADE3D, a 3D generative model for brain MRI and associated segmentations.
The proposed joint imaging-segmentation generative model is shown to generate high-fidelity synthetic images and associated segmentations.
We demonstrate how the model can alleviate issues with segmentation model performance when unexpected pathologies are present in the data.
arXiv Detail & Related papers (2023-11-08T09:36:37Z)
- HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion [114.15397904945185]
We propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts.
Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network.
Our framework yields state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.
arXiv Detail & Related papers (2023-10-12T17:59:34Z)
- SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling [93.60731530276911]
We introduce a new synthetic dataset, SynBody, with three appealing features.
The dataset comprises 1.2M images with corresponding accurate 3D annotations, covering 10,000 human body models, 1,187 actions, and various viewpoints.
arXiv Detail & Related papers (2023-03-30T13:30:12Z)
- Brain Tumor Synthetic Data Generation with Adaptive StyleGANs [6.244557340851846]
We present a method to generate brain tumor MRI images using generative adversarial networks.
Results demonstrate that the proposed method can learn the distributions of brain tumors.
The approach addresses limited data availability by generating realistic-looking brain MRIs with tumors.
arXiv Detail & Related papers (2022-12-04T09:01:33Z)
- Can segmentation models be trained with fully synthetically generated data? [0.39577682622066246]
BrainSPADE is a model which combines a synthetic diffusion-based label generator with a semantic image generator.
Our model can produce fully synthetic brain labels on demand, with or without a pathology of interest, and then generate a corresponding MRI image in an arbitrary, guided style.
Experiments show that brainSPADE synthetic data can be used to train segmentation models with performance comparable to that of models trained on real data.
arXiv Detail & Related papers (2022-09-17T05:24:04Z)
- Brain Imaging Generation with Latent Diffusion Models [2.200720122706913]
In this study, we explore using Latent Diffusion Models to generate synthetic images from high-resolution 3D brain images.
We found that our models created realistic data, and we could use the conditioning variables to control the data generation effectively.
arXiv Detail & Related papers (2022-09-15T09:16:21Z)
- SyntheX: Scaling Up Learning-based X-ray Image Analysis Through In Silico Experiments [12.019996672009375]
We show that creating realistic simulated images from human models is a viable alternative to large-scale in situ data collection.
Because synthetic generation of training data from human-based models scales easily, we find that our model-transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform models trained on real data.
arXiv Detail & Related papers (2022-06-13T13:08:41Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)