Deep Generative Model-Based Generation of Synthetic Individual-Specific Brain MRI Segmentations
- URL: http://arxiv.org/abs/2504.12352v2
- Date: Wed, 23 Apr 2025 18:53:03 GMT
- Title: Deep Generative Model-Based Generation of Synthetic Individual-Specific Brain MRI Segmentations
- Authors: Ruijie Wang, Luca Rossetto, Susan Mérillat, Christina Röcke, Mike Martin, Abraham Bernstein
- Abstract summary: We propose the first approach capable of generating synthetic brain MRI segmentations for individuals. Our approach features a novel deep generative model, CSegSynth, which outperforms existing prominent generative models. In assessing the effectiveness of the individual-specific generation, we achieve superior volume prediction, with mean absolute errors of only 36.44 mL, 29.20 mL, and 35.51 mL for WM, GM, and CSF, respectively.
- Score: 6.66216112298345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To the best of our knowledge, all existing methods that can generate synthetic brain magnetic resonance imaging (MRI) scans for a specific individual require detailed structural or volumetric information about the individual's brain. However, such brain information is often scarce, expensive, and difficult to obtain. In this paper, we propose the first approach capable of generating synthetic brain MRI segmentations -- specifically, 3D white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) segmentations -- for individuals using their easily obtainable and often readily available demographic, interview, and cognitive test information. Our approach features a novel deep generative model, CSegSynth, which outperforms existing prominent generative models, including conditional variational autoencoder (C-VAE), conditional generative adversarial network (C-GAN), and conditional latent diffusion model (C-LDM). We demonstrate the high quality of our synthetic segmentations through extensive evaluations. Also, in assessing the effectiveness of the individual-specific generation, we achieve superior volume prediction, with mean absolute errors of only 36.44mL, 29.20mL, and 35.51mL between the ground-truth WM, GM, and CSF volumes of test individuals and those volumes predicted based on generated individual-specific segmentations, respectively.
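As context for the reported errors, predicting a tissue volume from a generated segmentation reduces to counting labeled voxels and scaling by voxel size; the sketch below illustrates that arithmetic and the MAE metric. It is a minimal illustration, not the paper's pipeline: the label convention, voxel size, and toy numbers are assumptions.

```python
from collections import Counter

# Assumed label convention: 0 = background, 1 = WM, 2 = GM, 3 = CSF
LABELS = {"WM": 1, "GM": 2, "CSF": 3}

def tissue_volumes_ml(seg_flat, voxel_mm3=1.0):
    """Per-tissue volumes in mL from a flattened label volume (1 mL = 1000 mm^3)."""
    counts = Counter(seg_flat)
    return {name: counts[label] * voxel_mm3 / 1000.0
            for name, label in LABELS.items()}

def mean_absolute_error(pred, true):
    """MAE between per-subject predicted and ground-truth volumes (mL)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# Toy 1000-voxel segmentation of 1 mm^3 voxels: 500 WM, 300 GM, 200 background
seg = [1] * 500 + [2] * 300 + [0] * 200
vols = tissue_volumes_ml(seg)  # {"WM": 0.5, "GM": 0.3, "CSF": 0.0}
```

The paper's evaluation applies the same idea per test individual, comparing volumes derived from generated segmentations against ground truth.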
Related papers
- Generative Latent Representations of 3D Brain MRI for Multi-Task Downstream Analysis in Down Syndrome [3.344873290507966]
We develop variational autoencoders to encode 3D brain MRI scans into compact latent space representations for generative and predictive applications. Our results demonstrate that the VAE successfully captures essential brain features while maintaining high reconstruction fidelity.
arXiv Detail & Related papers (2026-02-14T11:50:57Z) - From Healthy Scans to Annotated Tumors: A Tumor Fabrication Framework for 3D Brain MRI Synthesis [3.295857224165814]
Tumor Fabrication (TF) is a novel two-stage framework for unpaired 3D brain tumor synthesis. TF is fully automated and leverages only healthy image scans along with a limited amount of real annotated data. We demonstrate that our synthetic image-label pairs used as data enrichment can significantly improve performance on downstream tumor segmentation tasks in low-data regimes.
arXiv Detail & Related papers (2025-11-23T23:28:49Z) - Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond) [90.45301024940329]
Language models (LMs) often struggle to generate diverse, human-like creative content. We introduce Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries. We present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect.
arXiv Detail & Related papers (2025-10-27T03:16:21Z) - Building a General SimCLR Self-Supervised Foundation Model Across Neurological Diseases to Advance 3D Brain MRI Diagnoses [2.4836875944302634]
We present a general, high-resolution SimCLR-based SSL foundation model for 3D brain structural MRI. Our model still achieves superior performance when fine-tuned using only 20% of labeled training samples for predicting Alzheimer's disease.
arXiv Detail & Related papers (2025-09-12T18:05:08Z) - Synthesizing Individualized Aging Brains in Health and Disease with Generative Models and Parallel Transport [3.43699245553078]
We introduce InBrainSyn, a framework for generating high-resolution, subject-specific longitudinal MRI scans that simulate Alzheimer's disease (AD) and normal aging. InBrainSyn uses a parallel transport algorithm to adapt the population-level aging trajectories learned by a generative deep template network. Overall, with only a single baseline scan, InBrainSyn synthesizes realistic 3D temporal T1w MRI scans, producing personalized longitudinal aging trajectories.
arXiv Detail & Related papers (2025-02-28T13:45:09Z) - Ensemble Learning and 3D Pix2Pix for Comprehensive Brain Tumor Analysis in Multimodal MRI [2.104687387907779]
This study presents an integrated approach leveraging the strengths of ensemble learning with hybrid transformer models and convolutional neural networks (CNNs).
Our methodology combines robust tumor segmentation capabilities, utilizing axial attention and transformer encoders for enhanced spatial relationship modeling.
The results demonstrate outstanding performance, evidenced by quantitative evaluations such as the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD95) for segmentation, and Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Mean-Square Error (MSE) for inpainting.
arXiv Detail & Related papers (2024-12-16T15:10:53Z) - MRGen: Segmentation Data Engine For Underrepresented MRI Modalities [59.61465292965639]
Training medical image segmentation models for rare yet clinically significant imaging modalities is challenging due to the scarcity of annotated data.
This paper investigates leveraging generative models to synthesize training data, to train segmentation models for underrepresented modalities.
arXiv Detail & Related papers (2024-12-04T16:34:22Z) - Maximizing domain generalization in fetal brain tissue segmentation: the role of synthetic data generation, intensity clustering and real image fine-tuning [1.1443262816483672]
Recent approaches based on domain randomization, like SynthSeg, have shown great potential for single-source domain generalization.
We show how to maximize the out-of-domain (OOD) generalization potential of SynthSeg-based methods in fetal brain MRI.
arXiv Detail & Related papers (2024-11-11T10:17:44Z) - Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [60.983327742457995]
Reconstructing the viewed images from human brain activity bridges human and computer vision through the Brain-Computer Interface.
We devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects.
arXiv Detail & Related papers (2024-03-29T07:16:34Z) - Empowering Healthcare through Privacy-Preserving MRI Analysis [3.6394715554048234]
We introduce the Ensemble-Based Federated Learning (EBFL) Framework.
The EBFL framework deviates from the conventional approach by emphasizing model features over sharing sensitive patient data.
We have achieved remarkable precision in the classification of brain tumors, including glioma, meningioma, pituitary, and non-tumor instances.
arXiv Detail & Related papers (2024-03-14T19:51:18Z) - UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z) - Automated Ensemble-Based Segmentation of Adult Brain Tumors: A Novel Approach Using the BraTS AFRICA Challenge Data [0.0]
We introduce an ensemble method that comprises eleven unique variations based on three core architectures.
Our findings reveal that the ensemble approach, combining different architectures, outperforms single models.
These results underline the potential of tailored deep learning techniques in precisely segmenting brain tumors.
arXiv Detail & Related papers (2023-08-14T15:34:22Z) - Brain Tumor Synthetic Data Generation with Adaptive StyleGANs [6.244557340851846]
We present a method to generate brain tumor MRI images using generative adversarial networks.
Results demonstrate that the proposed method can learn the distributions of brain tumors.
The approach addresses limited data availability by generating realistic-looking brain MRI scans with tumors.
arXiv Detail & Related papers (2022-12-04T09:01:33Z) - SMU-Net: Style matching U-Net for brain tumor segmentation with missing modalities [4.855689194518905]
We propose a style matching U-Net (SMU-Net) for brain tumour segmentation on MRI images.
Our co-training approach utilizes a content and style-matching mechanism to distill the informative features from the full-modality network into a missing modality network.
Our style matching module adaptively recalibrates the representation space by learning a matching function to transfer the informative and textural features from a full-modality path into a missing-modality path.
arXiv Detail & Related papers (2022-04-06T17:55:19Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
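The MultiView ICA entry above models each subject's data as a linear mixture of shared independent sources plus subject-specific noise. The sketch below samples data from that generative assumption; the dimensions, square mixing matrices, and noise scale are illustrative choices, not the paper's configuration.

```python
import random

random.seed(0)

def sample_multiview(n_subjects=3, n_sources=4, n_samples=5, noise_std=0.1):
    """Draw x_i = A_i @ s + n_i for each subject i: shared independent
    sources s, subject-specific mixing A_i, additive Gaussian noise n_i.
    For simplicity the observed dimension equals the number of sources."""
    # Shared sources: n_sources x n_samples, independent standard normal entries
    s = [[random.gauss(0, 1) for _ in range(n_samples)] for _ in range(n_sources)]
    views = []
    for _ in range(n_subjects):
        # Subject-specific square mixing matrix A_i
        A = [[random.gauss(0, 1) for _ in range(n_sources)] for _ in range(n_sources)]
        # x_i = A_i @ s + noise
        x = [[sum(A[r][k] * s[k][c] for k in range(n_sources))
              + random.gauss(0, noise_std)
              for c in range(n_samples)] for r in range(n_sources)]
        views.append(x)
    return s, views

s, views = sample_multiview()
```

Inference in the paper runs in the opposite direction: given the observed views, it recovers the shared sources common to all subjects.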
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.