Anatomy-Preserving Latent Diffusion for Generation of Brain Segmentation Masks with Ischemic Infarct
- URL: http://arxiv.org/abs/2602.10167v1
- Date: Tue, 10 Feb 2026 12:50:54 GMT
- Title: Anatomy-Preserving Latent Diffusion for Generation of Brain Segmentation Masks with Ischemic Infarct
- Authors: Lucia Borrego, Vajira Thambawita, Marco Ciuffreda, Ines del Val, Alejandro Dominguez, Josep Munuera,
- Abstract summary: We propose an anatomy-preserving generative framework for the unconditional synthesis of brain segmentation masks. The proposed approach combines a variational autoencoder trained exclusively on segmentation masks to learn an anatomical latent representation. Results show that the generated masks preserve global brain anatomy, discrete tissue semantics, and realistic variability.
- Score: 35.66445631291783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The scarcity of high-quality segmentation masks remains a major bottleneck for medical image analysis, particularly in non-contrast CT (NCCT) neuroimaging, where manual annotation is costly and variable. To address this limitation, we propose an anatomy-preserving generative framework for the unconditional synthesis of multi-class brain segmentation masks, including ischemic infarcts. The proposed approach combines a variational autoencoder trained exclusively on segmentation masks to learn an anatomical latent representation, with a diffusion model operating in this latent space to generate new samples from pure noise. At inference, synthetic masks are obtained by decoding denoised latent vectors through the frozen VAE decoder, with optional coarse control over lesion presence via a binary prompt. Qualitative results show that the generated masks preserve global brain anatomy, discrete tissue semantics, and realistic variability, while avoiding the structural artifacts commonly observed in pixel-space generative models. Overall, the proposed framework offers a simple and scalable solution for anatomy-aware mask generation in data-scarce medical imaging scenarios.
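The inference procedure the abstract describes (denoise a latent vector from pure noise, then decode it through the frozen VAE decoder, with optional coarse control via a binary lesion prompt) can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: the decoder weights, the denoiser, and the prompt handling are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's components: a frozen "VAE decoder" mapping
# latents to a multi-class mask, and a denoiser applied over T reverse steps.
LATENT_DIM, NUM_CLASSES, H, W = 8, 4, 16, 16
W_dec = rng.normal(size=(LATENT_DIM, NUM_CLASSES * H * W))  # frozen weights

def decode(z):
    """Frozen decoder: latent -> per-class logits -> discrete mask."""
    logits = (z @ W_dec).reshape(NUM_CLASSES, H, W)
    return logits.argmax(axis=0)  # integer class label per pixel

def denoise_step(z_t, t, lesion_prompt):
    """Placeholder denoiser; a real model would predict the noise to remove.
    The binary lesion prompt enters as a coarse conditioning signal."""
    return 0.9 * z_t + 0.1 * lesion_prompt

def sample_mask(lesion=1, steps=50):
    z = rng.normal(size=LATENT_DIM)      # start from pure noise
    for t in reversed(range(steps)):
        z = denoise_step(z, t, lesion)   # iterative latent denoising
    return decode(z)                     # decode with the frozen VAE decoder

mask = sample_mask(lesion=1)             # a synthetic multi-class mask
```

Because the diffusion model operates entirely in the latent space and the decoder is frozen, every sample is guaranteed to land on the decoder's learned manifold of plausible anatomies, which is what suppresses the pixel-space artifacts the abstract mentions.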
Related papers
- Diffusion Model in Latent Space for Medical Image Segmentation Task [0.0]
MedSegLatDiff is a diffusion-based framework that combines a variational autoencoder (VAE) with a latent diffusion model for efficient medical image segmentation. It achieves state-of-the-art or highly competitive Dice and IoU scores while simultaneously generating diverse segmentation hypotheses and confidence maps.
arXiv Detail & Related papers (2025-12-01T05:26:43Z)
- Joint Holistic and Lesion Controllable Mammogram Synthesis via Gated Conditional Diffusion Model [12.360775476995169]
Gated Conditional Diffusion Model (GCDM) is a novel framework designed to jointly synthesize holistic mammogram images and localized lesions. GCDM achieves precise control over small lesion areas while enhancing the realism and diversity of synthesized mammograms.
arXiv Detail & Related papers (2025-07-25T12:10:45Z)
- Advancing Generalizable Tumor Segmentation with Anomaly-Aware Open-Vocabulary Attention Maps and Frozen Foundation Diffusion Models [11.774375458215193]
Generalizable tumor segmentation aims to train a single model for zero-shot tumor segmentation across diverse anatomical regions. DiffuGTS creates anomaly-aware open-vocabulary attention maps based on text prompts. Experiments on four datasets and seven tumor categories demonstrate the superior performance of our method.
arXiv Detail & Related papers (2025-05-05T16:05:37Z)
- PathSegDiff: Pathology Segmentation using Diffusion model representations [63.20694440934692]
We propose PathSegDiff, a novel approach for histopathology image segmentation that leverages Latent Diffusion Models (LDMs) as pre-trained feature extractors. Our method utilizes a pathology-specific LDM, guided by a self-supervised encoder, to extract rich semantic information from H&E stained histopathology images. Our experiments demonstrate significant improvements over traditional methods on the BCSS and GlaS datasets.
arXiv Detail & Related papers (2025-04-09T14:58:21Z)
- SimGen: A Diffusion-Based Framework for Simultaneous Surgical Image and Segmentation Mask Generation [1.9393128408121891]
While generative AI models like text-to-image can alleviate data scarcity, incorporating spatial annotations, such as segmentation masks, is crucial for precision-driven surgical applications, simulation, and education. This study introduces both a novel task and method, SimGen, for Simultaneous Image and Mask Generation. SimGen is a diffusion model based on the DDPM framework and Residual U-Net, designed to jointly generate high-fidelity surgical images and their corresponding segmentation masks.
arXiv Detail & Related papers (2025-01-15T18:48:38Z)
- Enhanced MRI Representation via Cross-series Masking [48.09478307927716]
A Cross-Series Masking (CSM) strategy is proposed for effectively learning MRI representations in a self-supervised manner. The method achieves state-of-the-art performance on both public and in-house datasets.
arXiv Detail & Related papers (2024-12-10T10:32:09Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models [11.835841459200632]
We propose a diffusion model-based method that supports anatomically-controllable medical image generation.
We additionally introduce a random mask ablation training algorithm to enable conditioning on a selected combination of anatomical constraints.
SegGuidedDiff reaches a new state-of-the-art in the faithfulness of generated images to input anatomical masks.
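The random mask ablation training idea can be sketched as follows. This is an illustrative toy, not the SegGuidedDiff code: during training, each anatomical class channel of the conditioning mask is independently kept or dropped, so the model learns to respect any subset of constraints. All names and the keep probability are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multi-class conditioning mask: one binary channel per anatomical class.
NUM_CLASSES, H, W = 5, 8, 8
mask = rng.integers(0, 2, size=(NUM_CLASSES, H, W))

def ablate(mask, keep_prob=0.5):
    """Randomly zero out whole class channels of the conditioning mask."""
    keep = rng.random(mask.shape[0]) < keep_prob  # per-class coin flip
    return mask * keep[:, None, None]             # drop ablated classes

# Each training step would condition the diffusion model on an ablated mask.
conditioning = ablate(mask)
```

At inference time the same model can then be driven by a full mask, a partial mask, or no mask at all, which is what "conditioning on a selected combination of anatomical constraints" refers to.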
arXiv Detail & Related papers (2024-02-07T19:35:09Z)
- Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images [39.94162291765236]
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine the Denoising Diffusion Probabilistic Model (DDPM) and the Denoising Diffusion Implicit Model (DDIM) at each step of the sampling process.
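The counterfactual idea above can be sketched in miniature: partially noise a diseased image (DDIM-style deterministic encoding), denoise it with a model that only knows healthy anatomy, and read the anomaly map off the pixel-wise residual. This is a hedged toy under stated assumptions, not the authors' method; the "denoiser" here is a trivial stand-in biased toward a healthy template.

```python
import numpy as np

rng = np.random.default_rng(1)

# Healthy reference and a diseased image with a synthetic "lesion".
healthy_template = np.zeros((8, 8))
diseased = healthy_template.copy()
diseased[2:4, 2:4] = 1.0

def ddim_encode(x, noise_level=0.5):
    """Deterministic partial noising (DDIM-style forward pass)."""
    return np.sqrt(1 - noise_level) * x + np.sqrt(noise_level) * rng.normal(size=x.shape)

def healthy_denoise(x_t):
    """Toy denoiser trained only on healthy anatomy (here: the template)."""
    return 0.2 * x_t + 0.8 * healthy_template

x_t = ddim_encode(diseased)                     # push toward noise
counterfactual = healthy_denoise(x_t)           # pull back to "healthy"
anomaly_map = np.abs(diseased - counterfactual) # pixel-wise anomaly score
```

The lesion region, which the healthy-only denoiser cannot reproduce, dominates the residual and thus the anomaly map.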
arXiv Detail & Related papers (2023-08-03T21:56:50Z)
- Disruptive Autoencoders: Leveraging Low-level Features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
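The disruption-then-reconstruction objective can be sketched as follows. This is an illustrative toy, not the authors' code: "disrupt" combines local masking with a low-level perturbation, and the "autoencoder" is a trivial smoothing stand-in whose only job is to make the reconstruction loss concrete.

```python
import numpy as np

rng = np.random.default_rng(2)

image = rng.uniform(size=(16, 16))  # toy 2D slice standing in for 3D data

def disrupt(x, patch=4, noise_std=0.1):
    """Mask one random local patch and add low-level pixel noise."""
    out = x + noise_std * rng.normal(size=x.shape)  # low-level perturbation
    i, j = rng.integers(0, x.shape[0] - patch, size=2)
    out[i:i + patch, j:j + patch] = 0.0             # local masking
    return out

def reconstruct(x_disrupted):
    """Placeholder 'autoencoder': a 3x3 box filter as a stand-in."""
    padded = np.pad(x_disrupted, 1, mode="edge")
    return np.array([[padded[r:r + 3, c:c + 3].mean()
                      for c in range(x_disrupted.shape[1])]
                     for r in range(x_disrupted.shape[0])])

corrupted = disrupt(image)
recon = reconstruct(corrupted)
loss = np.mean((recon - image) ** 2)  # pre-training reconstruction objective
```

In the actual framework a network is trained to minimize this reconstruction loss over 3D radiology volumes, forcing it to learn both local structure (from masking) and low-level texture (from the perturbations).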
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.