seg2med: a segmentation-based medical image generation framework using denoising diffusion probabilistic models
- URL: http://arxiv.org/abs/2504.09182v1
- Date: Sat, 12 Apr 2025 11:32:32 GMT
- Title: seg2med: a segmentation-based medical image generation framework using denoising diffusion probabilistic models
- Authors: Zeyu Yang, Zhilin Chen, Yipeng Sun, Anika Strittmatter, Anish Raj, Ahmad Allababidi, Johann S. Rink, Frank G. Zöllner
- Abstract summary: seg2med is an advanced medical image synthesis framework. It generates high-quality synthetic medical images conditioned on anatomical masks from TotalSegmentator.
- Score: 5.92914320764123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we present seg2med, an advanced medical image synthesis framework that uses Denoising Diffusion Probabilistic Models (DDPM) to generate high-quality synthetic medical images conditioned on anatomical masks from TotalSegmentator. The framework synthesizes CT and MR images from segmentation masks derived from real patient data and XCAT digital phantoms, achieving a Structural Similarity Index Measure (SSIM) of 0.94 +/- 0.02 for CT and 0.89 +/- 0.04 for MR images compared to ground-truth images of real patients. It also achieves a Feature Similarity Index Measure (FSIM) of 0.78 +/- 0.04 for CT images from XCAT. The generative quality is further supported by a Fréchet Inception Distance (FID) of 3.62 for CT image generation. Additionally, seg2med can generate paired CT and MR images with consistent anatomical structures and convert images between CT and MR modalities, achieving SSIM values of 0.91 +/- 0.03 for MR-to-CT and 0.77 +/- 0.04 for CT-to-MR conversion. Despite the limitations of incomplete anatomical details in segmentation masks, the framework shows strong performance in cross-modality synthesis and multimodal imaging. seg2med also demonstrates high anatomical fidelity in CT synthesis, achieving a mean Dice coefficient greater than 0.90 for 11 abdominal organs and greater than 0.80 for 34 of 59 organs in 58 test cases. The highest Dice of 0.96 +/- 0.01 was recorded for the right scapula. Leveraging the TotalSegmentator toolkit, seg2med enables segmentation mask generation across diverse datasets, supporting applications in clinical imaging, data augmentation, multimodal synthesis, and diagnostic algorithm development.
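The abstract describes the core mechanism only at a high level: a DDPM whose denoiser is conditioned on anatomical masks. The snippet below is a minimal, hypothetical sketch of one standard way to implement such conditioning, by concatenating the mask (and a broadcast timestep channel) to the noisy image before the noise-prediction network; the tiny network, the linear noise schedule, and all shapes are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code): conditioning a DDPM denoiser on a
# multi-label anatomical mask by concatenating it to the noisy image channel-wise.
# The tiny ConvNet, the schedule, and the tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class MaskConditionedDenoiser(nn.Module):
    """Predicts the noise added to x_t, given x_t, the timestep, and a mask."""
    def __init__(self, mask_channels: int, hidden: int = 64):
        super().__init__()
        # 1 image channel + mask channels + 1 broadcast timestep channel
        self.net = nn.Sequential(
            nn.Conv2d(1 + mask_channels + 1, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, x_t, t, mask):
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, mask, t_map], dim=1))

def training_step(model, x0, mask):
    """One DDPM training step: add noise at a random t, predict it, MSE loss."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward diffusion
    return F.mse_loss(model(x_t, t, mask), eps)

# Toy usage: a 2-organ one-hot mask conditioning a 64x64 "CT" slice.
model = MaskConditionedDenoiser(mask_channels=2)
x0 = torch.randn(4, 1, 64, 64)                            # placeholder image batch
mask = torch.randint(0, 2, (4, 2, 64, 64)).float()        # placeholder one-hot masks
loss = training_step(model, x0, mask)
loss.backward()
```

In a setup like this, sampling re-uses the same mask at every reverse-diffusion step, which is one way paired CT and MR images with consistent anatomy can be obtained from a single segmentation.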
Related papers
- Multi-Layer Feature Fusion with Cross-Channel Attention-Based U-Net for Kidney Tumor Segmentation [0.0]
U-Net based deep learning techniques are emerging as a promising approach for automated medical image segmentation.
We present an improved U-Net based model for end-to-end automated semantic segmentation of CT scan images to identify renal tumors.
arXiv Detail & Related papers (2024-10-20T19:02:41Z)
- Improved 3D Whole Heart Geometry from Sparse CMR Slices [3.701571763780745]
Cardiac magnetic resonance (CMR) imaging and computed tomography (CT) are two common non-invasive imaging methods for assessing patients with cardiovascular disease.
CMR typically acquires multiple sparse 2D slices, with unavoidable respiratory motion artefacts between slices, whereas CT acquires isotropic dense data but uses ionising radiation.
We explore the combination of Slice Shifting Algorithm (SSA), Spatial Transformer Network (STN), and Label Transformer Network (LTN) to: 1) correct respiratory motion between segmented slices, and 2) transform sparse segmentation data into dense segmentation.
arXiv Detail & Related papers (2024-08-14T13:03:48Z)
- Deep learning-based brain segmentation model performance validation with clinical radiotherapy CT [0.0]
This study validates the SynthSeg robust brain segmentation model on computed tomography (CT).
Brain segmentations from CT and MRI were obtained with the SynthSeg model, a component of the FreeSurfer imaging suite.
CT performance is lower than MRI based on the integrated QC scores, but low-quality segmentations can be excluded with QC-based thresholding.
arXiv Detail & Related papers (2024-06-25T09:56:30Z)
- TotalSegmentator MRI: Robust Sequence-independent Segmentation of Multiple Anatomic Structures in MRI [59.86827659781022]
An nnU-Net model (TotalSegmentator) was trained on MRI to segment 80 anatomic structures. Dice scores were calculated between the predicted segmentations and expert reference standard segmentations to evaluate model performance. The open-source, easy-to-use model allows for automatic, robust segmentation of 80 structures.
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
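The Dice evaluation mentioned in the entry above (and the per-organ Dice values reported for seg2med) is a simple overlap ratio between a predicted and a reference segmentation. The sketch below is a generic NumPy implementation, not code from either paper; the label IDs and the smoothing term are assumptions.

```python
# Generic per-label Dice similarity coefficient (DSC), as used to compare a
# predicted segmentation against a reference standard. Not taken from any of the
# cited papers; label IDs and the smoothing term eps are assumptions.
import numpy as np

def dice_per_label(pred: np.ndarray, ref: np.ndarray, labels, eps: float = 1e-8):
    """Return {label: DSC} for integer-labeled segmentation volumes of equal shape."""
    scores = {}
    for lab in labels:
        p = pred == lab
        r = ref == lab
        intersection = np.logical_and(p, r).sum()
        scores[lab] = (2.0 * intersection + eps) / (p.sum() + r.sum() + eps)
    return scores

# Toy example: two 3D label maps with labels 1 (e.g. liver) and 2 (e.g. spleen).
rng = np.random.default_rng(0)
ref = rng.integers(0, 3, size=(16, 64, 64))
pred = ref.copy()
pred[:4] = rng.integers(0, 3, size=(4, 64, 64))   # corrupt part of the prediction
print(dice_per_label(pred, ref, labels=[1, 2]))
```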
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.76736949127792]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas. The top-ranked team had a lesion-wise median Dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
arXiv Detail & Related papers (2024-05-16T03:23:57Z)
- MRSegmentator: Multi-Modality Segmentation of 40 Classes in MRI and CT [29.48170108608303]
The model was trained on 1,200 manually annotated 3D axial MRI scans from the UK Biobank, 221 in-house MRI scans, and 1,228 CT scans.
It demonstrated high accuracy for well-defined organs (lungs: DSC 0.96, heart: DSC 0.94) and organs with anatomic variability (liver: DSC 0.96, kidneys: DSC 0.95).
It generalized well to CT, achieving a mean DSC of 0.84 ± 0.11 on AMOS CT data.
arXiv Detail & Related papers (2024-05-10T13:15:42Z)
- Minimally Interactive Segmentation of Soft-Tissue Tumors on CT and MRI using Deep Learning [0.0]
We develop a minimally interactive deep learning-based segmentation method for soft-tissue tumors (STTs) on CT and MRI.
The method requires the user to click six points near the tumor's extreme boundaries to serve as input for a Convolutional Neural Network.
arXiv Detail & Related papers (2024-02-12T16:15:28Z)
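For the click-based method above, the six boundary points must be encoded as network input. A common choice, assumed here for illustration and not necessarily the paper's exact encoding, is to render each click as a Gaussian blob in an extra input channel stacked with the image.

```python
# Sketch of one common way to feed boundary clicks to a CNN: render each click as a
# Gaussian blob in an extra input channel. The sigma and the channel-stacking
# convention are illustrative assumptions, not the cited paper's implementation.
import numpy as np

def clicks_to_heatmap(clicks, shape, sigma: float = 5.0) -> np.ndarray:
    """clicks: iterable of (row, col); returns a float heatmap of the given 2D shape."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heatmap = np.zeros(shape, dtype=np.float32)
    for r, c in clicks:
        blob = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, blob)
    return heatmap

# Toy usage: six clicks around a tumor in a 128x128 slice, stacked with the image.
image = np.zeros((128, 128), dtype=np.float32)          # placeholder CT/MR slice
clicks = [(40, 64), (90, 64), (64, 30), (64, 100), (50, 45), (80, 85)]
network_input = np.stack([image, clicks_to_heatmap(clicks, image.shape)], axis=0)
print(network_input.shape)  # (2, 128, 128): image channel + click channel
```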
- A Two-Stage Generative Model with CycleGAN and Joint Diffusion for MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework, the Two-Stage Generative Model (TSGM), to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z)
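Reconstruction-guided frameworks of this kind typically localize tumors by comparing an input image with its "healthy" reconstruction. The sketch below illustrates that general residual-thresholding idea only; the placeholder `reconstruct_healthy` function and the threshold are assumptions, not the TSGM pipeline.

```python
# Generic reconstruction-residual anomaly map: compare an input image with a
# "healthy" reconstruction and threshold the absolute difference. Illustration of
# the general idea only; reconstruct_healthy is a placeholder for whatever
# generative model (e.g. a diffusion model) produces the reconstruction.
import numpy as np

def reconstruct_healthy(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run its generative model here.
    return np.clip(image, 0.0, 0.5)

def anomaly_map(image: np.ndarray, threshold: float = 0.2):
    residual = np.abs(image - reconstruct_healthy(image))
    return residual, residual > threshold     # continuous map and binary mask

# Toy usage on a synthetic slice with a bright "lesion".
image = np.full((64, 64), 0.3, dtype=np.float32)
image[20:30, 20:30] = 0.9
residual, mask = anomaly_map(image)
print(mask.sum())   # number of pixels flagged as anomalous
```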
- Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via Volumetric Pseudo-Labeling [66.75096111651062]
We created a large-scale dataset of 10,021 thoracic CTs with 157 labels.
We applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels.
Our resulting segmentation models demonstrated remarkable performance on CXR.
arXiv Detail & Related papers (2023-06-06T18:01:08Z)
- TotalSegmentator: robust segmentation of 104 anatomical structures in CT images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z)
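Because seg2med conditions on TotalSegmentator masks, a minimal example of producing such masks may be helpful. The call below follows the publicly documented TotalSegmentator Python API, but argument names and defaults vary between versions, so treat it as a sketch rather than a verified invocation; the file paths are placeholders.

```python
# Sketch: obtaining anatomical masks with TotalSegmentator, which seg2med uses as
# conditioning input. Paths are placeholders; exact API/CLI options depend on the
# installed TotalSegmentator version (pip install totalsegmentator).
# CLI equivalent (also version-dependent):  TotalSegmentator -i ct.nii.gz -o segmentations
from totalsegmentator.python_api import totalsegmentator

ct_path = "ct.nii.gz"            # input CT volume (placeholder path)
out_dir = "segmentations"        # one NIfTI mask per structure is written here

totalsegmentator(ct_path, out_dir)
```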
- Negligible effect of brain MRI data preprocessing for tumor segmentation [36.89606202543839]
We conduct experiments on three publicly available datasets and evaluate the effect of different preprocessing steps in deep neural networks.
Our results demonstrate that most popular standardization steps add no value to the network performance.
We suggest that image intensity normalization approaches do not contribute to model accuracy because of the reduction of signal variance with image standardization.
arXiv Detail & Related papers (2022-04-11T17:29:36Z)
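The "standardization" examined above usually refers to per-volume z-scoring of intensities. The snippet below shows that operation in its simplest form; the optional background masking is an assumption, not a step prescribed by the paper.

```python
# Per-volume z-score intensity standardization, the kind of preprocessing step whose
# effect the study above measures. Masking out background before computing the
# statistics is a common but optional choice (an assumption here).
import numpy as np

def zscore_standardize(volume, mask=None):
    voxels = volume[mask] if mask is not None else volume
    mean, std = voxels.mean(), voxels.std()
    return (volume - mean) / (std + 1e-8)

# Toy usage on a random "MRI" volume, standardizing over nonzero voxels only.
volume = np.random.default_rng(0).normal(100.0, 20.0, size=(32, 64, 64))
standardized = zscore_standardize(volume, mask=volume > 0)
print(standardized.mean(), standardized.std())
```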
- iPhantom: a framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry [58.943644554192936]
This study aims to develop and validate a novel framework, iPhantom, for automated creation of patient-specific phantoms or digital twins.
The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients.
iPhantom predicted all organ locations with good accuracy: Dice Similarity Coefficients (DSC) > 0.6 for anchor organs and DSC of 0.3-0.9 for all other organs.
arXiv Detail & Related papers (2020-08-20T01:50:49Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by 4.2% to 9.4%.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)