Synthesizing CTA Image Data for Type-B Aortic Dissection using Stable
Diffusion Models
- URL: http://arxiv.org/abs/2402.06969v1
- Date: Sat, 10 Feb 2024 14:59:37 GMT
- Title: Synthesizing CTA Image Data for Type-B Aortic Dissection using Stable
Diffusion Models
- Authors: Ayman Abaid, Muhammad Ali Farooq, Niamh Hynes, Peter Corcoran, and
Ihsan Ullah
- Abstract summary: Stable Diffusion (SD) has gained considerable attention in recent years in the field of Generative AI.
It has been shown that cardiac CTA images can be successfully generated using a Text-to-Image (T2I) stable diffusion model.
- Score: 0.993378200812519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stable Diffusion (SD) has gained considerable attention in recent years in
the field of Generative AI, enabling the synthesis of medical imaging data with
distinct features. The aim is to contribute to the ongoing effort focused on
overcoming the limitations of data scarcity and improving the capabilities of
ML algorithms for cardiovascular image processing. Therefore, in this study,
the possibility of generating synthetic cardiac CTA images was explored by
fine-tuning stable diffusion models based on user-defined text prompts, using
only a limited number of CTA images as input. A comprehensive evaluation of the
synthetic data was conducted by incorporating both quantitative analysis and
qualitative assessment, where a clinician assessed the quality of the generated
data. It has been shown that Cardiac CTA images can be successfully generated
using using Text to Image (T2I) stable diffusion model. The results demonstrate
that the tuned T2I CTA diffusion model was able to generate images with
features that are typically unique to acute type B aortic dissection (TBAD)
medical conditions.
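As a concrete illustration of the workflow described above, here is a minimal sketch of sampling synthetic cardiac CTA images from a fine-tuned Stable Diffusion checkpoint with the Hugging Face diffusers library. The checkpoint path and the prompt wording are illustrative assumptions; the abstract does not disclose the exact fine-tuning configuration or prompts used in the paper.

# Minimal sketch: sampling synthetic cardiac CTA images from a fine-tuned
# Stable Diffusion checkpoint. MODEL_DIR and the prompt are hypothetical,
# not the authors' published configuration.
import torch
from diffusers import StableDiffusionPipeline

MODEL_DIR = "path/to/cta-finetuned-stable-diffusion"  # assumed local checkpoint

pipe = StableDiffusionPipeline.from_pretrained(MODEL_DIR, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# User-defined prompt describing the target pathology.
prompt = "axial CTA slice showing acute type B aortic dissection with an intimal flap"

result = pipe(
    prompt,
    num_inference_steps=50,    # denoising steps
    guidance_scale=7.5,        # classifier-free guidance strength
    num_images_per_prompt=4,
)
for i, image in enumerate(result.images):
    image.save(f"synthetic_tbad_{i:03d}.png")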
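For the quantitative side of the evaluation, one commonly used check (not necessarily the metric reported in the paper) is the Fréchet Inception Distance between real and synthetic images, sketched below with torchmetrics.

# Hedged sketch: FID between real and synthetic CTA images.
# Requires torchmetrics with the torch-fidelity backend installed.
# Both tensors are expected as (N, 3, H, W) uint8 images in [0, 255].
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def compute_fid(real_images: torch.Tensor, fake_images: torch.Tensor) -> float:
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_images, real=True)
    fid.update(fake_images, real=False)
    return float(fid.compute())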
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - An Organism Starts with a Single Pix-Cell: A Neural Cellular Diffusion for High-Resolution Image Synthesis [8.01395073111961]
We introduce a novel family of models termed Generative Cellular Automata (GeCA).
GeCAs are evaluated as an effective augmentation tool for retinal disease classification across two imaging modalities: Fundus and Optical Coherence Tomography (OCT).
In the context of OCT imaging, where data is scarce and the class distribution is inherently skewed, GeCA significantly boosts classification performance across 11 different ophthalmological conditions.
arXiv Detail & Related papers (2024-07-03T11:26:09Z) - Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning [15.234393268111845]
Non-contrast CT (NCCT) imaging may reduce image contrast and anatomical visibility, potentially increasing diagnostic uncertainty.
We propose a novel syncretic generative model based on the latent diffusion model for medical image translation (S²LDM).
S²LDM enhances the similarity between distinct image modalities via syncretic encoding and diffusing, promoting amalgamated information in the latent space and generating medical images with more detail in contrast-enhanced regions.
arXiv Detail & Related papers (2024-06-20T03:54:41Z) - Quantitative Characterization of Retinal Features in Translated OCTA [0.6664270117164767]
This study explores the feasibility of using generative machine learning (ML) to translate Optical Coherence Tomography (OCT) images into Optical Coherence Tomography Angiography (OCTA) images.
arXiv Detail & Related papers (2024-04-24T18:40:45Z) - EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided
Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z) - Medical Diffusion -- Denoising Diffusion Probabilistic Models for 3D
Medical Image Generation [0.6486409713123691]
We show that diffusion probabilistic models can synthesize high quality medical imaging data.
We provide quantitative measurements of their performance through a reader study with two medical experts.
We demonstrate that synthetic images can be used in a self-supervised pre-training and improve the performance of breast segmentation models when data is scarce.
arXiv Detail & Related papers (2022-11-07T08:37:48Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis framework to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
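To make the perturbation-consistency idea in the last entry concrete, here is a generic sketch of a mean-teacher-style consistency loss on unlabeled images. It is a simplified illustration under assumed student/teacher model interfaces, not the paper's exact relation-driven formulation.

# Generic sketch of perturbation-consistency regularization on unlabeled data,
# in the spirit of the self-ensembling entry above (simplified mean-teacher-style
# loss; not the paper's relation-driven objective).
import torch
import torch.nn.functional as F

def consistency_loss(student: torch.nn.Module,
                     teacher: torch.nn.Module,
                     unlabeled: torch.Tensor,
                     noise_std: float = 0.1) -> torch.Tensor:
    # Perturb the unlabeled batch with small Gaussian noise.
    perturbed = unlabeled + noise_std * torch.randn_like(unlabeled)
    with torch.no_grad():
        target = F.softmax(teacher(unlabeled), dim=1)    # teacher sees clean input
    log_pred = F.log_softmax(student(perturbed), dim=1)  # student sees perturbed input
    # Penalize divergence between the two predictions.
    return F.kl_div(log_pred, target, reduction="batchmean")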