Synthesizing CTA Image Data for Type-B Aortic Dissection using Stable
Diffusion Models
- URL: http://arxiv.org/abs/2402.06969v1
- Date: Sat, 10 Feb 2024 14:59:37 GMT
- Title: Synthesizing CTA Image Data for Type-B Aortic Dissection using Stable
Diffusion Models
- Authors: Ayman Abaid, Muhammad Ali Farooq, Niamh Hynes, Peter Corcoran, and
Ihsan Ullah
- Abstract summary: Stable Diffusion (SD) has gained considerable attention in recent years in the field of Generative AI.
It has been shown that cardiac CTA images can be successfully generated using a Text-to-Image (T2I) stable diffusion model.
- Score: 0.993378200812519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stable Diffusion (SD) has gained considerable attention in recent years in the field of Generative AI, helping to synthesize medical imaging data with distinct features. The aim is to contribute to the ongoing effort to overcome data scarcity and to improve the capabilities of ML algorithms for cardiovascular image processing. Therefore, in this study, the possibility of generating synthetic cardiac CTA images was explored by fine-tuning stable diffusion models based on user-defined text prompts, using only a limited number of CTA images as input. A comprehensive evaluation of the synthetic data was conducted, incorporating both quantitative analysis and a qualitative assessment in which a clinician judged the quality of the generated data. The results show that cardiac CTA images can be successfully generated using a Text-to-Image (T2I) stable diffusion model, and that the tuned T2I CTA diffusion model was able to generate images with features typically unique to the acute type B aortic dissection (TBAD) medical condition.
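As a concrete illustration of the T2I generation step described in the abstract, the following is a minimal sketch using the Hugging Face diffusers library to sample synthetic CTA-style images from a fine-tuned Stable Diffusion checkpoint. The checkpoint directory, prompt wording, and sampling settings are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' pipeline): sample synthetic CTA-style images
# from a fine-tuned text-to-image Stable Diffusion checkpoint using diffusers.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to a Stable Diffusion model fine-tuned on a small CTA set.
pipe = StableDiffusionPipeline.from_pretrained(
    "./sd-cta-tbad-finetuned",
    torch_dtype=torch.float16,
).to("cuda")

# User-defined text prompt describing the target pathology (illustrative wording).
prompt = "axial cardiac CTA slice showing acute type B aortic dissection with an intimal flap"

# Generate a small batch of synthetic images for downstream augmentation.
images = pipe(
    prompt,
    num_images_per_prompt=4,
    num_inference_steps=50,
    guidance_scale=7.5,
).images

for i, img in enumerate(images):
    img.save(f"synthetic_tbad_cta_{i}.png")
```

For the quantitative part of such an evaluation, one common choice (assumed here, not confirmed by the abstract) is to compare the distributions of real and synthetic images with a metric such as the Frechet Inception Distance, alongside the clinician's qualitative review.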
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then used in the second stage to tune the diffusion model by assigning a per-pixel confidence map to each image.
arXiv Detail & Related papers (2025-01-21T18:56:41Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Longitudinal Causal Image Synthesis [19.07839779249869]
Clinical decision-making relies heavily on causal reasoning and longitudinal analysis.
How will brain grey matter atrophy over a year if the A-beta level in cerebrospinal fluid is intervened on?
arXiv Detail & Related papers (2024-10-23T09:13:11Z)
- An Organism Starts with a Single Pix-Cell: A Neural Cellular Diffusion for High-Resolution Image Synthesis [8.01395073111961]
We introduce a novel family of models termed Generative Cellular Automata (GeCA).
GeCAs are evaluated as an effective augmentation tool for retinal disease classification across two imaging modalities: Fundus and Optical Coherence Tomography (OCT).
In the context of OCT imaging, where data is scarce and the class distribution is inherently skewed, GeCA significantly boosts classification performance across 11 different ophthalmological conditions.
arXiv Detail & Related papers (2024-07-03T11:26:09Z)
- Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning [15.234393268111845]
Non-contrast CT (NCCT) imaging may reduce image contrast and anatomical visibility, potentially increasing diagnostic uncertainty.
We propose a novel Syncretic generative model based on the latent diffusion model for medical image translation (S$^2$LDM).
S$^2$LDM enhances the similarity in distinct modal images via syncretic encoding and diffusing, promoting amalgamated information in the latent space and generating medical images with more details in contrast-enhanced regions.
arXiv Detail & Related papers (2024-06-20T03:54:41Z)
- Quantitative Characterization of Retinal Features in Translated OCTA [0.6664270117164767]
This study explores the feasibility of using generative machine learning (ML) to translate Optical Coherence Tomography (OCT) images into Optical Coherence Tomography Angiography (OCTA) images.
arXiv Detail & Related papers (2024-04-24T18:40:45Z)
- Medical Diffusion -- Denoising Diffusion Probabilistic Models for 3D Medical Image Generation [0.6486409713123691]
We show that diffusion probabilistic models can synthesize high quality medical imaging data.
We provide quantitative measurements of their performance through a reader study with two medical experts.
We demonstrate that synthetic images can be used in a self-supervised pre-training and improve the performance of breast segmentation models when data is scarce.
arXiv Detail & Related papers (2022-11-07T08:37:48Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)