Synthesizing Diabetic Foot Ulcer Images with Diffusion Model
- URL: http://arxiv.org/abs/2310.20140v1
- Date: Tue, 31 Oct 2023 03:15:30 GMT
- Title: Synthesizing Diabetic Foot Ulcer Images with Diffusion Model
- Authors: Reza Basiri, Karim Manji, Francois Harton, Alisha Poonja, Milos R.
Popovic, Shehroz S. Khan
- Abstract summary: Diabetic Foot Ulcer (DFU) is a serious skin wound requiring specialized care.
In recent years, generative adversarial networks and diffusion models have emerged as powerful tools for generating synthetic images.
This paper explores the potential of diffusion models for synthesizing DFU images and evaluates their authenticity through expert clinician assessments.
- Score: 1.8699569122464073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diabetic Foot Ulcer (DFU) is a serious skin wound requiring specialized care.
However, real DFU datasets are limited, hindering clinical training and
research activities. In recent years, generative adversarial networks and
diffusion models have emerged as powerful tools for generating synthetic images
with remarkable realism and diversity in many applications. This paper explores
the potential of diffusion models for synthesizing DFU images and evaluates
their authenticity through expert clinician assessments. Additionally,
evaluation metrics such as Frechet Inception Distance (FID) and Kernel
Inception Distance (KID) are examined to assess the quality of the synthetic
DFU images. A dataset of 2,000 DFU images is used for training the diffusion
model, and the synthetic images are generated by applying diffusion processes.
The results indicate that the diffusion model successfully synthesizes visually
indistinguishable DFU images: clinicians marked synthetic DFU images as real
70% of the time. However, clinicians demonstrated higher unanimous confidence
when rating real images than synthetic ones. The study also reveals
that FID and KID metrics do not significantly align with clinicians'
assessments, suggesting alternative evaluation approaches are needed. The
findings highlight the potential of diffusion models for generating synthetic
DFU images and their impact on medical training programs and research in wound
detection and classification.
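As a rough illustration of the distributional metrics the abstract mentions, the sketch below implements the unbiased MMD estimator behind Kernel Inception Distance (KID) with the standard cubic polynomial kernel. It operates on pre-extracted feature vectors; the feature extractor (normally an Inception network) is out of scope here. This is a minimal sketch for intuition, not the paper's evaluation code.

```python
import numpy as np

def kid_score(feats_real, feats_fake):
    """Unbiased squared-MMD estimate with the polynomial kernel used by KID.

    feats_real, feats_fake: (n_samples, n_features) arrays of image features.
    """
    d = feats_real.shape[1]
    # Cubic polynomial kernel: k(a, b) = (a.b / d + 1)^3
    k = lambda a, b: (a @ b.T / d + 1.0) ** 3
    m, n = len(feats_real), len(feats_fake)
    k_rr = k(feats_real, feats_real)
    k_ff = k(feats_fake, feats_fake)
    k_rf = k(feats_real, feats_fake)
    # Unbiased estimator: exclude the diagonal of the within-set kernel matrices.
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    term_rf = 2.0 * k_rf.mean()
    return float(term_rr + term_ff - term_rf)
```

For features drawn from the same distribution the score hovers near zero (it can go slightly negative, since the estimator is unbiased), and it grows as the real and synthetic feature distributions diverge.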
Related papers
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
- Diffusion Facial Forgery Detection [56.69763252655695]
This paper introduces DiFF, a comprehensive dataset dedicated to face-focused diffusion-generated images.
We conduct extensive experiments on the DiFF dataset via a human test and several representative forgery detection methods.
The results demonstrate that the binary detection accuracy of both human observers and automated detectors often falls below 30%.
arXiv Detail & Related papers (2024-01-29T03:20:19Z)
- On the notion of Hallucinations from the lens of Bias and Validity in Synthetic CXR Images [0.35998666903987897]
Generative models, such as diffusion models, aim to mitigate data quality and clinical information disparities.
At Stanford, researchers explored the utility of a fine-tuned Stable Diffusion model (RoentGen) for medical imaging data augmentation.
We leveraged RoentGen to produce synthetic Chest-XRay (CXR) images and conducted assessments on bias, validity, and hallucinations.
arXiv Detail & Related papers (2023-12-12T04:41:20Z)
- Improving Nonalcoholic Fatty Liver Disease Classification Performance With Latent Diffusion Models [0.0]
We show that by combining synthetic images, generated using diffusion models, with real images, we can enhance nonalcoholic fatty liver disease classification performance.
Our results show superior performance for the diffusion-generated images, with a maximum IS score of $1.90$ compared to $1.67$ for GANs, and a minimum FID score of $69.45$ compared to $100.05$ for GANs.
arXiv Detail & Related papers (2023-07-13T01:14:08Z)
- Venn Diagram Multi-label Class Interpretation of Diabetic Foot Ulcer with Color and Sharpness Enhancement [8.16095457838169]
DFU is a severe complication of diabetes that can lead to amputation of the lower limb if not treated properly.
We propose a Venn diagram interpretation of a multi-label CNN-based method, utilizing different image enhancement strategies, to improve multi-class DFU classification.
Our proposed approach outperforms existing approaches and achieves Macro-Average F1, Recall and Precision scores of 0.6592, 0.6593, and 0.6652, respectively.
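The macro-averaged scores this entry reports average the per-class metric over classes, so minority classes count as much as majority ones. A plain-Python sketch of macro-averaged F1 (illustrative only, not the authors' code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then average over classes."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    # Average per-class F1 scores with equal weight per class.
    return sum(f1s) / len(f1s)
```

For example, `macro_f1([0, 0, 1, 1], [0, 1, 1, 1])` averages an F1 of 2/3 on class 0 with 0.8 on class 1, giving roughly 0.733.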
arXiv Detail & Related papers (2023-05-01T19:06:28Z)
- DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification [32.67098520984195]
We propose the first diffusion-based model (named DiffMIC) to address general medical image classification.
Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2023-03-19T09:15:45Z)
- DIRE for Diffusion-Generated Image Detection [128.95822613047298]
We propose a novel representation called DIffusion Reconstruction Error (DIRE).
DIRE measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model.
This suggests that DIRE can serve as a bridge to distinguish generated images from real ones.
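The core of the idea can be sketched as a reconstruction-error score. Here `reconstruct` is a hypothetical stand-in for the paper's actual procedure (DDIM inversion followed by regeneration with a pre-trained diffusion model); any image-to-image callable works for illustration.

```python
import numpy as np

def dire_score(image, reconstruct):
    # DIRE: mean absolute error between the input image and its reconstruction.
    # `reconstruct` stands in for diffusion-model inversion + regeneration;
    # here it can be any callable mapping an image array to an image array.
    recon = reconstruct(image)
    return float(np.mean(np.abs(image.astype(np.float64) -
                                recon.astype(np.float64))))
```

The intuition from the abstract is that diffusion-generated images are reconstructed well by the model (low DIRE) while real images are not (high DIRE), so thresholding this score separates the two.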
arXiv Detail & Related papers (2023-03-16T13:15:03Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to cope with the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
- Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-task Learning [29.46896757506273]
Images annotated by well-qualified doctors are very expensive, and only a limited amount of data is available for various retinal diseases.
Some studies show that age-related macular degeneration (AMD) and diabetic retinopathy (DR) share common features such as hemorrhagic points and exudation, yet most classification algorithms train those disease models independently.
We propose a method called synergic adversarial label learning (SALL), which leverages relevant retinal disease labels in both semantic and feature space as additional signals and trains the model in a collaborative manner.
arXiv Detail & Related papers (2020-03-24T01:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.