Synthesising Rare Cataract Surgery Samples with Guided Diffusion Models
- URL: http://arxiv.org/abs/2308.02587v1
- Date: Thu, 3 Aug 2023 18:09:26 GMT
- Title: Synthesising Rare Cataract Surgery Samples with Guided Diffusion Models
- Authors: Yannik Frisch, Moritz Fuchs, Antoine Sanner, Felix Anton Ucar, Marius
Frenzel, Joana Wasielica-Poslednik, Adrian Gericke, Felix Mathias Wagner,
Thomas Dratsch, Anirban Mukhopadhyay
- Abstract summary: We analyse cataract surgery video data for the worst-performing phases of a pre-trained downstream tool classifier.
Our model can synthesise diverse, high-quality examples based on complex multi-class multi-label conditions.
Our synthetically extended data can alleviate the data sparsity problem for the downstream task of tool classification.
- Score: 0.7577401420358975
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Cataract surgery is a frequently performed procedure that demands automation
and advanced assistance systems. However, gathering and annotating data for
training such systems is resource intensive. The publicly available data also
comprises severe imbalances inherent to the surgical process. Motivated by
this, we analyse cataract surgery video data for the worst-performing phases of
a pre-trained downstream tool classifier. The analysis demonstrates that
imbalances deteriorate the classifier's performance on underrepresented cases.
To address this challenge, we utilise a conditional generative model based on
Denoising Diffusion Implicit Models (DDIM) and Classifier-Free Guidance (CFG).
Our model can synthesise diverse, high-quality examples based on complex
multi-class multi-label conditions, such as surgical phases and combinations of
surgical tools. We affirm that the synthesised samples display tools that the
classifier recognises. These samples are hard to differentiate from real
images, even for clinical experts with more than five years of experience.
Further, our synthetically extended data can alleviate the data sparsity problem
for the downstream task of tool classification. The evaluations demonstrate
that the model can generate valuable unseen examples, allowing the tool
classifier to improve by up to 10% for rare cases. Overall, our approach can
facilitate the development of automated assistance systems for cataract surgery
by providing a reliable source of realistic synthetic data, which we make
available for everyone.
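To make the described pipeline concrete, below is a minimal sketch of how Classifier-Free Guidance can be combined with deterministic DDIM sampling under a multi-label condition. This is not the authors' released code: the noise-prediction network `eps_model`, the multi-hot condition encoding, the null condition, and the guidance scale are all illustrative assumptions.

```python
import torch

@torch.no_grad()
def ddim_cfg_sample(eps_model, cond, shape, alphas_cumprod, timesteps, guidance_scale=3.0):
    """Sample with classifier-free guidance along a DDIM sub-sequence.

    eps_model(x, t, c): hypothetical noise-prediction network
    cond:               (B, C) multi-hot surgical phase / tool condition (assumed encoding)
    alphas_cumprod:     1-D tensor of cumulative alphas, indexed by integer timestep
    timesteps:          ascending list of integer timesteps to visit (DDIM sub-sequence)
    """
    device = cond.device
    x = torch.randn(shape, device=device)                  # start from pure Gaussian noise
    null_cond = torch.zeros_like(cond)                      # "empty" condition for the unconditional pass
    for i, t in enumerate(reversed(timesteps)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps_c = eps_model(x, t_batch, cond)                  # conditional noise estimate
        eps_u = eps_model(x, t_batch, null_cond)             # unconditional noise estimate
        eps = eps_u + guidance_scale * (eps_c - eps_u)       # classifier-free guidance
        a_t = alphas_cumprod[t]
        prev_idx = len(timesteps) - 2 - i                    # index of the next (lower) timestep
        a_prev = alphas_cumprod[timesteps[prev_idx]] if prev_idx >= 0 else torch.tensor(1.0, device=device)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # predicted clean image
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps   # deterministic DDIM update (eta = 0)
    return x
```

In standard classifier-free guidance training, the condition is dropped at random and replaced by the null condition, so a single network provides both the conditional and unconditional estimates used above.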
Related papers
- SurgicaL-CD: Generating Surgical Images via Unpaired Image Translation with Latent Consistency Diffusion Models [1.6189876649941652]
We introduce SurgicaL-CD, a consistency-distilled diffusion method to generate realistic surgical images.
Our results demonstrate that our method outperforms GANs and diffusion-based approaches.
arXiv Detail & Related papers (2024-08-19T09:19:25Z)
- Synthetic Image Learning: Preserving Performance and Preventing Membership Inference Attacks [5.0243930429558885]
This paper introduces Knowledge Recycling (KR), a pipeline designed to optimise the generation and use of synthetic data for training downstream classifiers.
At the heart of this pipeline is Generative Knowledge Distillation (GKD), the proposed technique that significantly improves the quality and usefulness of the information the synthetic data provides to the downstream classifier.
The results show a significant reduction in the performance gap between models trained on real and synthetic data, with models based on synthetic data outperforming those trained on real data in some cases.
arXiv Detail & Related papers (2024-07-22T10:31:07Z)
- TSynD: Targeted Synthetic Data Generation for Enhanced Medical Image Classification [0.011037620731410175]
This work aims to guide the generative model to synthesize data with high uncertainty.
We alter the feature space of the autoencoder through an optimization process.
We improve the robustness against test-time data augmentations and adversarial attacks on several classification tasks.
arXiv Detail & Related papers (2024-06-25T11:38:46Z)
- Improving Deep Learning-based Automatic Cranial Defect Reconstruction by Heavy Data Augmentation: From Image Registration to Latent Diffusion Models [0.2911706166691895]
The work is a considerable contribution to the field of artificial intelligence in the automatic modeling of personalized cranial implants.
We show that the use of heavy data augmentation significantly increases both the quantitative and qualitative outcomes.
We also show that the synthetically augmented network successfully reconstructs real clinical defects.
arXiv Detail & Related papers (2024-06-10T15:34:23Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these cause shifts in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to make model performance robust to varying image artifacts (see the normalization sketch after this list).
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Semantic Latent Space Regression of Diffusion Autoencoders for Vertebral Fracture Grading [72.45699658852304]
This paper proposes a novel approach to train a generative Diffusion Autoencoder model as an unsupervised feature extractor.
We model fracture grading as a continuous regression, which is more reflective of the smooth progression of fractures.
Importantly, the generative nature of our method allows us to visualize different grades of a given vertebra, providing interpretability and insight into the features that contribute to automated grading.
arXiv Detail & Related papers (2023-03-21T17:16:01Z)
- Unified Framework for Histopathology Image Augmentation and Classification via Generative Models [6.404713841079193]
We propose a unified framework that integrates the data generation and model training stages into a single process.
Our approach utilizes a pure Vision Transformer (ViT)-based conditional Generative Adversarial Network (cGAN) model to simultaneously handle both image synthesis and classification.
Our experiments show that our unified synthetic augmentation framework consistently enhances the performance of histopathology image classification models.
arXiv Detail & Related papers (2022-12-20T03:40:44Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification (a minimal sketch of its classification rule follows this list).
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise (written out as an equation after this list).
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
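For the normalization-robustness entry above, the following is a minimal illustrative sketch of the kind of change it advocates: replacing batch statistics with per-sample Group/Layer Normalization inside a convolutional block. The architecture, channel counts, and group count are arbitrary choices, not taken from that paper.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, norm: str = "group", num_groups: int = 8) -> nn.Sequential:
    """3x3 conv + normalization + ReLU; `norm` selects the normalization scheme."""
    if norm == "batch":
        norm_layer = nn.BatchNorm2d(out_ch)            # batch statistics; sensitive to train/test distribution shift
    elif norm == "group":
        norm_layer = nn.GroupNorm(num_groups, out_ch)  # per-sample statistics over channel groups
    else:  # "layer": a single group normalizes over all channels and spatial positions
        norm_layer = nn.GroupNorm(1, out_ch)
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), norm_layer, nn.ReLU(inplace=True))

x = torch.randn(2, 1, 64, 64)                          # e.g. a pair of single-channel MR slices
print(conv_block(1, 16, norm="group")(x).shape)        # torch.Size([2, 16, 64, 64])
```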
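For the Select-ProtoNet entry, this is a hypothetical sketch of the Prototypical-Network classification rule it builds on: class prototypes are the mean support embeddings, and queries are scored by negative distance to each prototype. The embedding dimension and shot counts are invented for the example.

```python
import torch

def proto_logits(support_emb, support_labels, query_emb, num_classes):
    """support_emb: (N, D); support_labels: (N,); query_emb: (Q, D) -> logits of shape (Q, num_classes)."""
    prototypes = torch.stack([support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)])
    dists = torch.cdist(query_emb, prototypes)       # Euclidean distance to each class prototype
    return -dists                                    # negative distance serves as the class logit

support = torch.randn(10, 64)                        # 5 classes x 2 shots, 64-d embeddings
labels = torch.arange(5).repeat_interleave(2)
queries = torch.randn(3, 64)
print(proto_logits(support, labels, queries, num_classes=5).argmax(dim=1))
```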
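Finally, the shared-response model of the MultiView ICA entry can be written compactly as below; the notation is inferred from the one-sentence summary rather than taken from the paper itself.

```latex
% x_i : data of subject i,  A_i : subject-specific mixing matrix,
% s   : independent sources shared across subjects,  n_i : subject-specific noise.
\[
  \mathbf{x}_i = A_i \,\mathbf{s} + \mathbf{n}_i , \qquad i = 1, \dots, N .
\]
```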