An unobtrusive quality supervision approach for medical image annotation
- URL: http://arxiv.org/abs/2211.06146v1
- Date: Fri, 11 Nov 2022 11:57:26 GMT
- Title: An unobtrusive quality supervision approach for medical image annotation
- Authors: Sonja Kunzmann, Mathias Öttl, Prathmesh Madhu, Felix Denzinger,
Andreas Maier
- Abstract summary: It is desirable for users to annotate unseen data while an automated system unobtrusively rates their performance.
We evaluate two methods for the generation of synthetic individual cell images: conditional Generative Adversarial Networks and Diffusion Models.
Users could not detect 52.12% of the generated images, proving the feasibility of replacing the original cells with synthetic cells without being noticed.
- Score: 8.203076178571576
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Image annotation is an essential prior step for enabling data-driven
algorithms. In medical imaging, large and reliably annotated data sets are
crucial for recognizing various diseases robustly. However, annotator
performance varies immensely and thus impacts model training. Therefore,
multiple annotators are often employed, which is, however, expensive and
resource-intensive. Hence, it is desirable for users to annotate unseen data
while an automated system unobtrusively rates their performance during this
process. We examine such a system based on whole slide images (WSIs) showing
lung fluid cells. We evaluate two methods for the generation of synthetic
individual cell images: conditional Generative Adversarial Networks and
Diffusion Models (DM). For qualitative and quantitative evaluation, we conduct
a user study to highlight the suitability of the generated cells. Users could
not detect 52.12% of the images generated by the DM, proving the feasibility
of replacing the original cells with synthetic cells without being noticed.
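The abstract names conditional GANs and Diffusion Models but gives no implementation details. As a purely illustrative sketch (not the authors' code), a class-conditional GAN generator for small cell images could look as follows; the latent dimension, layer sizes, image resolution, and cell-type label set are all assumptions:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Hypothetical class-conditional DCGAN-style generator for cell images."""
    def __init__(self, latent_dim=100, n_classes=5, img_size=64, channels=3):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, latent_dim)  # cell-type label -> embedding
        self.init_size = img_size // 4                         # upsampled twice by a factor of 2
        self.fc = nn.Linear(latent_dim, 128 * self.init_size ** 2)
        self.conv_blocks = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 128, 3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
            nn.Tanh(),                                         # pixel values in [-1, 1]
        )

    def forward(self, z, labels):
        # Inject the class condition by modulating the latent code with the label embedding.
        x = z * self.label_emb(labels)
        x = self.fc(x).view(x.size(0), 128, self.init_size, self.init_size)
        return self.conv_blocks(x)

# Example: draw a batch of synthetic images for a (hypothetical) cell class 2.
generator = ConditionalGenerator()
z = torch.randn(8, 100)
labels = torch.full((8,), 2, dtype=torch.long)
fake_cells = generator(z, labels)                              # shape: (8, 3, 64, 64)
```

In the study described above, synthetic cells of this kind (or diffusion-sampled counterparts) would presumably be mixed into real annotation batches so that an annotator's hit rate on them can be scored without interrupting the annotation workflow.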
Related papers
- Style transfer as data augmentation: evaluating unpaired image-to-image translation models in mammography [0.0]
Deep learning models can learn to detect breast cancer from mammograms.
However, challenges with overfitting and poor generalisability prevent their routine use in the clinic.
Data augmentation techniques can be used to improve generalisability.
arXiv Detail & Related papers (2025-02-04T16:52:45Z) - Trustworthy image-to-image translation: evaluating uncertainty calibration in unpaired training scenarios [0.0]
Mammographic screening is an effective method for detecting breast cancer, facilitating early diagnosis.
Deep neural networks have been shown effective in some studies, but their tendency to overfit leaves considerable risk for poor generalisation and misdiagnosis.
Data augmentation schemes based on unpaired neural style transfer models have been proposed that improve generalisability.
We evaluate their performance when trained on image patches parsed from three open access mammography datasets and one non-medical image dataset.
arXiv Detail & Related papers (2025-01-29T11:09:50Z) - DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then involved in the second stage to optimize the diffusion model by providing pixel-level feedback.
arXiv Detail & Related papers (2025-01-21T18:56:41Z) - Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned for medical images, fitting them to the complex task of counterfactual image generation.
We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z) - Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image detectors [62.63467652611788]
We introduce SEMI-TRUTHS, featuring 27,600 real images, 223,400 masks, and 1,472,700 AI-augmented images.
Each augmented image is accompanied by metadata for standardized and targeted evaluation of detector robustness.
Our findings suggest that state-of-the-art detectors exhibit varying sensitivities to the types and degrees of perturbations, data distributions, and augmentation methods used.
arXiv Detail & Related papers (2024-11-12T01:17:27Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - Ulcerative Colitis Mayo Endoscopic Scoring Classification with Active
Learning and Generative Data Augmentation [2.5241576779308335]
Deep learning based methods are effective in the automated analysis of endoscopic images and can potentially be used to aid medical doctors.
In this paper, we propose an active learning based generative augmentation method.
The method generates a large number of synthetic samples by training on a small dataset of real endoscopic images.
arXiv Detail & Related papers (2023-11-10T13:42:21Z) - EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided
Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z) - Boosting Dermatoscopic Lesion Segmentation via Diffusion Models with
Visual and Textual Prompts [27.222844687360823]
We adapt the latest advances in generative models, adding control flow using lesion-specific visual and textual prompts.
It achieves a 9% increase in the SSIM image quality measure and an over 5% increase in Dice coefficients over prior art.
arXiv Detail & Related papers (2023-10-04T15:43:26Z) - Realistic Data Enrichment for Robust Image Segmentation in
Histopathology [2.248423960136122]
We propose a new approach, based on diffusion models, which can enrich an imbalanced dataset with plausible examples from underrepresented groups.
Our method can readily expand limited clinical datasets, making them suitable for training machine learning pipelines.
arXiv Detail & Related papers (2023-04-19T09:52:50Z) - Generation of Anonymous Chest Radiographs Using Latent Diffusion Models
for Training Thoracic Abnormality Classification Systems [7.909848251752742]
Biometric identifiers in chest radiographs hinder the public sharing of such data for research purposes.
This work employs a latent diffusion model to synthesize an anonymous chest X-ray dataset of high-quality class-conditional images.
arXiv Detail & Related papers (2022-11-02T17:43:02Z) - Seamless Iterative Semi-Supervised Correction of Imperfect Labels in
Microscopy Images [57.42492501915773]
In-vitro tests are an alternative to animal testing for assessing the toxicity of medical devices.
Human fatigue plays a role in error making, which makes the use of deep learning appealing.
We propose Seamless Iterative Semi-Supervised correction of Imperfect labels (SISSI)
Our method successfully provides an adaptive early learning correction technique for object detection.
arXiv Detail & Related papers (2022-08-05T18:52:20Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach in which the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
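The relation-driven entry above relies on consistency regularization. As a generic illustration (not that paper's implementation), the core idea of penalizing prediction disagreement between two perturbed views of the same unlabeled image can be sketched as follows; the toy model, perturbation, and tensor shapes are placeholders:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_batch, perturb):
    """MSE between softmax predictions on two random perturbations of the same input."""
    view_a = perturb(unlabeled_batch)
    view_b = perturb(unlabeled_batch)
    p_a = F.softmax(model(view_a), dim=1)
    with torch.no_grad():              # treat one branch as a fixed target
        p_b = F.softmax(model(view_b), dim=1)
    return F.mse_loss(p_a, p_b)

# Example usage with a toy classifier and Gaussian-noise perturbation.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
images = torch.rand(16, 3, 32, 32)     # unlabeled images
loss = consistency_loss(model, images, lambda x: x + 0.05 * torch.randn_like(x))
```

This unsupervised term is typically added to the standard cross-entropy loss on the labeled subset, so the unlabeled data shapes the decision function without requiring annotations.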
This list is automatically generated from the titles and abstracts of the papers on this site.