An unobtrusive quality supervision approach for medical image annotation
- URL: http://arxiv.org/abs/2211.06146v1
- Date: Fri, 11 Nov 2022 11:57:26 GMT
- Title: An unobtrusive quality supervision approach for medical image annotation
- Authors: Sonja Kunzmann, Mathias Öttl, Prathmesh Madhu, Felix Denzinger,
Andreas Maier
- Abstract summary: It is desirable that users should annotate unseen data and have an automated system to unobtrusively rate their performance.
We evaluate two methods for the generation of synthetic individual cell images: conditional Generative Adversarial Networks and Diffusion Models.
Users could not detect 52.12% of the generated images, proving the feasibility of replacing the original cells with synthetic cells without being noticed.
- Score: 8.203076178571576
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Image annotation is an essential prerequisite for data-driven
algorithms. In medical imaging, large and reliably annotated data sets are
crucial to recognize various diseases robustly. However, annotator
performance varies immensely, impacting model training. Therefore, multiple
annotators are often employed, which is expensive and resource-intensive.
Hence, it is desirable that users annotate unseen data while an automated
system unobtrusively rates their performance during this process. We examine
such a system based on whole slide images (WSIs) showing lung fluid cells.
We evaluate two methods for the generation of synthetic individual cell
images: conditional Generative Adversarial Networks and Diffusion Models
(DM). For qualitative and quantitative evaluation, we conduct a user study
to assess the suitability of the generated cells. Users could not detect
52.12% of the images generated by the DM, proving the feasibility of
replacing the original cells with synthetic cells without being noticed.
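The supervision scheme the abstract describes can be illustrated with a minimal sketch: a small number of synthetic cell images, whose class labels are known from the generator's conditioning, are mixed unnoticed into the real annotation stream, and the annotator's agreement on these hidden probes serves as an unobtrusive quality score. All function names and data structures below are hypothetical illustrations, not code from the paper.

```python
import random

def build_annotation_batch(real_items, synthetic_items, mix_ratio=0.1, seed=42):
    """Interleave synthetic cells into the real annotation stream.

    Items are (image, true_label) pairs; real items carry label None
    because their ground truth is unknown, while synthetic items carry
    the class label used to condition the generator.
    """
    rng = random.Random(seed)
    n_synth = max(1, int(len(real_items) * mix_ratio))
    batch = [(img, None) for img in real_items]
    batch += rng.sample(synthetic_items, n_synth)
    rng.shuffle(batch)  # probes must be indistinguishable by position
    return batch

def annotator_score(batch, annotations):
    """Fraction of hidden synthetic probes the annotator labelled correctly."""
    probes = [(true, ann) for (_, true), ann in zip(batch, annotations)
              if true is not None]
    if not probes:
        return None
    return sum(t == a for t, a in probes) / len(probes)
```

Because the probes carry known labels, annotator quality can be estimated continuously during routine annotation, without a separate, visible test set.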
Related papers
- Style transfer as data augmentation: evaluating unpaired image-to-image translation models in mammography [0.0]
Deep learning models can learn to detect breast cancer from mammograms.
However, challenges with overfitting and poor generalisability prevent their routine use in the clinic.
Data augmentation techniques can be used to improve generalisability.
arXiv Detail & Related papers (2025-02-04T16:52:45Z)
- Trustworthy image-to-image translation: evaluating uncertainty calibration in unpaired training scenarios [0.0]
Mammographic screening is an effective method for detecting breast cancer, facilitating early diagnosis.
Deep neural networks have been shown effective in some studies, but their tendency to overfit leaves considerable risk for poor generalisation and misdiagnosis.
Data augmentation schemes based on unpaired neural style transfer models have been proposed that improve generalisability.
We evaluate their performance when trained on image patches parsed from three open access mammography datasets and one non-medical image dataset.
arXiv Detail & Related papers (2025-01-29T11:09:50Z)
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
The learned artifact detector is then used in the second stage to tune the diffusion model by assigning a per-pixel confidence map to each image.
arXiv Detail & Related papers (2025-01-21T18:56:41Z) - Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned on medical images for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - DiffBoost: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [3.890243179348094]
Large-scale, highly varied, high-quality data are crucial for developing robust and successful deep-learning models for medical applications.
This paper proposes a novel approach by developing controllable diffusion models for medical image synthesis, called DiffBoost.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
arXiv Detail & Related papers (2023-10-19T16:18:02Z) - Realistic Data Enrichment for Robust Image Segmentation in
Histopathology [2.248423960136122]
We propose a new approach, based on diffusion models, which can enrich an imbalanced dataset with plausible examples from underrepresented groups.
Our method can expand limited clinical datasets, making them suitable to train machine learning pipelines.
arXiv Detail & Related papers (2023-04-19T09:52:50Z) - Generation of Anonymous Chest Radiographs Using Latent Diffusion Models
for Training Thoracic Abnormality Classification Systems [7.909848251752742]
Biometric identifiers in chest radiographs hinder the public sharing of such data for research purposes.
This work employs a latent diffusion model to synthesize an anonymous chest X-ray dataset of high-quality class-conditional images.
arXiv Detail & Related papers (2022-11-02T17:43:02Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.