Rapid Training Data Creation by Synthesizing Medical Images for
Classification and Localization
- URL: http://arxiv.org/abs/2308.04687v1
- Date: Wed, 9 Aug 2023 03:49:12 GMT
- Title: Rapid Training Data Creation by Synthesizing Medical Images for
Classification and Localization
- Authors: Abhishek Kushwaha, Sarthak Gupta, Anish Bhanushali, Tathagato Rai
Dastidar
- Abstract summary: We present a method for the transformation of real data to train any Deep Neural Network to solve the above problems.
For the weakly supervised model, we show that the localization accuracy increases significantly using the generated data.
In the latter model, we show that the accuracy, when trained with generated images, closely parallels the accuracy when trained with exhaustively annotated real images.
- Score: 10.506584969668792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the use of artificial intelligence (AI) for medical image analysis is
gaining wide acceptance, the expertise, time and cost required to generate
annotated data in the medical field are significantly high, due to limited
availability of both data and expert annotation. Strongly supervised object
localization models require data that is exhaustively annotated, meaning all
objects of interest in an image are identified. This is difficult to achieve
and verify for medical images. We present a method for the transformation of
real data to train any Deep Neural Network to solve the above problems. We show
the efficacy of this approach on both a weakly supervised localization model
and a strongly supervised localization model. For the weakly supervised model,
we show that the localization accuracy increases significantly using the
generated data. For the strongly supervised model, this approach overcomes the
need for exhaustive annotation on real images. In the latter model, we show
that the accuracy, when trained with generated images, closely parallels the
accuracy when trained with exhaustively annotated real images. The results are
demonstrated on images of human urine samples obtained using microscopy.
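The abstract does not spell out the transformation pipeline. As a rough, hedged illustration of the general idea behind such synthesis (not the paper's exact method): compositing real, individually annotated object crops onto background fields yields images that are exhaustively annotated by construction, since every object's location is chosen by the generator. A minimal sketch with toy arrays standing in for microscopy fields and cell crops:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_image(background, crops, n_objects=5):
    """Paste object crops onto a background at random positions.

    Returns the composite image and a list of (x, y, w, h) boxes --
    an exhaustive annotation, since every object is placed by us.
    """
    img = background.copy()
    boxes = []
    H, W = img.shape[:2]
    for _ in range(n_objects):
        crop = crops[rng.integers(len(crops))]
        h, w = crop.shape[:2]
        y = int(rng.integers(0, H - h))
        x = int(rng.integers(0, W - w))
        img[y:y + h, x:x + w] = crop  # naive paste; real pipelines blend edges
        boxes.append((x, y, w, h))
    return img, boxes

# Toy data: a bright background field and three dark "cell" crops
background = np.full((128, 128), 200, dtype=np.uint8)
crops = [np.full((16, 16), v, dtype=np.uint8) for v in (40, 80, 120)]
image, boxes = synthesize_image(background, crops)
```

The boxes can then train either a strongly supervised detector directly or serve as ground truth for evaluating a weakly supervised localizer.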
Related papers
- Synthetic Augmentation for Anatomical Landmark Localization using DDPMs [0.22499166814992436]
Diffusion-based generative models have recently started to gain attention for their ability to generate high-quality synthetic images.
We propose a novel way to assess the quality of the generated images using a Markov Random Field (MRF) model for landmark matching and a Statistical Shape Model (SSM) to check landmark plausibility.
arXiv Detail & Related papers (2024-10-16T12:09:38Z)
- TSynD: Targeted Synthetic Data Generation for Enhanced Medical Image Classification [0.011037620731410175]
This work aims to guide the generative model to synthesize data with high uncertainty.
We alter the feature space of the autoencoder through an optimization process.
We improve the robustness against test-time data augmentations and adversarial attacks on several classification tasks.
arXiv Detail & Related papers (2024-06-25T11:38:46Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Could We Generate Cytology Images from Histopathology Images? An Empirical Study [1.791005104399795]
In this study, we have explored traditional image-to-image transfer models like CycleGAN and Neural Style Transfer.
arXiv Detail & Related papers (2024-03-16T10:43:12Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Realistic Adversarial Data Augmentation for MR Image Segmentation [17.951034264146138]
We propose an adversarial data augmentation method for training neural networks for medical image segmentation.
Our model generates plausible and realistic signal corruptions, modeling the intensity inhomogeneities caused by a common type of artefact in MR imaging: the bias field.
We show that such an approach can improve the generalization ability and robustness of models, as well as provide significant improvements in low-data scenarios.
arXiv Detail & Related papers (2020-06-23T20:43:18Z)
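The bias-field corruption underlying the last entry can be illustrated with a simple random variant (the paper itself learns the field adversarially rather than sampling it): a coarse grid of random gains is bilinearly upsampled to image size, giving the smooth multiplicative intensity inhomogeneity typical of MR bias-field artefacts. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

def bias_field_augment(image, strength=0.3, control=4):
    """Corrupt an MR-like image with a smooth multiplicative bias field.

    A (control x control) grid of random gains in [1 - strength, 1 + strength]
    is bilinearly upsampled to the image size and multiplied in.
    """
    H, W = image.shape
    coarse = 1.0 + strength * rng.uniform(-1, 1, size=(control, control))
    # Bilinear upsampling of the coarse gain grid to (H, W)
    ys = np.linspace(0, control - 1, H)
    xs = np.linspace(0, control - 1, W)
    y0 = np.clip(ys.astype(int), 0, control - 2)
    x0 = np.clip(xs.astype(int), 0, control - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    field = ((1 - wy) * (1 - wx) * coarse[y0][:, x0]
             + (1 - wy) * wx * coarse[y0][:, x0 + 1]
             + wy * (1 - wx) * coarse[y0 + 1][:, x0]
             + wy * wx * coarse[y0 + 1][:, x0 + 1])
    return image * field

image = rng.uniform(0, 1, size=(64, 64))
augmented = bias_field_augment(image)
```

Training a segmentation network on such corrupted copies exposes it to the intensity variation it will meet in real scans.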
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.