BioGAN: An unpaired GAN-based image to image translation model for
microbiological images
- URL: http://arxiv.org/abs/2306.06217v1
- Date: Fri, 9 Jun 2023 19:30:49 GMT
- Authors: Saber Mirzaee Bafti, Chee Siang Ang, Gianluca Marcelli, Md. Moinul
Hossain, Sadiya Maxamhud, Anastasios D. Tsaousis
- Abstract summary: We develop an unpaired GAN-based (Generative Adversarial Network) image to image translation model for microbiological images.
We propose a novel design for a GAN model, BioGAN, by utilizing Adversarial and Perceptual loss in order to transform high level features of laboratory-taken images into field images.
- Score: 1.6427658855248812
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A diversified dataset is crucial for training a well-generalized supervised
computer vision algorithm. However, in the field of microbiology, generation
and annotation of a diverse dataset including field-taken images are time
consuming, costly, and in some cases impossible. Image to image translation
frameworks allow us to diversify the dataset by transferring images from one
domain to another. However, most existing image translation techniques require
a paired dataset (original image and its corresponding image in the target
domain), which poses a significant challenge in collecting such datasets. In
addition, the application of these image translation frameworks in microbiology
is rarely discussed. In this study, we aim to develop an unpaired GAN-based
(Generative Adversarial Network) image to image translation model for
microbiological images, and study how it can improve generalization ability of
image translation model to translate laboratory-taken microbiological images to
image translation model to translate laboratory-taken microbiological images to
field images, building upon the recent advances in GAN networks and Perceptual
loss function. We propose a novel design for a GAN model, BioGAN, by utilizing
Adversarial and Perceptual loss in order to transform high level features of
laboratory-taken images into field images, while keeping their spatial
features. The contribution of Adversarial and Perceptual loss to the generation
of realistic field images was studied. We used the synthetic field images,
generated by BioGAN, to train an object-detection framework, and compared the
results with those of an object-detection framework trained with laboratory
images; this resulted in up to 68.1% and 75.3% improvement on F1-score and mAP,
respectively. Code is publicly available at
https://github.com/Kahroba2000/BioGAN.
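The abstract describes a generator trained with a weighted combination of an adversarial loss and a perceptual (feature-space) loss. The following is a minimal numerical sketch of that combination, not the authors' implementation: the feature maps stand in for the output of a pretrained feature extractor, and the weight `lam` is a hypothetical value, not one reported in the paper.

```python
import numpy as np

def adversarial_loss(d_fake):
    # Non-saturating generator loss: -log D(G(x)), averaged over the batch.
    eps = 1e-8  # numerical guard against log(0)
    return float(-np.mean(np.log(d_fake + eps)))

def perceptual_loss(feat_fake, feat_real):
    # Mean squared error between feature maps; in BioGAN-style models these
    # would come from a pretrained network, here they are plain arrays.
    return float(np.mean((feat_fake - feat_real) ** 2))

def generator_loss(d_fake, feat_fake, feat_real, lam=10.0):
    # Weighted sum of the two terms; lam is a hypothetical trade-off weight.
    return adversarial_loss(d_fake) + lam * perceptual_loss(feat_fake, feat_real)

# Toy batch: discriminator scores on generated images, plus feature maps
# of the generated and target-domain images.
d_fake = np.array([0.8, 0.6, 0.7])
feat_fake = np.ones((3, 4))
feat_real = np.ones((3, 4)) * 1.1
print(generator_loss(d_fake, feat_fake, feat_real))
```

The perceptual term penalizes divergence in high-level features (encouraging field-image appearance) while the adversarial term pushes outputs toward the target-domain distribution; the spatial content is preserved because the feature comparison, unlike a pixel loss, tolerates low-level appearance changes.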
Related papers
- Unleashing the Potential of Synthetic Images: A Study on Histopathology Image Classification [0.12499537119440242]
Histopathology image classification is crucial for the accurate identification and diagnosis of various diseases.
We show that synthetic images can effectively augment existing datasets, ultimately improving the performance of the downstream histopathology image classification task.
arXiv Detail & Related papers (2024-09-24T12:02:55Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Could We Generate Cytology Images from Histopathology Images? An Empirical Study [1.791005104399795]
In this study, we have explored traditional image-to-image transfer models like CycleGAN, and Neural Style Transfer.
arXiv Detail & Related papers (2024-03-16T10:43:12Z) - BiomedJourney: Counterfactual Biomedical Image Generation by
Instruction-Learning from Multimodal Patient Journeys [99.7082441544384]
We present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning.
We use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression.
The resulting triples are then used to train a latent diffusion model for counterfactual biomedical image generation.
arXiv Detail & Related papers (2023-10-16T18:59:31Z) - Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle
Phenotypes [0.5076419064097732]
We present an improved CycleGAN architecture that employs self-supervised discriminators to alleviate the need for numerous images.
We also provide results obtained with small biological datasets on obvious and non-obvious cell phenotype variations.
arXiv Detail & Related papers (2023-01-21T16:25:04Z) - Learning to Exploit Temporal Structure for Biomedical Vision-Language
Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z) - Deep learning-based bias transfer for overcoming laboratory differences
of microscopic images [0.0]
We evaluate, compare, and improve existing generative model architectures to overcome domain shifts for immunofluorescence (IF) and Hematoxylin and Eosin (H&E) stained microscopy images.
Adapting the bias of the samples significantly improved the pixel-level segmentation for human kidney glomeruli and podocytes and improved the classification accuracy for human prostate biopsies by up to 14%.
arXiv Detail & Related papers (2021-05-25T09:02:30Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Image Synthesis with Adversarial Networks: a Comprehensive Survey and
Case Studies [41.00383742615389]
Generative Adversarial Networks (GANs) have been extremely successful in various application domains such as computer vision, medicine, and natural language processing.
GANs are powerful models for learning complex distributions to synthesize semantically meaningful samples.
Given the current fast GANs development, in this survey, we provide a comprehensive review of adversarial models for image synthesis.
arXiv Detail & Related papers (2020-12-26T13:30:42Z) - Deep Low-Shot Learning for Biological Image Classification and
Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.