Conditioned Generative Transformers for Histopathology Image Synthetic
Augmentation
- URL: http://arxiv.org/abs/2212.09977v1
- Date: Tue, 20 Dec 2022 03:40:44 GMT
- Title: Conditioned Generative Transformers for Histopathology Image Synthetic
Augmentation
- Authors: Meng Li, Chaoyi Li, Can Peng, Brian Lovell
- Abstract summary: Vision transformer (ViT) based generative adversarial networks (GANs) recently demonstrated superior potential in general image synthesis.
We propose a pure ViT-based conditional GAN model for histopathology image synthetic augmentation.
- Score: 3.1616973611119494
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning networks have demonstrated state-of-the-art performance on
medical image analysis tasks. However, the majority of the works rely heavily
on abundantly labeled data, which necessitates extensive involvement of domain
experts. Vision transformer (ViT) based generative adversarial networks (GANs)
recently demonstrated superior potential in general image synthesis, yet are
less explored for histopathology images. In this paper, we address these
challenges by proposing a pure ViT-based conditional GAN model for
histopathology image synthetic augmentation. To alleviate training instability
and improve generation robustness, we first introduce a conditioned class
projection method to facilitate class separation. We then implement a
multi-loss weighting function to dynamically balance the losses between
classification tasks. We further propose a selective augmentation mechanism to
actively choose the appropriate generated images and bring additional
performance improvements. Extensive experiments on the histopathology datasets
show that leveraging our synthetic augmentation framework results in
significant and consistent improvements in classification performance.
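The selective augmentation mechanism described in the abstract — actively choosing which generated images to keep — can be illustrated with a minimal sketch: retain only synthetic images that an auxiliary classifier assigns to their conditioning class with high confidence. This is a hypothetical illustration, not the authors' implementation; the `select_augmentations` name and the 0.9 default threshold are assumptions made for this example.

```python
import numpy as np

def select_augmentations(probs, target_labels, threshold=0.9):
    """Keep synthetic images whose classifier prediction matches the
    intended conditioning class with high confidence.

    probs         : (N, C) softmax outputs of an auxiliary classifier
                    for N generated images over C classes
    target_labels : (N,) class each image was conditioned on
    threshold     : minimum confidence required to accept an image
    """
    probs = np.asarray(probs)
    target_labels = np.asarray(target_labels)
    predicted = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    # Accept only images predicted as their conditioning class
    # AND above the confidence threshold.
    keep = (predicted == target_labels) & (confidence >= threshold)
    return np.flatnonzero(keep)

# Toy example: three generated images conditioned on classes [0, 1, 1].
probs = [[0.95, 0.05],   # confident, correct     -> kept
         [0.40, 0.60],   # correct, low confidence -> dropped
         [0.97, 0.03]]   # confident, wrong class  -> dropped
print(select_augmentations(probs, [0, 1, 1]))  # [0]
```

Only the accepted indices would then be added to the real training set, which is one plausible way such a filter could "bring additional performance improvements" by excluding low-quality or mislabeled samples.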
Related papers
- Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization [0.13108652488669734]
We propose a novel generative method for domain generalization in histopathology images.
Our method employs a generative, self-supervised Vision Transformer to dynamically extract characteristics of image patches.
Experiments conducted on two distinct histopathology datasets demonstrate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-07-03T08:20:27Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- ViT-DAE: Transformer-driven Diffusion Autoencoder for Histopathology Image Analysis [4.724009208755395]
We present ViT-DAE, which integrates vision transformers (ViT) and diffusion autoencoders for high-quality histopathology image synthesis.
Our approach outperforms recent GAN-based and vanilla DAE methods in generating realistic images.
arXiv Detail & Related papers (2023-04-03T15:00:06Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
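The cINN summary above rests on the change-of-variables formula: an invertible map can be trained by maximum likelihood because the data density factors into a base density plus a log-determinant term. The following is a minimal one-dimensional sketch of that objective using a single affine flow, not the paper's spectral-data model; the function name and parameterization are invented for illustration.

```python
import numpy as np

def affine_flow_nll(x, mu, log_sigma):
    """Negative log-likelihood of data under a 1-D affine flow.

    The invertible map z = (x - mu) / sigma pushes data toward a
    standard-normal base density. By the change-of-variables formula:
        log p(x) = log N(z; 0, 1) + log |dz/dx|
                 = log N(z; 0, 1) - log sigma
    """
    sigma = np.exp(log_sigma)
    z = (x - mu) / sigma
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard-normal log-density
    log_det = -log_sigma                             # log |dz/dx| = -log sigma
    return -(log_base + log_det).mean()

# Data drawn from N(3, 2): the NLL is lower at the true parameters
# (mu=3, sigma=2) than at a mismatched (mu=0, sigma=1).
rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=10_000)
good = affine_flow_nll(x, 3.0, np.log(2.0))
bad = affine_flow_nll(x, 0.0, 0.0)
print(good < bad)  # True
```

Stacking many such invertible layers (coupling blocks, conditioned on auxiliary inputs) is what makes the same objective tractable for full cINNs, and invertibility is also why cycle consistency holds by construction.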
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbone of our teacher and student networks.
arXiv Detail & Related papers (2023-02-23T06:16:15Z)
- A Self-attention Guided Multi-scale Gradient GAN for Diversified X-ray Image Synthesis [0.6308539010172307]
Generative Adversarial Networks (GANs) are utilized to address the data limitation problem via the generation of synthetic images.
Training challenges such as mode collapse, non-convergence, and instability degrade a GAN's performance in synthesizing diversified and high-quality images.
This work proposes an attention-guided multi-scale gradient GAN architecture to model the relationship between long-range dependencies of biomedical image features.
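The "long-range dependencies" this attention-guided GAN targets come from the self-attention operation itself, where every output position is a weighted sum over all input positions rather than a local neighborhood. Below is a generic scaled dot-product self-attention sketch in NumPy; it illustrates the mechanism such architectures embed, not the paper's exact module, and all variable names are assumptions.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of features.

    Each output row is a convex combination of ALL value rows, which is
    how attention captures long-range dependencies that small
    convolution kernels cannot.

    x              : (n, d) sequence of n feature vectors
    w_q, w_k, w_v  : (d, d) query/key/value projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(x.shape[1])           # (n, n) pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ v                               # (n, d) attended features

rng = np.random.default_rng(42)
n, d = 6, 4
x = rng.normal(size=(n, d))
out = self_attention(x,
                     rng.normal(size=(d, d)),
                     rng.normal(size=(d, d)),
                     rng.normal(size=(d, d)))
print(out.shape)  # (6, 4)
```

In a GAN generator or discriminator, such a block would typically operate on flattened spatial feature maps, letting distant image regions influence each other directly.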
arXiv Detail & Related papers (2022-10-09T13:17:17Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-up and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Class-Aware Generative Adversarial Transformers for Medical Image Segmentation [39.14169989603906]
We present CA-GANformer, a novel type of generative adversarial transformers, for medical image segmentation.
First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations.
We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures.
arXiv Detail & Related papers (2022-01-26T03:50:02Z)
- You Only Need Adversarial Supervision for Semantic Image Synthesis [84.83711654797342]
We propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high-quality results.
We show that images synthesized by our model are more diverse and follow the color and texture of real images more closely.
arXiv Detail & Related papers (2020-12-08T23:00:48Z)
- Image Augmentations for GAN Training [57.65145659417266]
We provide insights and guidelines on how to augment images for both vanilla GANs and GANs with regularizations.
Surprisingly, we find that vanilla GANs attain generation quality on par with recent state-of-the-art results.
arXiv Detail & Related papers (2020-06-04T00:16:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.