A Laplacian Pyramid Based Generative H&E Stain Augmentation Network
- URL: http://arxiv.org/abs/2305.14301v2
- Date: Fri, 14 Jul 2023 18:15:59 GMT
- Title: A Laplacian Pyramid Based Generative H&E Stain Augmentation Network
- Authors: Fangda Li, Zhiqiang Hu, Wen Chen, Avinash Kak
- Abstract summary: Generative Stain Augmentation Network (G-SAN) is a GAN-based framework that augments a collection of cell images with simulated stain variations.
Using G-SAN-augmented training data provides on average 15.7% improvement in F1 score and 7.3% improvement in panoptic quality.
- Score: 5.841841666625825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hematoxylin and Eosin (H&E) staining is a widely used sample preparation
procedure for enhancing the saturation of tissue sections and the contrast
between nuclei and cytoplasm in histology images for medical diagnostics.
However, various factors, such as the differences in the reagents used, result
in high variability in the colors of the stains actually recorded. This
variability poses a challenge in achieving generalization for machine-learning
based computer-aided diagnostic tools. To desensitize the learned models to
stain variations, we propose the Generative Stain Augmentation Network (G-SAN)
-- a GAN-based framework that augments a collection of cell images with
simulated yet realistic stain variations. At its core, G-SAN uses a novel and
highly computationally efficient Laplacian Pyramid (LP) based generator
architecture that is capable of disentangling stain from cell morphology.
Through the tasks of patch classification and nucleus segmentation, we show that
using G-SAN-augmented training data provides on average 15.7% improvement in F1
score and 7.3% improvement in panoptic quality, respectively. Our code is
available at https://github.com/lifangda01/GSAN-Demo.
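The G-SAN generator itself is not reproduced in this listing. The following is a minimal sketch, assuming OpenCV and NumPy, of the Laplacian Pyramid idea the abstract refers to: low-frequency pyramid levels carry most of the stain colour, while high-frequency levels carry morphological detail, so perturbing only the low-pass residual changes the stain appearance without disturbing fine structure. All function names and the per-channel shift are illustrative and not taken from the GSAN-Demo repository.
```python
# Minimal Laplacian pyramid sketch (illustrative only, not the G-SAN generator).
import cv2
import numpy as np

def build_laplacian_pyramid(patch, levels=3):
    """Decompose an image into band-pass detail layers plus a low-pass residual."""
    pyramid, current = [], patch
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # high-frequency detail: edges, nuclear morphology
        current = down
    pyramid.append(current)            # low-frequency residual: dominated by stain colour
    return pyramid

def reconstruct_from_pyramid(pyramid):
    """Invert the decomposition by upsampling and adding the detail layers back."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return current

# Example: recolour only the low-pass residual; fine structure is largely preserved.
patch = cv2.imread("patch.png").astype(np.float32) / 255.0   # hypothetical H&E patch
lp = build_laplacian_pyramid(patch, levels=3)
lp[-1] *= np.float32([1.05, 0.95, 1.10])                     # illustrative stain shift
augmented = np.clip(reconstruct_from_pyramid(lp), 0.0, 1.0)
```
G-SAN learns the recolouring with a GAN rather than a fixed shift; the sketch only illustrates why the pyramid makes stain and morphology easy to treat separately.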
Related papers
- Stain Consistency Learning: Handling Stain Variation for Automatic
Digital Pathology Segmentation [3.2386272343130127]
We propose a novel framework combining stain-specific augmentation with a stain consistency loss function to learn stain colour invariant features.
We compare ten methods on Masson's trichrome and H&E stained cell and nuclei datasets, respectively.
We observed that stain normalisation methods resulted in equivalent or worse performance, while stain augmentation or stain adversarial methods demonstrated improved performance.
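The paper's exact loss and backbone are not given in this summary; as a rough illustration of what a stain consistency objective can look like, the snippet below penalises feature differences between two stain-augmented views of the same patch. The `encoder` and the loss weighting are placeholders, not the authors' implementation.
```python
# Hedged sketch of a stain-consistency objective (not the published loss).
import torch
import torch.nn.functional as F

def stain_consistency_loss(encoder: torch.nn.Module,
                           view_a: torch.Tensor,
                           view_b: torch.Tensor) -> torch.Tensor:
    """view_a / view_b: the same batch of patches under two different stain augmentations."""
    return F.mse_loss(encoder(view_a), encoder(view_b))

# Hypothetical use alongside the task loss:
# loss = segmentation_loss + consistency_weight * stain_consistency_loss(encoder, a, b)
```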
arXiv Detail & Related papers (2023-11-11T12:00:44Z)
- Synthetic DOmain-Targeted Augmentation (S-DOTA) Improves Model
Generalization in Digital Pathology [1.488519799639108]
Machine learning algorithms have the potential to improve patient outcomes in digital pathology.
However, their generalization is limited by sensitivity to variations in tissue preparation, staining procedures and scanning equipment.
We studied the effectiveness of two Synthetic DOmain-Targeted Augmentation (S-DOTA) methods, namely CycleGAN-enabled Scanner Transform (ST) and targeted Stain Vector Augmentation (SVA).
We evaluated the ability of these techniques to improve model generalization to various tasks and settings.
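Neither the CycleGAN-based Scanner Transform nor the paper's targeted SVA is reproduced here; as a generic stand-in for stain-vector-style augmentation, the sketch below jitters the haematoxylin/eosin/DAB components obtained from scikit-image's colour deconvolution. Parameter ranges are illustrative assumptions.
```python
# Generic HED-space stain jitter (illustrative, not the paper's targeted SVA).
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def jitter_stains(rgb, sigma=0.05, bias=0.05, rng=None):
    """rgb: float image in [0, 1]; each deconvolved stain channel is scaled and shifted."""
    rng = rng or np.random.default_rng()
    hed = rgb2hed(rgb)
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)  # multiplicative jitter per stain
    beta = rng.uniform(-bias, bias, size=3)            # additive jitter per stain
    return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)
```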
arXiv Detail & Related papers (2023-05-03T19:53:30Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on a limited set of COVID-19 chest X-ray images.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Cross-modulated Few-shot Image Generation for Colorectal Tissue
Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z)
- Stain-invariant self supervised learning for histopathology image
analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- RandStainNA: Learning Stain-Agnostic Features from Histology Slides by
Bridging Stain Augmentation and Normalization [45.81689497433507]
Two proposals, namely stain normalization (SN) and stain augmentation (SA), have been spotlighted to reduce the generalization error.
To address the problems, we unify SN and SA with a novel RandStainNA scheme.
RandStainNA constrains variable stain styles to a practicable range in order to train a stain-agnostic deep learning model.
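As a minimal sketch of the flavour of augmentation described here, assuming OpenCV, one can sample a virtual stain style around dataset-level LAB statistics and Reinhard-transfer the patch toward it; the statistics, distributions and colour spaces used in the actual RandStainNA method may differ.
```python
# Random virtual stain style in LAB space (sketch in the spirit of RandStainNA).
import cv2
import numpy as np

def random_stain_style(rgb_uint8, style_mean, style_std, spread=0.1, rng=None):
    """style_mean / style_std: per-channel LAB statistics estimated from training slides."""
    rng = rng or np.random.default_rng()
    lab = cv2.cvtColor(rgb_uint8, cv2.COLOR_RGB2LAB).astype(np.float32)
    # Sample a virtual stain template around the dataset-level statistics.
    target_mean = rng.normal(style_mean, spread * np.abs(style_mean))
    target_std = rng.normal(style_std, spread * np.abs(style_std))
    mean, std = lab.mean(axis=(0, 1)), lab.std(axis=(0, 1)) + 1e-6
    lab = (lab - mean) / std * target_std + target_mean
    return cv2.cvtColor(np.clip(lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2RGB)
```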
arXiv Detail & Related papers (2022-06-25T16:43:59Z)
- Texture Characterization of Histopathologic Images Using Ecological
Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two histopathologic image datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images provides valuable support for estimating intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- Deep learning-based bias transfer for overcoming laboratory differences
of microscopic images [0.0]
We evaluate, compare, and improve existing generative model architectures to overcome domain shifts for immunofluorescence (IF) and Hematoxylin and Eosin (H&E) stained microscopy images.
Adapting the bias of the samples significantly improved the pixel-level segmentation for human kidney glomeruli and podocytes and improved the classification accuracy for human prostate biopsies by up to 14%.
arXiv Detail & Related papers (2021-05-25T09:02:30Z)
- StyPath: Style-Transfer Data Augmentation For Robust Histology Image
Classification [6.690876060631452]
We propose a novel pipeline to build robust deep neural networks for antibody-mediated rejection (AMR) classification based on StyPath.
Each image was generated in 1.84 ± 0.03 seconds using a single GTX V TITAN and PyTorch.
Our results imply that our style-transfer augmentation technique improves histological classification performance.
arXiv Detail & Related papers (2020-07-09T18:02:49Z)
- LeafGAN: An Effective Data Augmentation Method for Practical Plant
Disease Diagnosis [2.449909275410288]
LeafGAN generates a wide variety of diseased images via transformation from healthy images.
Thanks to its own attention mechanism, our model can transform only relevant areas from images with a variety of backgrounds.
arXiv Detail & Related papers (2020-02-24T07:36:56Z)
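The published LeafGAN networks are not reproduced here; as a rough sketch of the attention-masked idea the summary above describes, the snippet below blends a translated image with the original using a foreground mask so that only the relevant region is changed. `generator` and `mask_net` are placeholder modules, not the published models.
```python
# Attention-masked image-to-image augmentation (sketch in the spirit of LeafGAN).
import torch

def masked_translate(generator, mask_net, healthy: torch.Tensor) -> torch.Tensor:
    """healthy: (N, 3, H, W) batch in [-1, 1]; mask_net outputs (N, 1, H, W) in [0, 1]."""
    with torch.no_grad():
        mask = mask_net(healthy)           # attention over the disease-relevant region
        translated = generator(healthy)    # healthy -> diseased translation
    return mask * translated + (1.0 - mask) * healthy  # background left untouched
```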