Multi-Modality Microscopy Image Style Transfer for Nuclei Segmentation
- URL: http://arxiv.org/abs/2111.12138v1
- Date: Tue, 23 Nov 2021 20:19:20 GMT
- Title: Multi-Modality Microscopy Image Style Transfer for Nuclei Segmentation
- Authors: Ye Liu, Sophia J. Wagner, Tingying Peng
- Abstract summary: We propose a microscopy-style augmentation technique based on a generative adversarial network (GAN).
Unlike other style transfer methods, it can not only deal with different cell assay types and lighting conditions, but also with different imaging modalities.
We evaluate our data augmentation on the 2018 Data Science Bowl dataset consisting of various cell assays, lighting conditions, and imaging modalities.
- Score: 3.535158633337794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Annotating microscopy images for nuclei segmentation is laborious and
time-consuming. To leverage the few existing annotations, also across multiple
modalities, we propose a novel microscopy-style augmentation technique based on
a generative adversarial network (GAN). Unlike other style transfer methods, it
can not only deal with different cell assay types and lighting conditions, but
also with different imaging modalities, such as bright-field and fluorescence
microscopy. Using disentangled representations for content and style, we can
preserve the structure of the original image while altering its style during
augmentation. We evaluate our data augmentation on the 2018 Data Science Bowl
dataset consisting of various cell assays, lighting conditions, and imaging
modalities. With our style augmentation, the segmentation accuracy of the two
top-ranked Mask R-CNN-based nuclei segmentation algorithms in the competition
increases significantly. Thus, our augmentation technique renders the
downstream task more robust to the test data heterogeneity and helps counteract
class imbalance without resampling of minority classes.
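As a rough illustration of the augmentation step described in the abstract, the sketch below recombines the content code of one image with the style code of an image from another modality. The tiny encoder/decoder modules are placeholders for readability only and do not reflect the paper's actual GAN architecture.

```python
# Minimal sketch of style augmentation with disentangled content/style codes.
# The tiny encoder/decoder modules below are placeholders, NOT the paper's
# actual GAN architecture; they only illustrate the augmentation step.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):          # placeholder: structure -> spatial code
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):            # placeholder: appearance -> global vector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1))
    def forward(self, x):
        return self.net(x)                # shape (N, 16, 1, 1)

class Decoder(nn.Module):                 # placeholder: content code + style vector -> image
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(16, 1, 3, padding=1)
    def forward(self, content, style):
        return self.net(content + style)  # broadcast style over spatial dims

def style_augment(content_img, style_img, E_c, E_s, G):
    """Keep the nuclei structure of `content_img`, borrow the appearance of `style_img`."""
    with torch.no_grad():
        return G(E_c(content_img), E_s(style_img))

# Toy usage: a bright-field crop re-rendered in the style of a fluorescence crop.
E_c, E_s, G = ContentEncoder(), StyleEncoder(), Decoder()
brightfield = torch.rand(1, 1, 64, 64)
fluorescence = torch.rand(1, 1, 64, 64)
augmented = style_augment(brightfield, fluorescence, E_c, E_s, G)
print(augmented.shape)  # torch.Size([1, 1, 64, 64])
```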
Related papers
- Practical Guidelines for Cell Segmentation Models Under Optical Aberrations in Microscopy [14.042884268397058]
This study evaluates cell image segmentation models under optical aberrations from fluorescence and bright field microscopy.
We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads.
In contrast to these baselines, Cellpose 2.0 proves effective for complex cell images under similar aberration conditions.
arXiv Detail & Related papers (2024-04-12T15:45:26Z)
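For reference, an Otsu-threshold baseline of the kind evaluated in the entry above can be written in a few lines with scikit-image; the synthetic test image below is purely illustrative and is not the paper's evaluation pipeline.

```python
# Minimal Otsu-threshold nuclei segmentation baseline (illustrative only).
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.measure import label, regionprops

# Synthetic fluorescence-like image: bright blobs on a dark background.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
for y, x in rng.integers(16, 112, size=(10, 2)):
    img[y - 4:y + 4, x - 4:x + 4] = 1.0
img = gaussian(img, sigma=2) + 0.05 * rng.standard_normal(img.shape)

thresh = threshold_otsu(img)          # global intensity threshold
mask = img > thresh                   # foreground = nuclei
instances = label(mask)               # connected components as instances
print("nuclei found:", instances.max())
print([int(r.area) for r in regionprops(instances)])
```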
- Additional Look into GAN-based Augmentation for Deep Learning COVID-19 Image Classification [57.1795052451257]
We study the dependence of the GAN-based augmentation performance on dataset size with a focus on small samples.
We train StyleGAN2-ADA on both sets and then, after validating the quality of the generated images, use the trained GANs as one of the augmentation approaches in multi-class classification problems.
The GAN-based augmentation approach is found to be comparable with classical augmentation in the case of medium and large datasets but underperforms in the case of smaller datasets.
arXiv Detail & Related papers (2024-01-26T08:28:13Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
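The warping mechanics behind a module like the AAT in the entry above can be illustrated with PyTorch's affine_grid/grid_sample. The sketch below uses a learnable, identity-initialized affine matrix and is not the AC-Former implementation.

```python
# Illustrative affine warping step: a learnable 2x3 affine matrix warps the
# input image before training. Not the AC-Former code, just the mechanics.
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 256, 256)               # (N, C, H, W)

# Learnable affine parameters, initialised to the identity transform.
theta = torch.nn.Parameter(torch.tensor([[[1.0, 0.0, 0.0],
                                          [0.0, 1.0, 0.0]]]))

grid = F.affine_grid(theta, size=image.shape, align_corners=False)
warped = F.grid_sample(image, grid, align_corners=False)
print(warped.shape)  # torch.Size([1, 3, 256, 256])

# Nucleus coordinates can be transformed with the same matrix so that
# predictions on the warped image stay consistent with the original.
```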
- Focus on Content not Noise: Improving Image Generation for Nuclei Segmentation by Suppressing Steganography in CycleGAN [1.564260789348333]
We propose to remove the hidden shortcut information, known as steganography, from generated images by employing low-pass filtering based on the discrete cosine transform (DCT).
We achieve an improvement of 5.4 percentage points in the F1-score compared to a vanilla CycleGAN.
arXiv Detail & Related papers (2023-08-03T13:58:37Z)
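A minimal sketch of the DCT low-pass filtering idea from the entry above is given below; the cutoff fraction is an arbitrary illustrative choice, not the value used in the paper.

```python
# Suppress high-frequency "steganography" in a generated image with a DCT
# low-pass filter. keep_fraction=0.25 is illustrative only.
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(image: np.ndarray, keep_fraction: float = 0.25) -> np.ndarray:
    """Zero out all but the lowest `keep_fraction` of DCT coefficients per axis."""
    coeffs = dctn(image, norm="ortho")
    h, w = image.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * keep_fraction), : int(w * keep_fraction)] = 1.0
    return idctn(coeffs * mask, norm="ortho")

# Toy usage on a random "generated" grayscale image.
fake = np.random.rand(256, 256)
cleaned = dct_lowpass(fake)
print(cleaned.shape, float(np.abs(fake - cleaned).mean()))
```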
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on a limited set of COVID-19 chest X-ray images.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
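The downstream augmentation step described in the entry above amounts to mixing synthetic samples into a small real training set. In the sketch below, random tensors stand in for real and GAN-generated X-rays; training StyleGAN2-ADA itself is omitted.

```python
# Sketch of GAN-based augmentation: mix synthetic samples into a small real
# training set. Random tensors stand in for real and generated X-rays.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

n_real, n_synthetic = 200, 800                           # small real set, larger synthetic set
real_images = torch.rand(n_real, 1, 128, 128)
real_labels = torch.randint(0, 3, (n_real,))             # e.g. normal / pneumonia / COVID-19

synthetic_images = torch.rand(n_synthetic, 1, 128, 128)  # would come from the trained GAN
synthetic_labels = torch.randint(0, 3, (n_synthetic,))   # class of the conditional generator

train_set = ConcatDataset([
    TensorDataset(real_images, real_labels),
    TensorDataset(synthetic_images, synthetic_labels),
])
loader = DataLoader(train_set, batch_size=32, shuffle=True)
images, labels = next(iter(loader))
print(images.shape, labels.shape)
```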
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
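One common way to build a cellular graph like the one in the entry above is Delaunay triangulation over nucleus centroids. The sketch below assumes this construction for illustration; it is not necessarily AMIGO's exact graph definition.

```python
# Build a cellular graph over nucleus centroids via Delaunay triangulation
# (one common construction, assumed here for illustration).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1000, size=(50, 2))    # (x, y) of detected nuclei

tri = Delaunay(centroids)
edges = set()
for simplex in tri.simplices:                     # each triangle contributes 3 edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((int(a), int(b)))

print(f"{len(centroids)} nodes, {len(edges)} edges")
# Per-nucleus features (e.g. morphology or texture) plus this edge list would
# then be fed to a graph neural network.
```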
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Search for temporal cell segmentation robustness in phase-contrast microscopy videos [31.92922565397439]
In this work, we present a deep learning-based workflow to segment cancer cells embedded in 3D collagen matrices.
We also propose a geometrical-characterization approach to studying cancer cell morphology.
We introduce a new annotated dataset for 2D cell segmentation and tracking, and an open-source implementation to replicate the experiments or adapt them to new image processing problems.
arXiv Detail & Related papers (2021-12-16T12:03:28Z)
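A geometrical characterization of segmented cells, as mentioned in the entry above, can be sketched with scikit-image region properties. The descriptors chosen below are illustrative, not the paper's exact feature set.

```python
# Shape descriptors per segmented cell, computed from a labelled mask with
# scikit-image. The chosen descriptors are illustrative only.
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = ellipse(60, 60, 20, 35)      # an elongated "cell"
mask[rr, cc] = 1
rr, cc = ellipse(140, 140, 25, 25)    # a round "cell"
mask[rr, cc] = 1

for region in regionprops(label(mask)):
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    print(f"area={int(region.area):5d}  eccentricity={region.eccentricity:.2f}  "
          f"solidity={region.solidity:.2f}  circularity={circularity:.2f}")
```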
- From augmented microscopy to the topological transformer: a new approach in cell image analysis for Alzheimer's research [0.0]
Cell image analysis is crucial in Alzheimer's research to detect the presence of Aβ protein, which inhibits cell function.
By comparing performance on multi-class semantic segmentation, we first found that U-Net is the most suitable model for augmented microscopy.
We develop the augmented-microscopy method to capture nuclei in bright-field images and the topological transformer, based on a U-Net model, to convert an input image into a sequence of topological information.
arXiv Detail & Related papers (2021-08-03T16:59:33Z)
- Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection [6.836162272841265]
We introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm.
A combination of a multi-task loss, based on the region and cell boundary detection, is employed for an improved prediction efficiency of the network.
We observe an overall Dice score of 0.93 on the validation set, which is an improvement of over 15.9% over a recent unsupervised method and outperforms the popular supervised U-Net algorithm by at least 5.8% on average.
arXiv Detail & Related papers (2020-05-19T11:31:53Z)
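The combined region-and-boundary loss described in the entry above can be sketched as a weighted sum of a soft Dice term and a boundary cross-entropy term; the fixed weights below stand in for the paper's adaptive weight selection, which is not reproduced here.

```python
# Multi-task segmentation loss: region (soft Dice) term + cell-boundary (BCE)
# term with fixed illustrative weights.
import torch
import torch.nn.functional as F

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """1 - Dice coefficient on probability maps."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def multitask_loss(region_logits, boundary_logits, region_gt, boundary_gt,
                   w_region: float = 1.0, w_boundary: float = 0.5) -> torch.Tensor:
    region_term = soft_dice_loss(torch.sigmoid(region_logits), region_gt)
    boundary_term = F.binary_cross_entropy_with_logits(boundary_logits, boundary_gt)
    return w_region * region_term + w_boundary * boundary_term

# Toy usage with random predictions and ground truth.
region_logits = torch.randn(2, 1, 64, 64)
boundary_logits = torch.randn(2, 1, 64, 64)
region_gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
boundary_gt = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(multitask_loss(region_logits, boundary_logits, region_gt, boundary_gt))
```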
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.