Synthesis of Annotated Colorectal Cancer Tissue Images from Gland Layout
- URL: http://arxiv.org/abs/2305.05006v2
- Date: Thu, 4 Apr 2024 22:51:42 GMT
- Title: Synthesis of Annotated Colorectal Cancer Tissue Images from Gland Layout
- Authors: Srijay Deshpande, Fayyaz Minhas, Nasir Rajpoot
- Abstract summary: Synthetically generated images and annotations are valuable for training and evaluating algorithms in computational histopathology.
We propose an interactive framework that generates pairs of realistic colorectal cancer histology images with corresponding glandular masks from glandular structure layouts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating realistic tissue images with annotations is a challenging task that is important in many computational histopathology applications. Synthetically generated images and annotations are valuable for training and evaluating algorithms in this domain. To address this, we propose an interactive framework that generates pairs of realistic colorectal cancer histology images with corresponding glandular masks from glandular structure layouts. The framework accurately captures vital features like stroma, goblet cells, and glandular lumen. Users can control gland appearance by adjusting parameters such as the number of glands, their locations, and sizes. The generated images exhibit good Fréchet Inception Distance (FID) scores compared with a state-of-the-art image-to-image translation model. Additionally, we demonstrate the utility of our synthetic annotations for evaluating gland segmentation algorithms. Furthermore, we present a methodology for constructing glandular masks using advanced deep generative models, such as latent diffusion models. These masks enable tissue image generation through a residual encoder-decoder network.
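To make the layout-conditioned input concrete, here is a minimal, hypothetical sketch (not the paper's code) of how a binary gland-layout mask could be assembled from user-chosen gland counts, locations, and sizes; gland shapes are simplified to ellipses and the helper name is illustrative.

```python
# Illustrative sketch (not the paper's code): building a binary gland-layout
# mask from user-chosen gland centres and sizes, the kind of layout the
# framework takes as input. Gland shapes here are plain ellipses for brevity.
import numpy as np

def make_gland_layout(height, width, glands, seed=None):
    """glands: list of dicts with 'center' (y, x) and 'axes' (ry, rx) in pixels."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    for g in glands:
        cy, cx = g["center"]
        ry, rx = g["axes"]
        angle = g.get("angle", rng.uniform(0, np.pi))  # optional random orientation
        # rotate coordinates into the ellipse frame
        y, x = yy - cy, xx - cx
        xr = x * np.cos(angle) + y * np.sin(angle)
        yr = -x * np.sin(angle) + y * np.cos(angle)
        mask[(xr / rx) ** 2 + (yr / ry) ** 2 <= 1.0] = 1
    return mask

# Example: three glands of different sizes placed by the user.
layout = make_gland_layout(
    256, 256,
    glands=[{"center": (64, 64), "axes": (30, 45)},
            {"center": (150, 180), "axes": (40, 25)},
            {"center": (200, 80), "axes": (20, 20)}],
    seed=0,
)
```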
Related papers
- PriorPath: Coarse-To-Fine Approach for Controlled De-Novo Pathology Semantic Masks Generation [0.0]
We present a pipeline, coined PriorPath, that generates detailed, realistic, semantic masks derived from coarse-grained images.
This approach enables control over the spatial arrangement of the generated masks and, consequently, the resulting synthetic images.
arXiv Detail & Related papers (2024-11-25T15:57:19Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
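As a rough illustration of the idea summarized above (conditioning a diffusion model on SSL embeddings), the sketch below shows one DDPM-style training step with a toy conditioned denoiser; the architecture and names are assumptions, not the cited implementation.

```python
# Minimal sketch (assumptions, not the cited implementation): a DDPM-style
# training step where the denoiser is conditioned on a frozen self-supervised
# (SSL) patch embedding. ConditionedDenoiser is a toy stand-in for a real UNet.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedDenoiser(nn.Module):
    def __init__(self, channels=3, cond_dim=384, hidden=64):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, hidden)   # project SSL embedding
        self.time_proj = nn.Linear(1, hidden)          # crude timestep embedding
        self.net = nn.Sequential(
            nn.Conv2d(channels + 2 * hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x, t, cond):
        b, _, h, w = x.shape
        c = self.cond_proj(cond).view(b, -1, 1, 1).expand(-1, -1, h, w)
        tt = self.time_proj(t.float().view(b, 1)).view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, c, tt], dim=1))  # predicts the added noise

def ddpm_loss(model, x0, ssl_embed, alphas_cumprod):
    """One noise-prediction step: noise x0 to a random timestep, predict the noise."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise
    return F.mse_loss(model(xt, t, ssl_embed), noise)
```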
- Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z)
- DEPAS: De-novo Pathology Semantic Masks using a Generative Model [0.0]
We introduce a scalable generative model, coined as DEPAS, that captures tissue structure and generates high-resolution semantic masks with state-of-the-art quality.
We demonstrate the ability of DEPAS to generate realistic semantic maps of tissue for three types of organs: skin, prostate, and lung.
arXiv Detail & Related papers (2023-02-13T16:48:33Z)
- Deepfake histological images for enhancing digital pathology [0.40631409309544836]
We develop a generative adversarial network model that synthesizes pathology images constrained by class labels.
We investigate the ability of this framework in synthesizing realistic prostate and colon tissue images.
We extend the approach to significantly more complex images from colon biopsies and show that the complex microenvironment in such tissues can also be reproduced.
arXiv Detail & Related papers (2022-06-16T17:11:08Z)
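The class-label constraint mentioned above is commonly implemented by feeding a learned label embedding to the generator; the following sketch shows that generic pattern and is not the cited model.

```python
# Minimal sketch, not the cited model: class-conditional generation by
# concatenating a learned label embedding with the latent noise vector,
# one common way to "constrain images by class labels" in a GAN.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, n_classes, z_dim=128, embed_dim=32, img_channels=3):
        super().__init__()
        self.label_embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + embed_dim, 256, 4), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, labels):
        cond = torch.cat([z, self.label_embed(labels)], dim=1)  # (B, z_dim + embed_dim)
        return self.net(cond.unsqueeze(-1).unsqueeze(-1))       # (B, C, 32, 32)

gen = ConditionalGenerator(n_classes=2)
fake = gen(torch.randn(4, 128), torch.tensor([0, 1, 0, 1]))  # 4 images, 2 tissue classes
```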
- Improving anatomical plausibility in medical image segmentation via hybrid graph neural networks: applications to chest x-ray analysis [3.3382651833270587]
We introduce HybridGNet, an encoder-decoder neural architecture that leverages standard convolutions for image feature encoding and graph convolutional neural networks (GCNNs) to decode plausible representations of anatomical structures.
A novel image-to-graph skip connection layer allows localized features to flow from standard convolutional blocks to GCNN blocks, which is shown to improve segmentation accuracy.
arXiv Detail & Related papers (2022-03-21T13:37:23Z)
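One way to realize the image-to-graph skip connection described above is to sample the CNN feature map at each node's coordinates and concatenate the result with the node features before a graph convolution; the sketch below is a simplified stand-in, not the HybridGNet code.

```python
# Rough sketch of the image-to-graph idea (not the HybridGNet code): CNN
# features are sampled at each graph node's (x, y) location and concatenated
# with the node features before a simple graph convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj_norm):
        # adj_norm: (N, N) row-normalized adjacency with self-loops
        return F.relu(self.lin(adj_norm @ node_feats))

def image_to_graph_skip(feature_map, node_coords):
    """Sample CNN features at node positions.
    feature_map: (1, C, H, W); node_coords: (N, 2) in [-1, 1] as (x, y)."""
    grid = node_coords.view(1, 1, -1, 2)                  # (1, 1, N, 2)
    sampled = F.grid_sample(feature_map, grid, align_corners=True)
    return sampled.squeeze(0).squeeze(1).transpose(0, 1)  # (N, C)

# Toy usage: 16 landmark nodes decoded with localized image evidence.
feat = torch.randn(1, 32, 64, 64)    # encoder feature map
coords = torch.rand(16, 2) * 2 - 1   # node positions in normalized coordinates
nodes = torch.randn(16, 8)           # current node embeddings
adj = torch.eye(16)                  # trivial adjacency for the demo
gconv = SimpleGraphConv(8 + 32, 8)
out = gconv(torch.cat([nodes, image_to_graph_skip(feat, coords)], dim=1), adj)
```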
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
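The summary above does not spell out the sharpness loss; one plausible form, shown below purely as an illustration, penalizes the difference in Laplacian (high-frequency) content between generated and real images on top of the adversarial loss.

```python
# Illustrative guess at a "sharpness" regularizer (the exact Sharp-GAN loss is
# not given here): match the high-frequency (Laplacian) content of generated
# images to that of real images, added on top of the adversarial loss.
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def high_freq(img):
    """Per-channel Laplacian response of a (B, C, H, W) batch."""
    b, c, h, w = img.shape
    k = LAPLACIAN.to(img.device).repeat(c, 1, 1, 1)
    return F.conv2d(img, k, padding=1, groups=c)

def sharpness_loss(fake, real):
    return F.l1_loss(high_freq(fake), high_freq(real))

# Generator objective: adversarial term plus the sharpness regularizer, e.g.
# g_loss = adv_loss(d(fake)) + lambda_sharp * sharpness_loss(fake, real)
```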
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
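A minimal sketch of the ensembling step described above: classifier probabilities are averaged over the original image and its GAN-generated views; the generator and classifier here are placeholders, not the paper's code.

```python
# Sketch of the ensembling idea (placeholders, not the paper's code): run the
# classifier on several GAN-generated "views" of an image and average the
# softmax outputs with the prediction for the original image.
import torch

@torch.no_grad()
def ensemble_predict(classifier, views):
    """views: list of (1, C, H, W) tensors - the original image plus generated views."""
    probs = [torch.softmax(classifier(v), dim=1) for v in views]
    return torch.stack(probs).mean(dim=0)  # averaged class probabilities

# Usage (hypothetical helper): `generate_views` would invert the image into
# StyleGAN2 latent space and re-synthesize it with small latent perturbations.
# views = [image] + generate_views(image, n=8)
# prediction = ensemble_predict(classifier, views).argmax(dim=1)
```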
- SAFRON: Stitching Across the Frontier for Generating Colorectal Cancer Histology Images [2.486942181212742]
Synthetic images can be used for the development and evaluation of deep learning algorithms in the context of limited availability of data.
We propose a novel SAFRON framework to construct realistic, large, high-resolution tissue image tiles from ground truth annotations.
We show that the proposed method can generate realistic image tiles of arbitrarily large size after training on relatively small image patches.
arXiv Detail & Related papers (2020-08-11T05:47:00Z)
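Stitching arbitrarily large images from smaller generated patches can be illustrated with a generic overlap-and-blend routine; the sketch below is not SAFRON's actual seam handling, just the basic idea.

```python
# Generic sketch of stitching overlapping generated tiles into one large image
# with linear blending (SAFRON's actual seam handling is more involved).
import numpy as np

def stitch_tiles(tiles, coords, out_h, out_w, tile_size, overlap):
    """tiles: list of (tile_size, tile_size, 3) arrays; coords: top-left (y, x)."""
    canvas = np.zeros((out_h, out_w, 3), dtype=np.float64)
    weight = np.zeros((out_h, out_w, 1), dtype=np.float64)
    # 1D ramp that fades in/out over the overlap region, forming a 2D blend mask
    ramp = np.minimum(np.minimum(np.arange(tile_size) + 1, overlap),
                      np.minimum(tile_size - np.arange(tile_size), overlap)) / overlap
    mask = (ramp[:, None] * ramp[None, :])[..., None]
    for tile, (y, x) in zip(tiles, coords):
        canvas[y:y + tile_size, x:x + tile_size] += tile * mask
        weight[y:y + tile_size, x:x + tile_size] += mask
    return canvas / np.maximum(weight, 1e-8)

# Usage: tiles generated on a grid with stride = tile_size - overlap.
```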
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
We show that learning to synthesize images can yield remarkable hierarchical visual features that generalize across a wide range of applications.
The visual feature produced by our encoder, termed the Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
arXiv Detail & Related papers (2020-07-20T18:04:14Z)
- Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net [60.145440290349796]
The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsy by pathologists.
Computer-aided diagnosis systems make it possible to delineate and classify the cancerous patterns in the tissue.
The methodological core of this work is a U-Net convolutional neural network for image segmentation, modified with residual blocks, that is able to segment cancerous tissue.
arXiv Detail & Related papers (2020-05-22T19:49:10Z)
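As an illustration of the residual modification described above, here is a minimal residual block that could replace a plain double-convolution stage in a U-Net; it is a sketch, not the cited work's implementation.

```python
# Minimal residual block of the kind used to modify a U-Net encoder/decoder
# (a sketch, not the cited work's implementation).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channels
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Each down/up stage of the U-Net would use such blocks in place of plain
# double convolutions, e.g. ResidualBlock(64, 128) after a max-pooling step.
```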
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.