DEPAS: De-novo Pathology Semantic Masks using a Generative Model
- URL: http://arxiv.org/abs/2302.06513v1
- Date: Mon, 13 Feb 2023 16:48:33 GMT
- Authors: Ariel Larey, Nati Daniel, Eliel Aknin, Yael Fisher, Yonatan Savir
- Abstract summary: We introduce a scalable generative model, coined as DEPAS, that captures tissue structure and generates high-resolution semantic masks with state-of-the-art quality.
We demonstrate the ability of DEPAS to generate realistic semantic maps of tissue for three types of organs: skin, prostate, and lung.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of artificial intelligence into digital pathology has the
potential to automate and improve various tasks, such as image analysis and
diagnostic decision-making. Yet, the inherent variability of tissues, together
with the need for image labeling, leads to biased datasets that limit the
generalizability of algorithms trained on them. One of the emerging solutions
for this challenge is synthetic histological images. However, debiasing real
datasets requires not only generating photorealistic images but also the ability
to control the features within them. A common approach is to use generative
methods that perform image translation between semantic masks that reflect
prior knowledge of the tissue and a histological image. However, unlike other
image domains, the complex structure of the tissue prevents a simple creation
of histology semantic masks that are required as input to the image translation
model, while semantic masks extracted from real images reduce the process's
scalability. In this work, we introduce a scalable generative model, coined as
DEPAS, that captures tissue structure and generates high-resolution semantic
masks with state-of-the-art quality. We demonstrate the ability of DEPAS to
generate realistic semantic maps of tissue for three types of organs: skin,
prostate, and lung. Moreover, we show that these masks can be processed using a
generative image translation model to produce photorealistic histology images
of two types of cancer with two different types of staining techniques.
Finally, we harness DEPAS to generate multi-label semantic masks that capture
different cell types distributions and use them to produce histological images
with on-demand cellular features. Overall, our work provides a state-of-the-art
solution for the challenging task of generating synthetic histological images
while controlling their semantic information in a scalable way.
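The two-stage pipeline described in the abstract (mask generation, then mask-to-image translation) can be sketched schematically. This is NOT the DEPAS model: the mask "generator" below is just thresholded low-pass noise standing in for the trained GAN, and the "translation" step is a hand-coded colorizer standing in for a pix2pix/SPADE-type model; all function names are hypothetical.

```python
import numpy as np

def generate_semantic_mask(size=128, cutoff=6, seed=0):
    """Stand-in for a DEPAS-style mask generator (schematic only):
    low-pass-filtered Gaussian noise is thresholded into a binary
    tissue/background mask with tissue-scale blobs."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    f = np.fft.fftshift(np.fft.fft2(noise))
    yy, xx = np.ogrid[:size, :size]
    # Zero out high spatial frequencies to mimic tissue-scale structure.
    f[np.hypot(yy - size // 2, xx - size // 2) > cutoff] = 0
    smooth = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    return smooth > smooth.mean()

def translate_mask_to_image(mask, seed=0):
    """Stand-in for the image-translation stage (a trained conditional
    model such as pix2pix/SPADE in a real pipeline): paints tissue pixels
    in an eosin-like pink, background near-white, plus texture noise."""
    rng = np.random.default_rng(seed)
    img = np.full(mask.shape + (3,), 245.0)   # background: near-white
    img[mask] = (220.0, 160.0, 200.0)          # tissue: eosin-like pink
    img += rng.normal(0, 5, img.shape)         # mild texture noise
    return np.clip(img, 0, 255).astype(np.uint8)

mask = generate_semantic_mask()
image = translate_mask_to_image(mask)
```

The point of the sketch is the interface: the mask generator is sampled freely (scalable, no real-image annotation needed), and only the translation stage has to render realistic histology conditioned on that mask.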
Related papers
- PriorPath: Coarse-To-Fine Approach for Controlled De-Novo Pathology Semantic Masks Generation
We present a pipeline, coined PriorPath, that generates detailed, realistic, semantic masks derived from coarse-grained images.
This approach enables control over the spatial arrangement of the generated masks and, consequently, the resulting synthetic images.
arXiv Detail & Related papers (2024-11-25T15:57:19Z)
- MFCLIP: Multi-modal Fine-grained CLIP for Generalizable Diffusion Face Forgery Detection
The rapid development of photo-realistic face generation methods has raised significant concerns in society and academia.
Although existing approaches mainly capture face forgery patterns using image modality, other modalities like fine-grained noises and texts are not fully explored.
We propose a novel multi-modal fine-grained CLIP (MFCLIP) model, which mines comprehensive and fine-grained forgery traces across image-noise modalities.
arXiv Detail & Related papers (2024-09-15T13:08:59Z)
- Mask-guided cross-image attention for zero-shot in-silico histopathologic image generation with a diffusion model
Diffusion models are the state-of-the-art solution for generating in-silico images.
Appearance transfer diffusion models are designed for natural images.
In computational pathology, specifically in oncology, it is not straightforward to define which objects in an image should be classified as foreground and background.
We contribute to the applicability of appearance transfer models to diffusion-stained images by modifying the appearance transfer guidance to alternate between class-specific AdaIN feature-statistics matching.
arXiv Detail & Related papers (2024-07-16T12:36:26Z)
- Tissue-Contrastive Semi-Masked Autoencoders for Segmentation Pretraining on Chest CT
We propose a new MIM method named Tissue-Contrastive Semi-Masked Autoencoder (TCS-MAE) for modeling chest CT images.
Our method has two novel designs: 1) a tissue-based masking-reconstruction strategy to capture more fine-grained anatomical features, and 2) a dual-AE architecture with contrastive learning between the masked and original image views.
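The first design point, tissue-based masking, can be illustrated with a small sketch. This is an assumption about the general idea, not the TCS-MAE implementation: patches are chosen for masking with probability proportional to their tissue content, so the autoencoder reconstructs anatomy rather than empty background. All names are hypothetical.

```python
import numpy as np

def tissue_biased_patch_mask(tissue_mask, patch=16, mask_ratio=0.5, rng=None):
    """Illustrative tissue-biased patch masking: sample patches for
    masking weighted by the fraction of tissue pixels they contain."""
    rng = rng or np.random.default_rng(0)
    h, w = tissue_mask.shape
    gh, gw = h // patch, w // patch
    # Fraction of tissue pixels inside each patch of the grid.
    tissue_frac = (tissue_mask[:gh * patch, :gw * patch]
                   .reshape(gh, patch, gw, patch).mean(axis=(1, 3)))
    weights = tissue_frac + 1e-6          # keep all probabilities nonzero
    probs = (weights / weights.sum()).ravel()
    n_masked = int(mask_ratio * gh * gw)
    chosen = rng.choice(gh * gw, size=n_masked, replace=False, p=probs)
    grid = np.zeros(gh * gw, dtype=bool)
    grid[chosen] = True
    return grid.reshape(gh, gw)           # True = patch is masked out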
arXiv Detail & Related papers (2024-07-12T03:24:17Z)
- Learned representation-guided diffusion models for large-image generation
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Between Generating Noise and Generating Images: Noise in the Correct Frequency Improves the Quality of Synthetic Histopathology Images for Digital Pathology
Synthetic images can augment existing datasets, to improve and validate AI algorithms.
We show that introducing random single-pixel noise with the appropriate spatial frequency into a semantic mask can dramatically improve the quality of the synthetic images.
Our work suggests a simple and powerful approach for generating synthetic data on demand to unbias limited datasets.
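The core trick this summary describes is simple enough to sketch. The density of the injected flips sets the dominant spatial frequency of the noise; the exact recipe in the paper may differ, and the function name and parameters here are hypothetical.

```python
import numpy as np

def add_single_pixel_noise(mask, flip_prob=0.01, seed=0):
    """Illustrative sketch: sprinkle isolated single-pixel flips into a
    binary semantic mask at a chosen density before image translation."""
    rng = np.random.default_rng(seed)
    noisy = mask.copy()
    flips = rng.random(mask.shape) < flip_prob  # flip each pixel i.i.d.
    noisy[flips] = ~noisy[flips]
    return noisy
```

Lowering `flip_prob` spreads the flipped pixels farther apart, shifting the injected noise toward lower spatial frequencies.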
arXiv Detail & Related papers (2023-02-13T17:49:24Z)
- Deepfake histological images for enhancing digital pathology
We develop a generative adversarial network model that synthesizes pathology images constrained by class labels.
We investigate the ability of this framework in synthesizing realistic prostate and colon tissue images.
We extend the approach to significantly more complex images from colon biopsies and show that the complex microenvironment in such tissues can also be reproduced.
arXiv Detail & Related papers (2022-06-16T17:11:08Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image
Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Data-driven generation of plausible tissue geometries for realistic
photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.