NASDM: Nuclei-Aware Semantic Histopathology Image Generation Using
Diffusion Models
- URL: http://arxiv.org/abs/2303.11477v1
- Date: Mon, 20 Mar 2023 22:16:03 GMT
- Title: NASDM: Nuclei-Aware Semantic Histopathology Image Generation Using
Diffusion Models
- Authors: Aman Shrivastava, P. Thomas Fletcher
- Abstract summary: NASDM is a first-of-its-kind nuclei-aware semantic tissue generation framework.
It synthesizes realistic tissue samples given a semantic instance mask of up to six different nuclei types.
These synthetic images are useful for pathology pedagogy, model validation, and supplementing existing nuclei segmentation datasets.
- Score: 3.2996723916635267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, computational pathology has seen tremendous progress driven
by deep learning methods in segmentation and classification tasks aiding
prognostic and diagnostic settings. Nuclei segmentation, for instance, is an
important task for diagnosing different cancers. However, training deep
learning models for nuclei segmentation requires large amounts of annotated
data, which is expensive to collect and label. This necessitates explorations
into generative modeling of histopathological images. In this work, we use
recent advances in conditional diffusion modeling to formulate a
first-of-its-kind nuclei-aware semantic tissue generation framework (NASDM)
which can synthesize realistic tissue samples given a semantic instance mask of
up to six different nuclei types, enabling pixel-perfect nuclei localization in
generated samples. These synthetic images are useful in applications in
pathology pedagogy, validation of models, and supplementation of existing
nuclei segmentation datasets. We demonstrate that NASDM synthesizes histopathology
images of the colon with superior quality and semantic controllability compared to
existing generative methods.
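The abstract describes conditioning a diffusion model on a multi-class semantic instance mask so that nuclei positions are known exactly in the generated image. As a point of reference, the sketch below shows one common way such mask conditioning can be wired into a denoising (epsilon-prediction) training step in PyTorch: the mask is one-hot encoded and concatenated to the noisy image as extra input channels. The tiny network, linear noise schedule, and seven-class label space (background plus six nuclei types) are simplifying assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of semantic-mask-conditioned diffusion training (illustrative,
# NOT the NASDM implementation). Assumes images scaled to [-1, 1] and masks
# holding integer class labels 0..NUM_CLASSES-1 per pixel.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 7   # background + six nuclei types (assumption)
T = 1000          # number of diffusion timesteps
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class NoisePredictor(nn.Module):
    """Toy stand-in for a denoising U-Net; input is noisy image + one-hot mask + timestep."""
    def __init__(self, img_ch=3, cond_ch=NUM_CLASSES, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + cond_ch + 1, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, img_ch, 3, padding=1),
        )

    def forward(self, x_t, mask_onehot, t):
        # Broadcast the normalized timestep as one extra conditioning channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, mask_onehot, t_map], dim=1))

def training_step(model, optimizer, x0, mask):
    """One mask-conditioned epsilon-prediction step (standard DDPM objective)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise       # forward (noising) process
    mask_onehot = F.one_hot(mask, NUM_CLASSES).permute(0, 3, 1, 2).float()
    loss = F.mse_loss(model(x_t, mask_onehot, t), noise)          # predict the added noise
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

At sampling time the same mask would be fed to the network at every reverse step, which is what makes pixel-level nuclei localization in the generated sample possible.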
Related papers
- Comparative Analysis of Diffusion Generative Models in Computational Pathology [11.698817924231854]
Diffusion Generative Models (DGMs) have rapidly emerged as a prominent topic in computer vision.
This paper presents an in-depth comparative analysis of diffusion methods applied to a pathology dataset.
Our analysis extends to datasets with varying Fields of View (FOV), revealing that DGMs are highly effective in producing high-quality synthetic data.
arXiv Detail & Related papers (2024-11-24T05:09:43Z)
- HistoSPACE: Histology-Inspired Spatial Transcriptome Prediction And Characterization Engine [0.0]
The HistoSPACE model explores the diversity of histological images available with spatial transcriptomics (ST) data to extract molecular insights from tissue images.
The model demonstrates significant efficiency compared to contemporary algorithms, achieving a correlation of 0.56 in leave-one-out cross-validation.
arXiv Detail & Related papers (2024-08-07T07:12:52Z)
- Co-synthesis of Histopathology Nuclei Image-Label Pairs using a Context-Conditioned Joint Diffusion Model [3.677055050765245]
We introduce a novel framework for co-synthesizing histopathology nuclei images and paired semantic labels.
We demonstrate the effectiveness of our framework in generating high-quality samples on multi-institutional, multi-organ, and multi-modality datasets.
arXiv Detail & Related papers (2024-07-19T16:06:11Z)
- Controllable and Efficient Multi-Class Pathology Nuclei Data Augmentation using Text-Conditioned Diffusion Models [4.1326413814647545]
We introduce a novel two-stage framework for multi-class nuclei data augmentation using text-conditional diffusion models.
In the first stage, we perform nuclei label synthesis by generating multi-class semantic labels.
In the second stage, we utilize a semantic and text-conditional latent diffusion model to efficiently generate high-quality pathology images.
arXiv Detail & Related papers (2024-07-19T15:53:44Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting a 10%-labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- PathLDM: Text conditioned Latent Diffusion Model for Histopathology [62.970593674481414]
We introduce PathLDM, the first text-conditioned Latent Diffusion Model tailored for generating high-quality histopathology images.
Our approach fuses image and textual data to enhance the generation process.
We achieved a SoTA FID score of 7.64 for text-to-image generation on the TCGA-BRCA dataset, significantly outperforming the closest text-conditioned competitor with FID 30.1 (a minimal sketch of the FID computation is given after this list).
arXiv Detail & Related papers (2023-09-01T22:08:32Z)
- Structure Embedded Nucleus Classification for Histopathology Images [51.02953253067348]
Most neural network-based methods are limited by the local receptive field of convolutions.
We propose a novel polygon-structure feature learning mechanism that transforms a nucleus contour into a sequence of points sampled in order.
Next, we convert a histopathology image into a graph structure with nuclei as nodes, and build a graph neural network to embed the spatial distribution of nuclei into their representations.
arXiv Detail & Related papers (2023-02-22T14:52:06Z)
- A Morphology Focused Diffusion Probabilistic Model for Synthesis of Histopathology Images [0.5541644538483947]
Deep learning methods have made significant advances in the analysis and classification of tissue images.
The paper presents a morphology-focused diffusion probabilistic model for synthesizing histopathology images; these synthetic images have several applications in pathology, including education, proficiency testing, privacy, and data sharing.
arXiv Detail & Related papers (2022-09-27T05:58:35Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
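The PathLDM entry above reports generation quality as FID scores; for reference, below is a minimal sketch of the standard FID computation over pre-extracted feature vectors. It assumes Inception-style features have already been extracted for a set of real and a set of generated images; the function and variable names are illustrative and do not come from any of the cited papers.

```python
# Minimal FID sketch over pre-extracted features (illustrative, not from the cited papers).
# FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 * (C_r C_g)^(1/2))
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """feats_* are (N, D) arrays of features (e.g. Inception pool features) for real/generated images."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):   # tiny imaginary parts can appear from numerical error
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

Lower is better; in practice both feature sets should contain thousands of samples so that the mean and covariance estimates are stable.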
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.