Diffusion-based generation of Histopathological Whole Slide Images at a
Gigapixel scale
- URL: http://arxiv.org/abs/2311.08199v1
- Date: Tue, 14 Nov 2023 14:33:39 GMT
- Title: Diffusion-based generation of Histopathological Whole Slide Images at a
Gigapixel scale
- Authors: Robert Harb, Thomas Pock, Heimo Müller
- Abstract summary: Synthetic Whole Slide Images (WSIs) can augment training datasets to enhance the performance of many computational applications.
No existing deep-learning-based method generates WSIs at their typically high resolutions.
We present a novel coarse-to-fine sampling scheme to tackle image generation of high-resolution WSIs.
- Score: 10.481781668319886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel diffusion-based approach to generate synthetic
histopathological Whole Slide Images (WSIs) at an unprecedented gigapixel
scale. Synthetic WSIs have many potential applications: They can augment
training datasets to enhance the performance of many computational pathology
applications. They allow the creation of synthesized copies of datasets that
can be shared without violating privacy regulations. They can also facilitate
learning representations of WSIs without requiring data annotations. Despite
this variety of applications, no existing deep-learning-based method generates
WSIs at their typically high resolutions, mainly due to the high computational
complexity. We therefore propose a novel coarse-to-fine sampling scheme to
tackle image generation of high-resolution WSIs. In this scheme, we increase
the resolution of an initial low-resolution image to a high-resolution WSI.
Specifically, a diffusion model sequentially adds fine details to images and
increases their resolution. In our experiments, we train our method on WSIs
from the TCGA-BRCA dataset. In addition to quantitative evaluations, we
performed a user study with pathologists. The study results suggest that our
generated WSIs resemble the structure of real WSIs.
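As a rough illustration of this coarse-to-fine scheme, the sketch below repeatedly doubles the resolution of a low-resolution seed image and lets a diffusion model add finer detail at each stage. It is a minimal, hypothetical sketch rather than the paper's released code: `eps_model` is an assumed noise-prediction network, the per-stage refinement uses a generic SDEdit/DDIM-style pass, and the patch-wise processing that a true gigapixel output would require is omitted.

```python
import torch
import torch.nn.functional as F

def make_alpha_bars(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Standard linear beta schedule and its cumulative alpha products."""
    betas = torch.linspace(beta_start, beta_end, num_steps)
    return torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def refine_with_diffusion(eps_model, image, alpha_bars, start_step=250):
    """Noise the upsampled image to `start_step`, then run a deterministic
    DDIM (eta = 0) reverse pass back to step 0 so the model can add detail.
    `eps_model(x, t)` is assumed to predict the noise contained in x."""
    a = alpha_bars[start_step]
    x = a.sqrt() * image + (1 - a).sqrt() * torch.randn_like(image)
    for t in range(start_step, 0, -1):
        a_t, a_prev = alpha_bars[t], alpha_bars[t - 1]
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = eps_model(x, t_batch)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()        # predicted clean image
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps    # DDIM update, eta = 0
    return x

@torch.no_grad()
def coarse_to_fine_sample(eps_model, low_res, num_stages=4):
    """Grow a low-resolution seed into a high-resolution image by repeatedly
    doubling its resolution and refining it with the diffusion model."""
    alpha_bars = make_alpha_bars().to(low_res.device)
    image = low_res
    for _ in range(num_stages):
        image = F.interpolate(image, scale_factor=2, mode="bilinear",
                              align_corners=False)
        image = refine_with_diffusion(eps_model, image, alpha_bars)
    return image
```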
Related papers
- Clustered Patch Embeddings for Permutation-Invariant Classification of Whole Slide Images [2.6733991338938026]
Whole Slide Imaging (WSI) is a cornerstone of digital pathology, offering detailed insights critical for diagnosis and research.
Yet, the gigapixel size of WSIs imposes significant computational challenges, limiting their practical utility.
Our novel approach addresses these challenges by leveraging various encoders for intelligent data reduction and employing a different classification model to ensure robust, permutation-invariant representations of WSIs.
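A minimal sketch of the permutation-invariant representation idea, assuming a common attention-pooling (ABMIL-style) formulation rather than this paper's specific encoders and classifier: patch embeddings from any encoder are aggregated by a weighted sum, so the slide-level prediction is independent of patch order.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Permutation-invariant slide classifier: a weighted average of patch
    embeddings, with weights produced by a small attention network."""
    def __init__(self, embed_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings):                    # (num_patches, embed_dim)
        weights = torch.softmax(self.attention(patch_embeddings), dim=0)
        slide_embedding = (weights * patch_embeddings).sum(dim=0)  # order-independent
        return self.classifier(slide_embedding)

# Shuffling the rows of `embeddings` leaves the logits unchanged.
embeddings = torch.randn(1000, 512)
logits = AttentionPooling()(embeddings)
```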
arXiv Detail & Related papers (2024-11-13T11:25:05Z)
- HSIGene: A Foundation Model For Hyperspectral Image Generation [46.745198868466545]
Hyperspectral images (HSIs) play a vital role in various fields such as agriculture and environmental monitoring.
Due to the expensive acquisition cost, the number of hyperspectral images is limited, degrading the performance of downstream tasks.
We propose HSIGene, a novel HSI generation foundation model which is based on latent diffusion and supports multi-condition control.
Experiments demonstrate that the proposed model is capable of generating a vast quantity of realistic HSIs for downstream tasks such as denoising and super-resolution.
arXiv Detail & Related papers (2024-09-19T05:17:44Z)
- SPLICE -- Streamlining Digital Pathology Image Processing [0.7852714805965528]
We propose an unsupervised patching algorithm, Sequential Patching Lattice for Image Classification and Enquiry (SPLICE).
SPLICE condenses a histopathology WSI into a compact set of representative patches, forming a "collage" of the WSI while minimizing redundancy.
As an unsupervised method, SPLICE effectively reduces storage requirements for representing tissue images by 50%.
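The abstract does not spell out SPLICE's exact procedure, so the sketch below only approximates the stated goal, a compact low-redundancy "collage", with a simple greedy filter over hypothetical patch features: a patch is kept only if it lies sufficiently far from every patch already kept.

```python
import numpy as np

def select_representative_patches(features, threshold):
    """Greedy redundancy reduction: keep a patch only if its nearest
    already-kept patch is farther than `threshold` in feature space.
    `features` is a (num_patches, dim) array from any patch encoder."""
    kept_indices, kept_features = [], []
    for i, f in enumerate(features):
        if not kept_features:
            kept_indices.append(i)
            kept_features.append(f)
            continue
        distances = np.linalg.norm(np.stack(kept_features) - f, axis=1)
        if distances.min() > threshold:          # sufficiently novel -> keep
            kept_indices.append(i)
            kept_features.append(f)
    return kept_indices

# Toy usage with random features; the returned indices form the "collage".
patch_features = np.random.randn(2000, 128)
collage_indices = select_representative_patches(patch_features, threshold=15.0)
```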
arXiv Detail & Related papers (2024-04-26T21:30:36Z)
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z)
- An efficient dual-branch framework via implicit self-texture enhancement for arbitrary-scale histopathology image super-resolution [18.881480825169053]
We propose an Implicit Self-Texture Enhancement-based dual-branch framework (ISTE) for arbitrary-scale SR of histopathology images.
ISTE outperforms existing fixed-scale and arbitrary-scale SR algorithms across various scaling factors.
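Arbitrary-scale super-resolution is commonly realized with an implicit, coordinate-conditioned decoder (LIIF-style): encoder features are sampled at continuous target positions and a small MLP predicts the pixel values, so any scaling factor can be queried. The snippet below sketches that general pattern only; it is not the ISTE dual-branch architecture, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoder(nn.Module):
    """Predict RGB at arbitrary continuous coordinates from a feature map,
    which lets a single model serve any (even non-integer) scaling factor."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, feat_map, coords):
        # feat_map: (B, C, H, W); coords: (B, N, 2) in [-1, 1], ordered (x, y)
        sampled = F.grid_sample(feat_map, coords.unsqueeze(1),
                                mode="bilinear", align_corners=False)  # (B, C, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)                  # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))          # (B, N, 3)

# Query a 4.3x upscaled grid; any other factor works the same way.
feat = torch.randn(1, 64, 32, 32)
h, w = int(32 * 4.3), int(32 * 4.3)
ys, xs = torch.linspace(-1, 1, h), torch.linspace(-1, 1, w)
grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)      # (h, w, 2) as (y, x)
coords = grid.flip(-1).reshape(1, -1, 2)                               # reorder to (x, y)
rgb = ImplicitDecoder()(feat, coords).reshape(1, h, w, 3)
```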
arXiv Detail & Related papers (2024-01-28T10:00:45Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
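One common way to condition a diffusion denoiser on such an SSL embedding is FiLM-style modulation, in which the embedding predicts a per-channel scale and shift for the denoiser's feature maps. The block below is a generic sketch of that mechanism, not the authors' model; the embedding dimension (384) is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

class ConditionedBlock(nn.Module):
    """One denoiser block whose feature maps are modulated by an SSL embedding
    (FiLM-style): the embedding predicts a per-channel scale and shift."""
    def __init__(self, channels=128, cond_dim=384):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, ssl_embedding):
        scale, shift = self.to_scale_shift(ssl_embedding).chunk(2, dim=-1)
        h = self.conv(x)
        # broadcast (B, C) -> (B, C, 1, 1) so the modulation applies per channel
        return h * (1 + scale[..., None, None]) + shift[..., None, None]

# The SSL embedding could come from any frozen self-supervised patch encoder.
x = torch.randn(4, 128, 32, 32)
cond = torch.randn(4, 384)
out = ConditionedBlock()(x, cond)
```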
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- ADASR: An Adversarial Auto-Augmentation Framework for Hyperspectral and Multispectral Data Fusion [54.668445421149364]
Deep learning-based hyperspectral image (HSI) super-resolution aims to generate high-spatial-resolution HSI (HR-HSI) by fusing an HSI and a multispectral image (MSI) with deep neural networks (DNNs).
In this letter, we propose ADASR, a novel adversarial automatic data augmentation framework that automatically optimizes and augments HSI-MSI sample pairs to enrich data diversity for HSI-MSI fusion.
arXiv Detail & Related papers (2023-10-11T07:30:37Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representations of gigapixel whole slide pathology images (WSIs) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.