Maximum Mean Discrepancy Kernels for Predictive and Prognostic Modeling
of Whole Slide Images
- URL: http://arxiv.org/abs/2301.09624v1
- Date: Mon, 23 Jan 2023 18:47:41 GMT
- Title: Maximum Mean Discrepancy Kernels for Predictive and Prognostic Modeling
of Whole Slide Images
- Authors: Piotr Keller, Muhammad Dawood, Fayyaz ul Amir Afsar Minhas
- Abstract summary: In computational pathology, Whole Slide Images (WSIs) of digitally scanned tissue samples from patients can be multi-gigapixels in size.
We explore a novel strategy based on kernelized Maximum Mean Discrepancy (MMD) analysis for determination of pairwise similarity between WSIs.
We believe that this work will open up further avenues for application of WSI-level kernels for predictive and prognostic tasks in computational pathology.
- Score: 1.418033127602866
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: How similar are two images? In computational pathology, where Whole Slide
Images (WSIs) of digitally scanned tissue samples from patients can be
multi-gigapixels in size, determination of degree of similarity between two
WSIs is a challenging task with a number of practical applications. In this
work, we explore a novel strategy based on kernelized Maximum Mean Discrepancy
(MMD) analysis for determination of pairwise similarity between WSIs. The
proposed approach works by calculating MMD between two WSIs using kernels over
deep features of image patches. This allows an entire dataset of WSIs to be
represented as a kernel matrix for WSI-level clustering, weakly-supervised
prediction of TP-53 mutation status in breast cancer patients from their
routine WSIs, as well as survival analysis with state-of-the-art prediction
performance. We believe that this work will open up further avenues for
application of WSI-level kernels for predictive and prognostic tasks in
computational pathology.
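The core idea above can be sketched in a few lines: estimate the squared MMD between two WSIs from kernels over their patch-level deep features, then derive a WSI-level kernel from the pairwise MMD values. This is a minimal illustration, not the authors' implementation; the RBF bandwidth, feature dimensionality, and the `exp(-MMD^2)` kernelization are assumptions, and random vectors stand in for real deep patch features.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """RBF kernel matrix between rows of A and rows of B."""
    # Pairwise squared Euclidean distances, computed via the expansion
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(X, Y, gamma=0.1):
    """Biased squared-MMD estimate between two sets of patch features."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Hypothetical example: two WSIs, each represented by 200 patch feature
# vectors of dimension 64 (random stand-ins for deep features).
rng = np.random.default_rng(0)
wsi_a = rng.normal(0.0, 1.0, size=(200, 64))
wsi_b = rng.normal(0.5, 1.0, size=(200, 64))
print(mmd2(wsi_a, wsi_a))  # → 0.0 (identical patch sets)
print(mmd2(wsi_a, wsi_b))  # positive for differing distributions

# A dataset-level WSI kernel can then be formed, e.g. K_ij = exp(-MMD^2_ij),
# usable for clustering or kernel-based prediction over whole slides.
slides = [rng.normal(0.1 * i, 1.0, size=(50, 64)) for i in range(3)]
K = np.array([[np.exp(-mmd2(a, b)) for b in slides] for a in slides])
print(K.shape)  # (3, 3), with ones on the diagonal
```

The biased estimator used here averages all kernel entries, including the diagonal; unbiased variants exclude self-pairs but the structure is the same.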
Related papers
- PathAlign: A vision-language model for whole slide images in histopathology [13.567674461880905]
We develop a vision-language model based on the BLIP-2 framework using WSIs and curated text from pathology reports.
This enables applications utilizing a shared image-text embedding space, such as text or image retrieval for finding cases of interest.
We present pathologist evaluation of text generation and text retrieval using WSI embeddings, as well as results for WSI classification and workflow prioritization.
arXiv Detail & Related papers (2024-06-27T23:43:36Z)
- SPLICE -- Streamlining Digital Pathology Image Processing [0.7852714805965528]
We propose an unsupervised patching algorithm, Sequential Patching Lattice for Image Classification and Enquiry (SPLICE)
SPLICE condenses a histopathology WSI into a compact set of representative patches, forming a "collage" of WSI while minimizing redundancy.
As an unsupervised method, SPLICE effectively reduces storage requirements for representing tissue images by 50%.
arXiv Detail & Related papers (2024-04-26T21:30:36Z)
- Adversary-Robust Graph-Based Learning of WSIs [2.9998889086656586]
Whole slide images (WSIs) are high-resolution, digitized versions of tissue samples mounted on glass slides, scanned using sophisticated imaging equipment.
The digital analysis of WSIs presents unique challenges due to their gigapixel size and multi-resolution storage format.
We develop a novel and innovative graph-based model which utilizes GNN to extract features from the graph representation of WSIs.
arXiv Detail & Related papers (2024-03-21T15:37:37Z)
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Pay Attention with Focus: A Novel Learning Scheme for Classification of Whole Slide Images [8.416553728391309]
We propose a novel two-stage approach to analyze whole slide images (WSIs).
First, we extract a set of representative patches (called mosaic) from a WSI.
Each patch of a mosaic is encoded to a feature vector using a deep network.
In the second stage, a set of encoded patch-level features from a WSI is used to compute the primary diagnosis probability.
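The two-stage pipeline summarized above can be sketched as follows. This is an illustrative outline only: the mosaic-selection rule, the 64-dimensional features (random stand-ins for deep-network encodings), and the linear-plus-sigmoid head are all assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 (sketched): a WSI is reduced to a "mosaic" of representative
# patches, each encoded to a feature vector by a deep network. Here, 30
# random 64-d vectors stand in for those encoded patches.
mosaic_features = rng.normal(size=(30, 64))

# Stage 2 (sketched): pool the patch-level features into a slide-level
# descriptor, then map it to a primary-diagnosis probability with a
# hypothetical linear head followed by a sigmoid.
w = rng.normal(size=64)   # placeholder classifier weights
b = 0.0                   # placeholder bias
slide_feature = mosaic_features.mean(axis=0)   # simple mean pooling
logit = float(slide_feature @ w + b)
prob = 1.0 / (1.0 + np.exp(-logit))
print(prob)  # a value in (0, 1)
```

In practice the pooling step is often attention-weighted rather than a plain mean, but the overall patch-encode-then-aggregate structure is the same.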
arXiv Detail & Related papers (2021-06-11T21:59:02Z)
- SOSD-Net: Joint Semantic Object Segmentation and Depth Estimation from Monocular images [94.36401543589523]
We introduce the concept of semantic objectness to exploit the geometric relationship of these two tasks.
We then propose a Semantic Object and Depth Estimation Network (SOSD-Net) based on the objectness assumption.
To the best of our knowledge, SOSD-Net is the first network that exploits the geometry constraint for simultaneous monocular depth estimation and semantic segmentation.
arXiv Detail & Related papers (2021-01-19T02:41:03Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA)
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.