Interpretability analysis on a pathology foundation model reveals biologically relevant embeddings across modalities
- URL: http://arxiv.org/abs/2407.10785v1
- Date: Mon, 15 Jul 2024 15:03:01 GMT
- Title: Interpretability analysis on a pathology foundation model reveals biologically relevant embeddings across modalities
- Authors: Nhat Le, Ciyue Shen, Chintan Shah, Blake Martin, Daniel Shenker, Harshith Padigela, Jennifer Hipp, Sean Grullon, John Abel, Harsha Vardhan Pokkalla, Dinkar Juyal,
- Abstract summary: We analyze the features from a ViT-Small encoder obtained from a pathology Foundation Model via application to two datasets.
We discover an interpretable representation of cell and tissue morphology, along with gene expression within the model embedding space.
- Score: 1.4602325266401266
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mechanistic interpretability has been explored in detail for large language models (LLMs). For the first time, we provide a preliminary investigation with similar interpretability methods for medical imaging. Specifically, we analyze the features from a ViT-Small encoder obtained from a pathology Foundation Model via application to two datasets: one dataset of pathology images, and one dataset of pathology images paired with spatial transcriptomics. We discover an interpretable representation of cell and tissue morphology, along with gene expression within the model embedding space. Our work paves the way for further exploration around interpretable feature dimensions and their utility for medical and clinical applications.
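The core analysis the abstract describes — searching a foundation model's embedding space for dimensions that align with biological signals such as gene expression — can be illustrated with a toy probe. This is a minimal numpy sketch, not the paper's actual pipeline: the embeddings and the gene-expression values below are synthetic stand-ins, and the dimension sizes are only borrowed from ViT-Small for flavor.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tiles, embed_dim = 200, 384          # ViT-Small produces 384-d embeddings
gene_expr = rng.normal(size=n_tiles)   # per-tile expression of one gene (synthetic)

# Synthetic embeddings in which dimension 7 linearly encodes the gene signal.
embeddings = rng.normal(size=(n_tiles, embed_dim))
embeddings[:, 7] += 2.0 * gene_expr

def top_correlated_dimension(embeddings, signal):
    """Return the embedding dimension whose Pearson correlation with
    a scalar per-tile signal is largest in magnitude."""
    e = embeddings - embeddings.mean(axis=0)
    s = signal - signal.mean()
    corr = (e * s[:, None]).sum(axis=0) / (
        np.linalg.norm(e, axis=0) * np.linalg.norm(s)
    )
    return int(np.argmax(np.abs(corr))), corr

dim, corr = top_correlated_dimension(embeddings, gene_expr)
print(dim)  # recovers the planted dimension, 7
```

On real data the same correlation scan, run per gene, is one simple way to surface candidate "interpretable" embedding dimensions for closer inspection.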
Related papers
- Anatomy-guided Pathology Segmentation [56.883822515800205]
We develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features.
Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy.
In doing so, we are able to report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks with a margin of up to 3.3% as compared to strong baseline methods.
arXiv Detail & Related papers (2024-07-08T11:44:15Z)
- PLUTO: Pathology-Universal Transformer [4.920983796208486]
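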
We propose PathoLogy Universal TransfOrmer (PLUTO): a light-weight pathology FM that is pre-trained on a diverse dataset of 195 million image tiles.
We design task-specific adaptation heads that utilize PLUTO's output embeddings for tasks which span pathology scales.
We find that PLUTO matches or outperforms existing task-specific baselines and pathology-specific foundation models.
arXiv Detail & Related papers (2024-05-13T16:40:17Z)
- HistoSegCap: Capsules for Weakly-Supervised Semantic Segmentation of Histological Tissue Type in Whole Slide Images [19.975420988169454]
Digital pathology involves converting physical tissue slides into high-resolution Whole Slide Images (WSIs)
Large histology slides with numerous microscopic fields pose challenges for visual search.
Computer Aided Diagnosis (CAD) systems offer visual assistance in efficiently examining WSIs and identifying diagnostically relevant regions.
arXiv Detail & Related papers (2024-02-16T17:44:11Z)
- Spatially Resolved Gene Expression Prediction from H&E Histology Images via Bi-modal Contrastive Learning [4.067498002241427]
We present BLEEP (Bi-modaL Embedding for Expression Prediction), a bi-modal embedding framework capable of generating spatially resolved gene expression profiles.
BLEEP uses contrastive learning to construct a low-dimensional joint embedding space from a reference dataset using paired image and expression profiles at micrometer resolution.
We demonstrate BLEEP's effectiveness in gene expression prediction by benchmarking its performance on a human liver tissue dataset captured using the 10x Visium platform.
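BLEEP's contrastive construction of a joint image–expression embedding space follows the familiar symmetric InfoNCE recipe: paired image and expression embeddings should be more similar to each other than to any other sample in the batch. A minimal numpy sketch of that loss (illustrative only; BLEEP's actual encoders, batch construction, and temperature are not reproduced here):

```python
import numpy as np

def info_nce(img_emb, expr_emb, temperature=0.07):
    """Symmetric InfoNCE loss over N paired image / expression embeddings.

    Rows of the two arrays are assumed to be matched pairs; the diagonal of
    the similarity matrix is the positive target in both directions."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    expr = expr_emb / np.linalg.norm(expr_emb, axis=1, keepdims=True)
    logits = img @ expr.T / temperature          # (N, N) cosine similarities

    def ce(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))      # diagonal = true pairs

    return 0.5 * (ce(logits) + ce(logits.T))
```

Identical paired embeddings drive the loss toward zero, while mismatched pairs are penalized, which is what pulls the two modalities into a shared low-dimensional space.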
arXiv Detail & Related papers (2023-06-02T18:27:26Z)
- Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction [3.2274401541163322]
We propose a memory-efficient multimodal Transformer that can model interactions between pathway and histology patch tokens.
Our proposed model, SURVPATH, achieves state-of-the-art performance when evaluated against both unimodal and multimodal baselines.
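The "interactions between pathway and histology patch tokens" that SURVPATH models are a form of cross-attention: pathway tokens query a (potentially very large) set of patch tokens. A single-head scaled dot-product sketch in numpy — token counts and dimensions are illustrative, not taken from the paper:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention (pathway -> patches)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)       # (n_q, n_kv)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ values                      # (n_q, d)

rng = np.random.default_rng(1)
pathway_tokens = rng.normal(size=(50, 64))    # e.g. 50 pathway tokens
patch_tokens = rng.normal(size=(4096, 64))    # many histology patch tokens
out = cross_attention(pathway_tokens, patch_tokens, patch_tokens)
print(out.shape)  # (50, 64)
```

The memory cost of the full attention matrix grows with the patch count, which is why a memory-efficient formulation matters for gigapixel whole-slide images.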
arXiv Detail & Related papers (2023-04-13T21:02:32Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO dataset.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- HistoCartography: A Toolkit for Graph Analytics in Digital Pathology [0.6299766708197883]
HistoCartography is a standardized python API with necessary preprocessing, machine learning and explainability tools to facilitate graph-analytics in computational pathology.
We have benchmarked the computational time and performance on multiple datasets across different imaging types and histopathology tasks.
arXiv Detail & Related papers (2021-07-21T13:34:14Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
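The idea behind a cross reconstruction loss — each view must be reconstructable from the *other* view's latent code, which forces shared information into both latents — can be shown in a few lines. A toy numpy sketch with linear decoders; the variable names, generation process, and linear form are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_lat, d_view = 32, 8, 16

# Toy data: two views generated from one shared latent code.
z = rng.normal(size=(n, d_lat))
W1 = rng.normal(size=(d_lat, d_view))
W2 = rng.normal(size=(d_lat, d_view))
x1, x2 = z @ W1, z @ W2

def cross_reconstruction_loss(z1, z2, x1, x2, D1, D2):
    """Decode view 1 from view 2's latent and vice versa; sum the MSEs."""
    return np.mean((z2 @ D1 - x1) ** 2) + np.mean((z1 @ D2 - x2) ** 2)

# When both views share the same latent and the decoders match the
# generating maps, the cross reconstruction loss vanishes.
loss = cross_reconstruction_loss(z, z, x1, x2, W1, W2)
print(round(loss, 6))  # 0.0
```

In training, minimizing this term encourages the two views' latent codes to agree on the common information while the adversarial component handles the complementary part.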
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.