Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in
Multivariate Image Data
- URL: http://arxiv.org/abs/2110.04875v1
- Date: Sun, 10 Oct 2021 18:34:13 GMT
- Title: Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in
Multivariate Image Data
- Authors: Jared Jessup (1 and 2), Robert Krueger (1 and 2 and 3), Simon Warchol
(2), John Hoffer (3), Jeremy Muhlich (3), Cecily C. Ritch (4), Giorgio Gaglia
(4), Shannon Coy (4), Yu-An Chen (3), Jia-Ren Lin (3), Sandro Santagata (4),
Peter K. Sorger (3), Hanspeter Pfister (1) ((1) Robert Krueger and Jared
Jessup contributed equally to this work, (2) School of Engineering and
Applied Sciences, Harvard University, (3) Laboratory of Systems Pharmacology,
Harvard Medical School, (4) Brigham and Women's Hospital, Harvard Medical
School)
- Abstract summary: Scope2Screen is a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images.
Our approach scales to analyzing 100GB images of 10^9 or more pixels per channel, containing millions of cells.
We present interactive lensing techniques that operate at single-cell and tissue levels.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspection of tissues using a light microscope is the primary method of
diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging
builds on this foundation, enabling the collection of up to 60 channels of
molecular information plus cell and tissue morphology using antibody staining.
This provides unique insight into disease biology and promises to help with the
design of patient-specific therapies. However, a substantial gap remains with
respect to visualizing the resulting multivariate image data and effectively
supporting pathology workflows in digital environments on screen. We therefore
developed Scope2Screen, a scalable software system for focus+context
exploration and annotation of whole-slide, high-plex, tissue images. Our
approach scales to analyzing 100GB images of 10^9 or more pixels per channel,
containing millions of cells. A multidisciplinary team of visualization
experts, microscopists, and pathologists identified key image exploration and
annotation tasks involving finding, magnifying, quantifying, and organizing
ROIs in an intuitive and cohesive manner. Building on a scope2screen metaphor,
we present interactive lensing techniques that operate at single-cell and
tissue levels. Lenses are equipped with task-specific functionality and
descriptive statistics, making it possible to analyze image features, cell
types, and spatial arrangements (neighborhoods) across image channels and
scales. A fast sliding-window search guides users to regions similar to those
under the lens; these regions can be analyzed and considered either separately
or as part of a larger image collection. A novel snapshot method enables linked
lens configurations and image statistics to be saved, restored, and shared. We
validate our designs with domain experts and apply Scope2Screen in two case
studies involving lung and colorectal cancers to discover cancer-relevant image
features.
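The abstract's "fast sliding-window search" that guides users to regions similar to the one under the lens can be illustrated with a minimal sketch. This is not the paper's implementation (which operates on multi-channel whole-slide images at scale); it assumes, for illustration, that each window is summarized by its per-channel mean intensity and compared to the lens region by cosine similarity:

```python
import numpy as np

def sliding_window_similarity(image, lens, stride=1):
    """Rank image windows by cosine similarity to the region under the lens.

    image: (H, W, C) multiplexed image; lens: (h, w, C) focus region.
    Returns a similarity map over all (strided) window positions.
    Per-channel mean features and cosine similarity are illustrative
    assumptions, not the paper's actual descriptors.
    """
    h, w, c = lens.shape
    # Summarize the lens region by its mean intensity per channel.
    ref = lens.reshape(-1, c).mean(axis=0)
    ref = ref / (np.linalg.norm(ref) + 1e-12)

    H, W, _ = image.shape
    rows = range(0, H - h + 1, stride)
    cols = range(0, W - w + 1, stride)
    sim = np.empty((len(rows), len(cols)), dtype=np.float64)
    for i, r in enumerate(rows):
        for j, col in enumerate(cols):
            # Mean channel profile of this window, compared to the lens.
            win = image[r:r + h, col:col + w].reshape(-1, c).mean(axis=0)
            sim[i, j] = win @ ref / (np.linalg.norm(win) + 1e-12)
    return sim
```

The highest-scoring positions in the returned map correspond to candidate regions to surface to the pathologist; a production system would precompute window statistics (e.g. via integral images) rather than re-averaging each window.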
Related papers
- Multiplex Imaging Analysis in Pathology: a Comprehensive Review on Analytical Approaches and Digital Toolkits [0.7968706282619793]
Multiplexed imaging allows for the simultaneous visualization of multiple biomarkers in a single tissue section.
Data from multiplexed imaging requires sophisticated computational methods for preprocessing, segmentation, feature extraction, and spatial analysis.
PathML is an AI-powered platform that streamlines image analysis, making complex interpretation accessible for clinical and research settings.
arXiv Detail & Related papers (2024-11-01T18:02:41Z) - Automated Segmentation and Analysis of Cone Photoreceptors in Multimodal Adaptive Optics Imaging [3.7243418909643093]
We used confocal and non-confocal split detector images to analyze photoreceptors for improved accuracy.
We explored two U-Net-based segmentation models: StarDist for confocal and Cellpose for calculated modalities.
arXiv Detail & Related papers (2024-10-19T17:10:38Z) - DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z) - FMDNN: A Fuzzy-guided Multi-granular Deep Neural Network for Histopathological Image Classification [40.94024666952439]
We propose the Fuzzy-guided Multi-granularity Deep Neural Network (FMDNN)
Inspired by the multi-granular diagnostic approach of pathologists, we perform feature extraction on cell structures at coarse, medium, and fine granularity.
A fuzzy-guided cross-attention module guides universal fuzzy features toward multi-granular features.
arXiv Detail & Related papers (2024-07-22T00:46:15Z) - Cross-modulated Few-shot Image Generation for Colorectal Tissue
Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z) - Pixel-Level Explanation of Multiple Instance Learning Models in
Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100,000 single-cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - PCRLv2: A Unified Visual Information Preservation Framework for
Self-supervised Pre-training in Medical Image Analysis [56.63327669853693]
We propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics.
We also address the preservation of scale information, a powerful tool in aiding image understanding.
The proposed unified SSL framework surpasses its self-supervised counterparts on various tasks.
arXiv Detail & Related papers (2023-01-02T17:47:27Z) - Texture Characterization of Histopathologic Images Using Ecological
Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z) - Microscopic fine-grained instance classification through deep attention [7.50282814989294]
Fine-grained classification of microscopic image data with limited samples is an open problem in computer vision and biomedical imaging.
We propose a simple yet effective deep network that performs two tasks simultaneously in an end-to-end manner.
The result is a robust but lightweight end-to-end trainable deep network that yields state-of-the-art results.
arXiv Detail & Related papers (2020-10-06T15:29:58Z) - Selecting Regions of Interest in Large Multi-Scale Images for Cancer
Pathology [0.0]
High resolution scans of microscopy slides offer enough information for a cancer pathologist to come to a conclusion regarding cancer presence, subtype, and severity based on measurements of features within the slide image at multiple scales and resolutions.
We explore approaches based on Reinforcement Learning and Beam Search to learn to progressively zoom into the WSI to detect Regions of Interest (ROIs) in liver pathology slides containing one of two types of liver cancer, namely Hepatocellular Carcinoma (HCC) and Cholangiocarcinoma (CC).
These ROIs can then be presented directly to the pathologist to aid in measurement and diagnosis or be used
arXiv Detail & Related papers (2020-07-03T15:27:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.