Visual attention analysis of pathologists examining whole slide images
of Prostate cancer
- URL: http://arxiv.org/abs/2202.08437v1
- Date: Thu, 17 Feb 2022 04:01:43 GMT
- Title: Visual attention analysis of pathologists examining whole slide images
of Prostate cancer
- Authors: Souradeep Chakraborty, Ke Ma, Rajarsi Gupta, Beatrice Knudsen, Gregory
J. Zelinsky, Joel H. Saltz, Dimitris Samaras
- Abstract summary: We study the attention of pathologists as they examine whole-slide images (WSIs) of prostate cancer tissue using a digital microscope.
We collected slide navigation data from 13 pathologists in 2 groups (5 genitourinary (GU) specialists and 8 general pathologists) and generated visual attention heatmaps and scanpaths.
To quantify the relationship between a pathologist's attention and evidence for cancer in the WSI, we obtained tumor annotations from a genitourinary specialist.
- Score: 29.609319636136426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the attention of pathologists as they examine whole-slide images
(WSIs) of prostate cancer tissue using a digital microscope. To the best of our
knowledge, our study is the first to report in detail how pathologists navigate
WSIs of prostate cancer as they accumulate information for their diagnoses. We
collected slide navigation data (i.e., viewport location, magnification level,
and time) from 13 pathologists in 2 groups (5 genitourinary (GU) specialists
and 8 general pathologists) and generated visual attention heatmaps and
scanpaths. Each pathologist examined five WSIs from the TCGA PRAD dataset,
which were selected by a GU pathology specialist. We examined and analyzed the
distributions of visual attention for each group of pathologists after each WSI
was examined. To quantify the relationship between a pathologist's attention
and evidence for cancer in the WSI, we obtained tumor annotations from a
genitourinary specialist. We used these annotations to compute the overlap
between the distribution of visual attention and annotated tumor region to
identify strong correlations. Motivated by this analysis, we trained a deep
learning model to predict visual attention on unseen WSIs. We find that the
attention heatmaps predicted by our model correlate quite well with the ground
truth attention heatmap and tumor annotations on a test set of 17 WSIs by using
various spatial and temporal evaluation metrics.
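The abstract's overlap computation between an attention heatmap and an annotated tumor region can be sketched with standard metrics. This is a minimal illustration, not the authors' implementation: the min-max normalization, the 0.5 binarization threshold, and the choice of Pearson correlation plus intersection-over-union are all assumptions for the sake of the example.

```python
import numpy as np

def attention_tumor_overlap(attention, tumor_mask, threshold=0.5):
    """Compare a visual-attention heatmap against a binary tumor mask.

    attention  : 2-D float array (e.g., accumulated viewport dwell time)
    tumor_mask : 2-D {0,1} array of the annotated tumor region, same shape
    threshold  : fraction of the peak attention used to binarize the heatmap
    """
    att = attention.astype(float)
    # Normalize the heatmap to [0, 1] so the threshold is scale-free
    att = (att - att.min()) / (att.max() - att.min() + 1e-12)

    # Pearson correlation between the continuous heatmap and the mask
    a, m = att.ravel(), tumor_mask.ravel().astype(float)
    pearson = np.corrcoef(a, m)[0, 1]

    # Intersection-over-union after binarizing the heatmap
    att_bin = att >= threshold
    mask_bin = tumor_mask.astype(bool)
    inter = np.logical_and(att_bin, mask_bin).sum()
    union = np.logical_or(att_bin, mask_bin).sum()
    iou = inter / union if union else 0.0
    return pearson, iou
```

A heatmap that concentrates exactly on the annotated region would score near 1.0 on both metrics; diffuse or off-target attention drives both toward 0.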
Related papers
- Shifts in Doctors' Eye Movements Between Real and AI-Generated Medical Images [5.969442345531191]
Eye-tracking analysis plays a vital role in medical imaging, providing key insights into how radiologists visually interpret and diagnose clinical cases.
We first analyze radiologists' attention and agreement by measuring the distribution of various eye-movement patterns, including saccades direction, amplitude, and their joint distribution.
We investigate whether and how doctors' gaze behavior shifts when viewing authentic (Real) versus deep-learning-generated (Fake) images.
arXiv Detail & Related papers (2025-04-21T10:13:59Z)
- MIL vs. Aggregation: Evaluating Patient-Level Survival Prediction Strategies Using Graph-Based Learning [52.231128973251124]
We compare various strategies for predicting survival at the WSI and patient level.
The former treats each WSI as an independent sample, mimicking the strategy adopted in other works.
The latter comprises methods to either aggregate the predictions of the several WSIs or automatically identify the most relevant slide.
arXiv Detail & Related papers (2025-03-29T11:14:02Z)
- Towards a Comprehensive Benchmark for Pathological Lymph Node Metastasis in Breast Cancer Sections [21.75452517154339]
We reprocessed 1,399 whole slide images (WSIs) and labels from the Camelyon-16 and Camelyon-17 datasets.
Based on the sizes of re-annotated tumor regions, we upgraded the binary cancer screening task to a four-class task.
arXiv Detail & Related papers (2024-11-16T09:19:24Z)
- Anatomy-guided Pathology Segmentation [56.883822515800205]
We develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features.
Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy.
In doing so, we are able to report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks with a margin of up to 3.3% as compared to strong baseline methods.
arXiv Detail & Related papers (2024-07-08T11:44:15Z)
- Decoding the visual attention of pathologists to reveal their level of expertise [20.552161727506235]
We present a method for classifying the expertise of a pathologist based on how they allocated their attention during a cancer reading.
Based solely on a pathologist's attention during a reading, our model was able to predict their level of expertise with 75.3%, 56.1%, and 77.2% accuracy.
arXiv Detail & Related papers (2024-03-25T23:03:51Z)
- Beyond attention: deriving biologically interpretable insights from weakly-supervised multiple-instance learning models [2.639541396835675]
We introduce prediction-attention-weighted (PAW) maps by combining tile-level attention and prediction scores produced by a refined encoder.
We also introduce a biological feature instantiation technique by integrating PAW maps with nuclei segmentation masks.
Our approach reveals that regions that are predictive of adverse prognosis do not tend to co-locate with the tumour regions.
arXiv Detail & Related papers (2023-09-07T09:44:35Z)
- A Pathologist-Informed Workflow for Classification of Prostate Glands in Histopathology [62.997667081978825]
Pathologists diagnose and grade prostate cancer by examining tissue from needle biopsies on glass slides.
Cancer's severity and risk of metastasis are determined by the Gleason grade, a score based on the organization and morphology of prostate cancer glands.
This paper proposes an automated workflow that follows pathologists' modus operandi, isolating and classifying multi-scale patches of individual glands.
arXiv Detail & Related papers (2022-09-27T14:08:19Z)
- Contrastive learning-based computational histopathology predict differential expression of cancer driver genes [13.167222116204226]
HistCode is a self-supervised contrastive learning framework to infer differential gene expressions from whole slide images.
Our experiments showed that our method outperformed other state-of-the-art models in tumor diagnosis tasks.
arXiv Detail & Related papers (2022-04-25T23:21:33Z)
- MHAttnSurv: Multi-Head Attention for Survival Prediction Using Whole-Slide Pathology Images [4.148207298604488]
We developed a multi-head attention approach to focus on various parts of a tumor slide, for more comprehensive information extraction from WSIs.
Our model achieved an average c-index of 0.640, outperforming two existing state-of-the-art approaches for WSI-based survival prediction.
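The c-index cited above is Harrell's concordance index, a standard metric whose definition (not the paper's code) can be computed directly: among all comparable patient pairs, it is the fraction whose predicted risks are ordered consistently with their observed survival times.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's c-index for right-censored survival data.

    times       : observed survival or censoring times
    events      : 1 if the event (e.g., death) was observed, 0 if censored
    risk_scores : higher score means higher predicted risk (shorter survival)
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if subject i's event was observed
            # and occurred before subject j's time
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties count as half-concordant
    return concordant / permissible if permissible else 0.0
```

Random predictions score about 0.5, so the 0.640 reported above reflects a meaningful, though far from perfect, ranking of patient risk.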
arXiv Detail & Related papers (2021-10-22T02:18:27Z)
- Machine Learning Methods for Histopathological Image Analysis: A Review [62.14548392474976]
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis.
One of the ways of accelerating such an analysis is to use computer-aided diagnosis (CAD) systems.
arXiv Detail & Related papers (2021-02-07T19:12:32Z)
- Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net [60.145440290349796]
The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsy by pathologists.
Computer-aided diagnosis systems make it possible to delineate and classify the cancerous patterns in the tissue.
The methodological core of this work is a U-Net convolutional neural network for image segmentation, modified with residual blocks, that is able to segment cancerous tissue.
arXiv Detail & Related papers (2020-05-22T19:49:10Z)
- Spatio-spectral deep learning methods for in-vivo hyperspectral laryngeal cancer detection [49.32653090178743]
Early detection of head and neck tumors is crucial for patient survival.
Hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors.
We present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI.
arXiv Detail & Related papers (2020-04-21T17:07:18Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.