Decoding the visual attention of pathologists to reveal their level of expertise
- URL: http://arxiv.org/abs/2403.17255v1
- Date: Mon, 25 Mar 2024 23:03:51 GMT
- Title: Decoding the visual attention of pathologists to reveal their level of expertise
- Authors: Souradeep Chakraborty, Dana Perez, Paul Friedman, Natallia Sheuka, Constantin Friedman, Oksana Yaskiv, Rajarsi Gupta, Gregory J. Zelinsky, Joel H. Saltz, Dimitris Samaras
- Abstract summary: We present a method for classifying the expertise of a pathologist based on how they allocated their attention during a cancer reading.
Based solely on a pathologist's attention during a reading, our model predicted the expertise level of residents, general pathologists, and specialists with 75.3%, 56.1%, and 77.2% accuracy, respectively.
- Score: 20.552161727506235
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a method for classifying the expertise of a pathologist based on how they allocated their attention during a cancer reading. We engage this decoding task by developing a novel method for predicting the attention of pathologists as they read whole-slide images (WSIs) of prostate tissue and make cancer grade classifications. Our ground truth measure of a pathologist's attention is the x, y, and z (magnification) movement of their viewport as they navigated through WSIs during readings, and to date we have the attention behavior of 43 pathologists reading 123 WSIs. These data revealed that specialists have higher agreement in both their attention and cancer grades compared to general pathologists and residents, suggesting that sufficient information may exist in their attention behavior to classify their expertise level. To attempt this, we trained a transformer-based model to predict the visual attention heatmaps of resident, general, and specialist (GU) pathologists during Gleason grading. Based solely on a pathologist's attention during a reading, our model was able to predict their level of expertise with 75.3%, 56.1%, and 77.2% accuracy, respectively, better than chance and baseline models. Our model therefore enables a pathologist's expertise level to be easily and objectively evaluated, important for pathology training and competency assessment. Tools developed from our model could also be used to help pathology trainees learn how to read WSIs like an expert.
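To make the ground-truth measure concrete, here is a minimal sketch of how viewport navigation data could be rasterized into an attention heatmap. The log schema (x, y, magnification, dwell time), the magnification weighting, and the Gaussian smoothing are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_heatmap(viewport_log, slide_w, slide_h, grid=256, base_sigma=8.0):
    """Accumulate viewport dwell time into a 2D attention heatmap.

    viewport_log: iterable of (x, y, magnification, dwell_seconds) tuples,
    with (x, y) the viewport center in slide coordinates. This layout is
    an assumption, not the paper's actual log schema.
    """
    heat = np.zeros((grid, grid), dtype=np.float64)
    for x, y, mag, dwell in viewport_log:
        i = min(int(y / slide_h * grid), grid - 1)
        j = min(int(x / slide_w * grid), grid - 1)
        # Weight by dwell time; higher magnification implies a smaller,
        # more focused field of view, so weight its attention more heavily.
        heat[i, j] += dwell * mag
    # A single Gaussian blur stands in for per-viewport footprints.
    heat = gaussian_filter(heat, sigma=base_sigma)
    return heat / heat.max() if heat.max() > 0 else heat

# Toy usage: three viewport stops on a 100k x 80k pixel slide.
log = [(20000, 15000, 10, 2.5), (21000, 15500, 20, 4.0), (70000, 60000, 5, 1.0)]
hm = attention_heatmap(log, slide_w=100_000, slide_h=80_000)
```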
Related papers
- MIL vs. Aggregation: Evaluating Patient-Level Survival Prediction Strategies Using Graph-Based Learning [52.231128973251124]
We compare various strategies for predicting survival at the WSI and patient level.
The former treats each WSI as an independent sample, mimicking the strategy adopted in other works.
The latter comprises methods to either aggregate the predictions of the several WSIs or automatically identify the most relevant slide.
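As a toy illustration of the two patient-level strategies, the sketch below aggregates per-WSI risk scores either by averaging or by keeping a single slide; the "max as most relevant slide" heuristic is our assumption, not the paper's selection mechanism:

```python
import numpy as np

def aggregate_patient_risk(wsi_risks, strategy="mean"):
    """Combine per-WSI survival risk scores into one patient-level score.

    wsi_risks: 1D array of risk predictions, one per slide of the patient.
    'mean' averages the slides; 'max' keeps the most pessimistic slide,
    a crude proxy for 'most relevant'. Both are illustrative choices.
    """
    wsi_risks = np.asarray(wsi_risks, dtype=float)
    if strategy == "mean":
        return wsi_risks.mean()
    if strategy == "max":
        return wsi_risks.max()
    raise ValueError(f"unknown strategy: {strategy}")

print(aggregate_patient_risk([0.2, 0.7, 0.4], strategy="mean"))  # ~0.433
print(aggregate_patient_risk([0.2, 0.7, 0.4], strategy="max"))   # 0.7
```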
arXiv Detail & Related papers (2025-03-29T11:14:02Z)
- Doctor-in-the-Loop: An Explainable, Multi-View Deep Learning Framework for Predicting Pathological Response in Non-Small Cell Lung Cancer [0.6800826356148091]
Non-small cell lung cancer (NSCLC) remains a major global health challenge.
We propose Doctor-in-the-Loop, a novel framework that integrates expert-driven domain knowledge with explainable artificial intelligence techniques.
Our approach employs a gradual multi-view strategy, progressively refining the model's focus from broad contextual features to finer, lesion-specific details.
arXiv Detail & Related papers (2025-02-21T16:35:30Z)
- Pathologist-like explainable AI for interpretable Gleason grading in prostate cancer [3.7226270582597656]
We introduce a novel dataset of 1,015 tissue microarray core images, annotated by an international group of 54 pathologists.
The annotations provide detailed localized pattern descriptions for Gleason grading in line with international guidelines.
We develop an inherently explainable AI system based on a U-Net architecture that provides predictions leveraging pathologists' terminology.
arXiv Detail & Related papers (2024-10-19T06:58:26Z)
- Anatomy-guided Pathology Segmentation [56.883822515800205]
We develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features.
Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy.
In doing so, we are able to report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks with a margin of up to 3.3% as compared to strong baseline methods.
arXiv Detail & Related papers (2024-07-08T11:44:15Z)
- Exploring Explainable AI Techniques for Improved Interpretability in Lung and Colon Cancer Classification [0.0]
Lung and colon cancer are serious worldwide health challenges that require early and precise identification to reduce mortality risks.
Histopathology remains the gold standard, although it is time-consuming and prone to inter-observer error.
Recent advances in deep learning have generated interest in its application to medical imaging analysis.
arXiv Detail & Related papers (2024-05-07T18:49:34Z)
- Knowledge-enhanced Visual-Language Pretraining for Computational Pathology [68.6831438330526]
We consider the problem of visual representation learning for computational pathology, by exploiting large-scale image-text pairs gathered from public resources.
We curate a pathology knowledge tree that consists of 50,470 informative attributes for 4,718 diseases requiring pathology diagnosis from 32 human tissues.
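A miniature of what such a knowledge tree might look like, and how it could be flattened into text for image-text pretraining, is sketched below; the tissues, diseases, and attributes shown are invented placeholders, not entries from the actual resource:

```python
# Hypothetical miniature of a tissue -> disease -> attributes knowledge
# tree; the real resource spans 32 tissues, 4,718 diseases, and 50,470
# attributes. All names below are invented for illustration.
knowledge_tree = {
    "prostate": {
        "prostatic adenocarcinoma": [
            "small crowded glands",
            "prominent nucleoli",
            "loss of basal cell layer",
        ],
    },
    "colon": {
        "colorectal adenocarcinoma": [
            "irregular glandular architecture",
            "dirty necrosis",
        ],
    },
}

def attribute_texts(tree):
    """Flatten the tree into (text, disease) pairs usable as the text
    side of image-text pairs for visual-language pretraining."""
    for tissue, diseases in tree.items():
        for disease, attrs in diseases.items():
            for attr in attrs:
                yield f"{tissue}: {disease}: {attr}", disease

for text, disease in attribute_texts(knowledge_tree):
    print(text)
```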
arXiv Detail & Related papers (2024-04-15T17:11:25Z)
- Data and Knowledge Co-driving for Cancer Subtype Classification on Multi-Scale Histopathological Slides [4.22412600279685]
We propose a Data and Knowledge Co-driving (D&K) model to replicate the process of cancer subtype classification on a histological slide like a pathologist.
Specifically, in the data-driven module, the bagging mechanism in ensemble learning is leveraged to integrate the histological features from various bags extracted by the embedding representation unit.
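The bagging idea in the data-driven module can be illustrated with a generic sketch over synthetic bag-level embeddings; the base learner, feature dimensions, and data are our guesses for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for bag-level embeddings produced by an embedding unit:
# 200 bags of patches, each summarized as a 64-d feature vector,
# with binary subtype labels. Purely synthetic data.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Bagging ensemble over the bag embeddings, echoing the data-driven
# module's use of bootstrap aggregation (base learner is a guess).
clf = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=4),
                        n_estimators=25, random_state=0)
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```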
arXiv Detail & Related papers (2023-04-18T21:57:37Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
Combined with a novel attention-guiding loss, this yields an accuracy boost for the trained models even when only a few regions are annotated per class.
In the future, this approach may serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
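A minimal sketch of the selection step, assuming the confidence metric is predictive entropy over the attention-MIL slide probabilities (the paper's exact metric may differ):

```python
import numpy as np

def select_uncertain_wsis(probs, k=5):
    """Rank WSIs by predictive entropy and return the k most uncertain.

    probs: array of positive-class probabilities from an attention-MIL
    model, one per WSI. Entropy as the confidence metric is our
    assumption, not necessarily the metric used in the paper.
    """
    p = np.clip(np.asarray(probs, dtype=float), 1e-8, 1 - 1e-8)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return np.argsort(entropy)[::-1][:k]  # highest entropy first

wsi_probs = [0.02, 0.48, 0.91, 0.55, 0.70, 0.50]
print(select_uncertain_wsis(wsi_probs, k=3))  # indices nearest p = 0.5
```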
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Visual attention analysis of pathologists examining whole slide images of prostate cancer [29.609319636136426]
We study the attention of pathologists as they examine whole-slide images (WSIs) of prostate cancer tissue using a digital microscope.
We collected slide navigation data from 13 pathologists in 2 groups (5 genitourinary (GU) specialists and 8 general pathologists) and generated visual attention heatmaps and scanpaths.
To quantify the relationship between a pathologist's attention and evidence for cancer in the WSI, we obtained tumor annotations from a genitourinary specialist.
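As a rough illustration, a scanpath can be recovered from slide-navigation samples by grouping consecutive records at the same viewport position into fixations; the log schema and dwell threshold below are assumptions, not the study's processing pipeline:

```python
def scanpath(nav_log, min_dwell=0.5):
    """Reduce a slide-navigation log to an ordered scanpath of fixations.

    nav_log: list of (timestamp, x, y, magnification) samples; a viewport
    position held for at least min_dwell seconds counts as a fixation.
    """
    path, i = [], 0
    while i < len(nav_log):
        t0, x, y, mag = nav_log[i]
        j = i
        # Extend the group while the viewport stays at the same position.
        while j + 1 < len(nav_log) and nav_log[j + 1][1:] == (x, y, mag):
            j += 1
        dwell = nav_log[j][0] - t0
        if dwell >= min_dwell:
            path.append((x, y, mag, dwell))
        i = j + 1
    return path

log = [(0.0, 100, 200, 10), (0.4, 100, 200, 10), (0.9, 100, 200, 10),
       (1.0, 500, 600, 20), (1.1, 800, 900, 20), (1.8, 800, 900, 20)]
print(scanpath(log))  # two fixations; the brief stop at (500, 600) is dropped
```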
arXiv Detail & Related papers (2022-02-17T04:01:43Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until the disease is severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- Deeply supervised UNet for semantic segmentation to assist dermatopathological assessment of Basal Cell Carcinoma (BCC) [2.031570465477242]
We focus on detecting Basal Cell Carcinoma (BCC) through semantic segmentation using several models based on the UNet architecture.
We analyze two different encoders for the first part of the UNet network and two additional training strategies.
The best model achieves over 96% accuracy, sensitivity, and specificity on the test set.
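A sketch of comparing UNet variants that differ only in their encoder, using the segmentation_models_pytorch library as one convenient way to swap backbones; the two encoders named are placeholders, not necessarily those evaluated in the paper:

```python
import torch
import segmentation_models_pytorch as smp

# Two UNet variants differing only in encoder backbone; the specific
# encoders (ResNet-34 vs. EfficientNet-B0) are our guesses.
models = {
    name: smp.Unet(encoder_name=name, encoder_weights="imagenet",
                   in_channels=3, classes=1)  # 1 class: BCC vs. background
    for name in ("resnet34", "efficientnet-b0")
}

x = torch.randn(2, 3, 256, 256)  # a dummy batch of RGB patches
for name, model in models.items():
    with torch.no_grad():
        logits = model(x)  # (2, 1, 256, 256) per-pixel BCC logits
    print(name, tuple(logits.shape))
```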
arXiv Detail & Related papers (2021-03-05T15:39:55Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
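The core weak-supervision idea, training on slide-level labels while still obtaining per-patch scores usable for segmentation, can be sketched with a toy max-pooling MIL head (a generic illustration, not the proposed framework):

```python
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    """Toy MIL head: per-patch scores max-pooled to a slide-level logit.

    After training on slide labels only, the per-patch scores can be
    stitched back into a coarse tumor segmentation map. This architecture
    is a generic illustration, not the paper's exact model.
    """
    def __init__(self, feat_dim=128):
        super().__init__()
        self.patch_scorer = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats):            # (n_patches, feat_dim)
        patch_logits = self.patch_scorer(patch_feats).squeeze(-1)
        return patch_logits.max(), patch_logits

model = MaxPoolMIL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

feats = torch.randn(500, 128)                 # features of 500 patches
slide_label = torch.tensor(1.0)               # slide-level 'tumor' label
slide_logit, patch_logits = model(feats)
loss = loss_fn(slide_logit, slide_label)      # supervision at slide level only
loss.backward()
opt.step()
print(loss.item(), patch_logits.shape)
```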
arXiv Detail & Related papers (2020-04-10T13:12:47Z)