Poisson Ordinal Network for Gleason Group Estimation Using Bi-Parametric MRI
- URL: http://arxiv.org/abs/2407.05796v1
- Date: Mon, 8 Jul 2024 09:56:30 GMT
- Title: Poisson Ordinal Network for Gleason Group Estimation Using Bi-Parametric MRI
- Authors: Yinsong Xu, Yipei Wang, Ziyi Shen, Iani J. M. B. Gayo, Natasha Thorley, Shonit Punwani, Aidong Men, Dean Barratt, Qingchao Chen, Yipeng Hu
- Abstract summary: Gleason groups serve as the primary histological grading system for prostate cancer.
In clinical practice, pathologists determine the Gleason groups based on specimens obtained from ultrasound-guided biopsies.
We investigate the feasibility of directly estimating the Gleason groups from MRI scans to reduce otherwise required biopsies.
- Score: 15.754944195515504
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Gleason groups serve as the primary histological grading system for prostate cancer, providing crucial insights into the cancer's potential for growth and metastasis. In clinical practice, pathologists determine the Gleason groups based on specimens obtained from ultrasound-guided biopsies. In this study, we investigate the feasibility of directly estimating the Gleason groups from MRI scans to reduce otherwise required biopsies. We identify two characteristics of this task: ordinality and the resulting dependent yet unknown variances between Gleason groups. In addition to the inter-/intra-observer variability in a multi-step Gleason scoring process based on the interpretation of Gleason patterns, our MR-based prediction is also subject to specimen sampling variance and, to a lesser degree, varying MR imaging protocols. To address this challenge, we propose a novel Poisson ordinal network (PON). The PON models the prediction using a Poisson distribution and leverages Poisson encoding and a Poisson focal loss to capture a learnable dependency between ordinal classes (here, Gleason groups), rather than relying solely on the numerical ground truth (e.g. Gleason Groups 1-5 or Gleason Scores 6-10). To improve this modelling efficacy, the PON also employs contrastive learning with a memory bank to regularise intra-class variance, decoupling the memory requirement of contrastive learning from the batch size. Experimental results based on images labelled by saturation biopsies from 265 prior-biopsy-blind patients, across two tasks, demonstrate the superiority and effectiveness of our proposed method.
Related papers
- Pathologist-like explainable AI for interpretable Gleason grading in prostate cancer [3.7226270582597656]
We introduce a novel dataset of 1,015 tissue microarray core images, annotated by an international group of 54 pathologists.
The annotations provide detailed localized pattern descriptions for Gleason grading in line with international guidelines.
We develop an inherently explainable AI system based on a U-Net architecture that provides predictions leveraging pathologists' terminology.
arXiv Detail & Related papers (2024-10-19T06:58:26Z) - Learning to diagnose cirrhosis from radiological and histological labels with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z) - A Pathologist-Informed Workflow for Classification of Prostate Glands in Histopathology [62.997667081978825]
Pathologists diagnose and grade prostate cancer by examining tissue from needle biopsies on glass slides.
Cancer's severity and risk of metastasis are determined by the Gleason grade, a score based on the organization and morphology of prostate cancer glands.
This paper proposes an automated workflow that follows pathologists' modus operandi, isolating and classifying multi-scale patches of individual glands.
arXiv Detail & Related papers (2022-09-27T14:08:19Z) - Going Deeper through the Gleason Scoring Scale: An Automatic end-to-end System for Histology Prostate Grading and Cribriform Pattern Detection [7.929433631399375]
The objective of this work is to develop a deep-learning-based system able to support pathologists in the daily analysis of prostate biopsies.
The methodological core of this work is a patch-wise predictive model based on convolutional neural networks able to determine the presence of cancerous patterns.
arXiv Detail & Related papers (2021-05-21T17:51:53Z) - WeGleNet: A Weakly-Supervised Convolutional Neural Network for the Semantic Segmentation of Gleason Grades in Prostate Histology Images [1.52819437883813]
We propose a deep-learning-based system able to detect local cancerous patterns in the prostate tissue using only the global-level Gleason score during training.
We obtained a Cohen's quadratic kappa (k) of 0.67 for the pixel-level prediction of cancerous patterns in the validation cohort.
We compared the model performance for semantic segmentation of Gleason grades with supervised state-of-the-art architectures in the test cohort.
arXiv Detail & Related papers (2021-05-21T16:27:16Z) - Self-learning for weakly supervised Gleason grading of local patterns [6.97280833203187]
We propose a weakly-supervised deep-learning model, based on self-learning CNNs, to accurately perform both patch-level pattern grading and biopsy-level scoring.
We empirically demonstrate that our approach outperforms its supervised counterpart on patch-level Gleason grading by a large margin.
arXiv Detail & Related papers (2021-05-21T15:39:50Z) - Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
We first identify all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimate their likelihood of malignancy, and, through aggregation, generate an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - DONet: Dual Objective Networks for Skin Lesion Segmentation [77.9806410198298]
We propose a simple yet effective framework, named Dual Objective Networks (DONet), to improve the skin lesion segmentation.
Our DONet adopts two symmetric decoders to produce different predictions for approaching different objectives.
To address the challenge of the large variety of lesion scales and shapes in dermoscopic images, we additionally propose a recurrent context encoding module (RCEM).
arXiv Detail & Related papers (2020-08-19T06:02:46Z) - Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net [60.145440290349796]
The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsy by pathologists.
Computer-aided-diagnosis systems make it possible to delineate and classify the cancerous patterns in the tissue.
The methodological core of this work is a U-Net convolutional neural network for image segmentation modified with residual blocks able to segment cancerous tissue.
arXiv Detail & Related papers (2020-05-22T19:49:10Z) - Gleason Score Prediction using Deep Learning in Tissue Microarray Image [15.959329921417618]
We used Gleason 2019 Challenge dataset to build a convolutional neural network (CNN) model to segment tissue microarray (TMA) images.
We used a pre-trained model of prostate segmentation to increase the accuracy of the Gleason grade segmentation.
The model achieved a mean Dice of 75.6% on the test cohort and ranked 4th in the Gleason 2019 Challenge with a score of 0.778, a combination of Cohen's kappa and the F1-score.
arXiv Detail & Related papers (2020-05-11T07:00:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.