Interpretable Diabetic Retinopathy Diagnosis based on Biomarker
Activation Map
- URL: http://arxiv.org/abs/2212.06299v3
- Date: Mon, 26 Jun 2023 23:12:45 GMT
- Authors: Pengxiao Zang, Tristan T. Hormel, Jie Wang, Yukun Guo, Steven T.
Bailey, Christina J. Flaxel, David Huang, Thomas S. Hwang, and Yali Jia
- Abstract summary: We introduce a novel biomarker activation map (BAM) framework based on generative adversarial learning.
A data set of 456 macular scans was graded as non-referable or referable DR based on current clinical standards.
The generated BAMs highlighted known pathologic features including nonperfusion area and retinal fluid.
- Score: 2.6170980960630037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning classifiers provide the most accurate means of automatically
diagnosing diabetic retinopathy (DR) based on optical coherence tomography
(OCT) and its angiography (OCTA). The power of these models is attributable in
part to the inclusion of hidden layers that provide the complexity required to
achieve a desired task. However, hidden layers also render algorithm outputs
difficult to interpret. Here we introduce a novel biomarker activation map
(BAM) framework based on generative adversarial learning that allows clinicians
to verify and understand a classifier's decision-making. A data set of 456
macular scans was graded as non-referable or referable DR based on current
clinical standards. A DR classifier that was used to evaluate our BAM was first
trained based on this data set. The BAM generation framework was designed by
combining two U-shaped generators to provide meaningful interpretability to this
classifier. The main generator was trained to take referable scans as input and
produce an output that would be classified by the classifier as non-referable.
The BAM is then constructed as the difference image between the output and
input of the main generator. To ensure that the BAM only highlights
classifier-utilized biomarkers, an assistant generator was trained to do the
opposite, producing scans that would be classified as referable by the
classifier from non-referable scans. The generated BAMs highlighted known
pathologic features including nonperfusion area and retinal fluid. A fully
interpretable classifier based on these highlights could help clinicians better
utilize and verify automated DR diagnosis.
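The BAM construction described above can be sketched in a few lines: the trained main generator maps a referable scan to a counterfactual non-referable version, and the map is simply the difference image between the two. The snippet below is a minimal illustration of that idea, not the authors' implementation; `toy_generator` is a hypothetical stand-in for the trained U-shaped main generator.

```python
import numpy as np

def biomarker_activation_map(scan, main_generator):
    """Sketch of the BAM idea: the map is the difference image between
    the main generator's 'non-referable' counterfactual and its input."""
    counterfactual = main_generator(scan)
    # The absolute difference highlights regions the generator had to
    # alter to flip the classification, i.e. classifier-utilized biomarkers.
    return np.abs(scan - counterfactual)

def toy_generator(scan, lesion_threshold=0.8):
    """Hypothetical stand-in for the trained main generator: it replaces
    bright 'lesion' pixels with the background intensity."""
    out = scan.copy()
    out[out > lesion_threshold] = 0.5
    return out

# Simulated 4x4 scan with uniform background and a bright lesion patch.
scan = np.full((4, 4), 0.5)
scan[1:3, 1:3] = 0.9
bam = biomarker_activation_map(scan, toy_generator)
# The BAM is nonzero only over the lesion patch.
```

In the paper's framework the assistant generator plays the complementary role (non-referable to referable) during training so the difference image does not drift toward arbitrary changes; that training constraint is omitted from this inference-time sketch.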
Related papers
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images [3.5418498524791766]
This research develops a novel counterfactual inpainting approach (COIN).
COIN flips the predicted classification label from abnormal to normal by using a generative model.
The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia.
arXiv Detail & Related papers (2024-04-19T12:09:49Z) - Weakly Supervised Lesion Detection and Diagnosis for Breast Cancers with
Partially Annotated Ultrasound Images [19.374895481597466]
Two-Stage Detection and Diagnosis Network (TSDDNet) is proposed based on weakly supervised learning to enhance diagnostic accuracy.
The proposed TSDDNet is evaluated on a B-mode ultrasound dataset, and the experimental results show that it achieves the best performance on both lesion detection and diagnosis tasks.
arXiv Detail & Related papers (2023-06-12T09:26:54Z) - Prostate Lesion Detection and Salient Feature Assessment Using
Zone-Based Classifiers [0.0]
Multi-parametric magnetic resonance imaging (mpMRI) has a growing role in detecting prostate cancer lesions.
It is pertinent that medical professionals who interpret these scans reduce the risk of human error by using computer-aided detection systems.
Here we investigate the best machine learning classifier for each prostate zone.
arXiv Detail & Related papers (2022-08-24T13:08:56Z) - Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z) - Breaking with Fixed Set Pathology Recognition through Report-Guided
Contrastive Training [23.506879497561712]
We employ a contrastive global-local dual-encoder architecture to learn concepts directly from unstructured medical reports.
We evaluate our approach on the large-scale chest X-Ray datasets MIMIC-CXR, CheXpert, and ChestX-Ray14 for disease classification.
arXiv Detail & Related papers (2022-05-14T21:44:05Z) - Multi-class versus One-class classifier in spontaneous speech analysis
oriented to Alzheimer Disease diagnosis [58.720142291102135]
The aim of our project is to contribute to earlier diagnosis of AD and better estimates of its severity by using automatic analysis performed through new biomarkers extracted from speech signal.
The use of information about outlier and Fractal Dimension features improves the system performance.
arXiv Detail & Related papers (2022-03-21T09:57:20Z) - Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based
Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, Tanimoto coefficient, and area under the curve (AUC) of the receiver operator characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z) - BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer
Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z) - SCRIB: Set-classifier with Class-specific Risk Bounds for Blackbox
Models [48.374678491735665]
We introduce Set-classifier with Class-specific RIsk Bounds (SCRIB) to tackle this problem.
SCRIB constructs a set-classifier that controls the class-specific prediction risks with a theoretical guarantee.
We validated SCRIB on several medical applications, including sleep staging on electroencephalogram (EEG) data, X-ray COVID image classification, and atrial fibrillation detection based on electrocardiogram (ECG) data.
arXiv Detail & Related papers (2021-03-05T21:06:12Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.