ContrastDiagnosis: Enhancing Interpretability in Lung Nodule Diagnosis
Using Contrastive Learning
- URL: http://arxiv.org/abs/2403.05280v1
- Date: Fri, 8 Mar 2024 13:00:52 GMT
- Authors: Chenglong Wang, Yinqiao Yi, Yida Wang, Chengxiu Zhang, Yun Liu,
Kensaku Mori, Mei Yuan, Guang Yang
- Abstract summary: Clinicians' distrust of black box models has hindered the clinical deployment of AI products.
We propose ContrastDiagnosis, a straightforward yet effective interpretable diagnosis framework.
High diagnostic accuracy was achieved with an AUC of 0.977 while maintaining high transparency and explainability.
- Score: 23.541034347602935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the ongoing development of deep learning, an increasing number of AI
models have surpassed the performance levels of human clinical practitioners.
However, the prevalence of AI diagnostic products in actual clinical practice
remains significantly lower than desired. One crucial reason for this gap is
the so-called `black box' nature of AI models. Clinicians' distrust of black
box models has directly hindered the clinical deployment of AI products. To
address this challenge, we propose ContrastDiagnosis, a straightforward yet
effective interpretable diagnosis framework. This framework is designed to
introduce inherent transparency and provide extensive post-hoc explainability
for deep learning models, making them more suitable for clinical medical
diagnosis. ContrastDiagnosis incorporates a contrastive learning mechanism to
provide a case-based reasoning diagnostic rationale, enhancing the model's
transparency, and also offers post-hoc interpretability by highlighting similar
areas. High diagnostic accuracy was achieved, with an AUC of 0.977, while
maintaining high transparency and explainability.
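The case-based reasoning described in the abstract can be pictured as retrieving the most similar training cases in a learned embedding space and using them as the diagnostic rationale. The sketch below is a hypothetical illustration of that retrieval step, not the authors' implementation; the function and variable names are assumptions, and the contrastive encoder that produces the embeddings is assumed to exist upstream.

```python
# Hypothetical sketch: case-based diagnosis via similarity search over
# contrastively learned embeddings. The embeddings are assumed to come from
# a trained encoder (not shown); names here are illustrative only.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def diagnose_by_similarity(query_emb, support_embs, support_labels, k=3):
    """Retrieve the k most similar support cases and vote on their labels.

    The retrieved cases double as the human-readable rationale: a clinician
    can inspect which known nodules the query was judged similar to.
    """
    sims = [cosine_similarity(query_emb, e) for e in support_embs]
    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
    votes = [support_labels[i] for i in top]
    prediction = max(set(votes), key=votes.count)  # majority vote
    return prediction, [(i, sims[i]) for i in top]
```

A contrastive training objective pulls embeddings of same-class nodules together and pushes different-class nodules apart, which is what makes this nearest-neighbor vote meaningful at inference time.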
Related papers
- Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
arXiv Detail & Related papers (2024-09-20T15:43:26Z) - Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z) - The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z) - Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z) - Enabling Collaborative Clinical Diagnosis of Infectious Keratitis by
Integrating Expert Knowledge and Interpretable Data-driven Intelligence [28.144658552047975]
This study investigates the performance, interpretability, and clinical utility of knowledge-guided diagnosis model (KGDM) in the diagnosis of infectious keratitis (IK)
The diagnostic odds ratios (DOR) of the interpreted AI-based biomarkers are effective, ranging from 3.011 to 35.233.
The participants with collaboration achieved a performance exceeding that of both humans and AI.
arXiv Detail & Related papers (2024-01-14T02:10:54Z) - Deciphering knee osteoarthritis diagnostic features with explainable
artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z) - TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic
Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed for speeding up patient recruitment via automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z) - A Transformer-based representation-learning model with unified
processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
arXiv Detail & Related papers (2023-06-01T16:23:47Z) - BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer
Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Inheritance-guided Hierarchical Assignment for Clinical Automatic
Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.