ContrastDiagnosis: Enhancing Interpretability in Lung Nodule Diagnosis
Using Contrastive Learning
- URL: http://arxiv.org/abs/2403.05280v1
- Date: Fri, 8 Mar 2024 13:00:52 GMT
- Title: ContrastDiagnosis: Enhancing Interpretability in Lung Nodule Diagnosis
Using Contrastive Learning
- Authors: Chenglong Wang, Yinqiao Yi, Yida Wang, Chengxiu Zhang, Yun Liu,
Kensaku Mori, Mei Yuan, Guang Yang
- Abstract summary: Clinicians' distrust of black box models has hindered the clinical deployment of AI products.
We propose ContrastDiagnosis, a straightforward yet effective interpretable diagnosis framework.
High diagnostic accuracy was achieved, with an AUC of 0.977, while maintaining high transparency and explainability.
- Score: 23.541034347602935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the ongoing development of deep learning, an increasing number of AI
models have surpassed the performance levels of human clinical practitioners.
However, the prevalence of AI diagnostic products in actual clinical practice
remains significantly lower than desired. One crucial reason for this gap is
the so-called 'black box' nature of AI models. Clinicians' distrust of black
box models has directly hindered the clinical deployment of AI products. To
address this challenge, we propose ContrastDiagnosis, a straightforward yet
effective interpretable diagnosis framework. This framework is designed to
introduce inherent transparency and provide extensive post-hoc explainability
for deep learning models, making them more suitable for clinical medical
diagnosis. ContrastDiagnosis incorporates a contrastive learning mechanism to
provide a case-based reasoning diagnostic rationale, enhancing the model's
transparency, while also offering post-hoc interpretability by highlighting
similar areas. High diagnostic accuracy was achieved, with an AUC of 0.977,
while maintaining high transparency and explainability.
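The case-based mechanism can be made concrete in a few lines. Below is a minimal sketch, assuming a trained contrastive encoder and a bank of labeled reference nodules; the encoder, the case bank, and the k-nearest-neighbour vote are illustrative assumptions, not the authors' released implementation:

```python
# Minimal sketch of case-based diagnosis with contrastive embeddings.
# `encoder`, the case bank, and the k-NN vote are hypothetical stand-ins,
# not the ContrastDiagnosis code.
import torch
import torch.nn.functional as F

def diagnose_by_similar_cases(encoder, query_scan, case_scans, case_labels, k=5):
    """Embed a query nodule and vote over its k most similar labeled cases."""
    with torch.no_grad():
        q = F.normalize(encoder(query_scan.unsqueeze(0)), dim=-1)  # (1, d)
        bank = F.normalize(encoder(case_scans), dim=-1)            # (N, d)
    sims = (q @ bank.T).squeeze(0)                                 # cosine similarities
    topk = sims.topk(k)
    prob_malignant = case_labels[topk.indices].float().mean().item()
    # The retrieved cases themselves serve as the diagnostic rationale.
    return prob_malignant, topk.indices, topk.values
```

The retrieved neighbours double as the explanation: a clinician can inspect the similar cases (and, in the paper's post-hoc step, the highlighted similar regions) instead of a bare probability.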
Related papers
- MedGrad E-CLIP: Enhancing Trust and Transparency in AI-Driven Skin Lesion Diagnosis [2.9540164442363976]
This study leverages the CLIP (Contrastive Language-Image Pretraining) model, trained on different skin lesion datasets, to capture meaningful relationships between visual features and diagnostic criteria terms.
We propose a method called MedGrad E-CLIP, which builds on gradient-based E-CLIP by incorporating a weighted entropy mechanism designed for complex medical imaging like skin lesions.
By visually explaining how different features in an image relate to diagnostic criteria, this approach demonstrates the potential of advanced vision-language models in medical image analysis.
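A hedged sketch of the underlying idea follows: plain gradient attribution on an image-text similarity score. The MedGrad weighted-entropy mechanism itself is not reproduced here, and `image_encoder`/`text_encoder` are assumed stand-ins for CLIP-style encoders:

```python
# Sketch of gradient-based attribution for an image-text similarity score,
# in the spirit of E-CLIP. Not the paper's method; encoders are assumed.
import torch
import torch.nn.functional as F

def similarity_saliency(image_encoder, text_encoder, image, text_tokens):
    image = image.clone().requires_grad_(True)            # (C, H, W)
    img_emb = F.normalize(image_encoder(image.unsqueeze(0)), dim=-1)
    txt_emb = F.normalize(text_encoder(text_tokens), dim=-1)
    score = (img_emb * txt_emb).sum()                     # cosine similarity
    score.backward()
    saliency = image.grad.abs().sum(dim=0)                # aggregate channels
    return saliency / saliency.max()                      # normalized heatmap
```

Running this once per diagnostic-criterion phrase yields one heatmap per criterion, which is the kind of per-feature visual explanation the entry describes.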
arXiv Detail & Related papers (2025-01-12T17:50:47Z) - Efficient and Comprehensive Feature Extraction in Large Vision-Language Model for Clinical Pathology Analysis [34.199766079609795]
Pathological diagnosis is vital for determining disease characteristics, guiding treatment, and assessing prognosis.
Traditional pure-vision models struggle with redundant feature extraction.
Existing large vision-language models (LVLMs) are limited by input resolution constraints, hindering their efficiency and accuracy.
We propose two strategies: mixed task-guided feature enhancement and prompt-guided detail feature completion.
arXiv Detail & Related papers (2024-12-12T18:07:23Z) - Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how the additional frequency-domain features affect model predictions.
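The dual-domain idea can be illustrated compactly: feed the network both the MRI image and its k-space magnitude obtained with a 2-D FFT. This is an illustrative sketch, not the paper's architecture; layer sizes are arbitrary:

```python
# Illustrative two-branch CNN over image and k-space domains
# (not the paper's architecture).
import torch
import torch.nn as nn

class DualDomainCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(8), nn.Flatten())
        self.image_branch, self.kspace_branch = branch(), branch()
        self.head = nn.Linear(2 * 16 * 8 * 8, n_classes)

    def forward(self, x):                                  # x: (B, 1, H, W)
        # Log-magnitude k-space representation of the same scan.
        kspace = torch.fft.fftshift(torch.fft.fft2(x)).abs().log1p()
        feats = torch.cat([self.image_branch(x),
                           self.kspace_branch(kspace)], dim=1)
        return self.head(feats)
```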
arXiv Detail & Related papers (2024-09-20T15:43:26Z) - SkinGEN: an Explainable Dermatology Diagnosis-to-Generation Framework with Interactive Vision-Language Models [54.32264601568605]
SkinGEN is a diagnosis-to-generation framework that generates reference demonstrations from diagnosis results provided by VLM.
We conduct a user study with 32 participants evaluating both the system performance and explainability.
Results demonstrate that SkinGEN significantly improves users' comprehension of VLM predictions and fosters increased trust in the diagnostic process.
arXiv Detail & Related papers (2024-04-23T05:36:33Z) - The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because these models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial; it is a critical step towards responsible AI integration in healthcare.
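One way to surface the inconsistencies the paper analyzes is to compute two saliency methods on the same input and measure how much they disagree. The sketch below uses generic stand-ins (vanilla gradients versus input-times-gradient), not the paper's exact setup:

```python
# Sketch: quantify disagreement between two saliency methods on one input.
# Model and methods are generic stand-ins for illustration.
import torch
import torch.nn.functional as F

def saliency(model, x, y, multiply_by_input=False):
    x = x.clone().requires_grad_(True)
    model(x.unsqueeze(0))[0, y].backward()                # grad of class logit
    g = x.grad
    return (g * x).detach() if multiply_by_input else g.detach()

def agreement(model, x, y):
    a = saliency(model, x, y).flatten()
    b = saliency(model, x, y, multiply_by_input=True).flatten()
    return F.cosine_similarity(a, b, dim=0).item()        # 1.0 = full agreement
```

Low agreement between methods that each claim to explain the same prediction is exactly the kind of inconsistency that undermines clinical trust.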
arXiv Detail & Related papers (2024-03-23T02:15:23Z) - Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into a data aspect and a model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
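The data/model split has a standard form under Monte Carlo sampling (e.g. MC dropout): total predictive entropy decomposes into expected entropy (data) plus mutual information (model). This sketch shows that generic decomposition, not the paper's specific estimator:

```python
# Hedged sketch: split predictive uncertainty into data and model parts
# from S stochastic forward passes. Generic, not the paper's estimator.
import torch

def decompose_uncertainty(prob_samples, eps=1e-12):
    """prob_samples: (S, C) class probabilities from S stochastic passes."""
    mean_p = prob_samples.mean(dim=0)
    total = -(mean_p * (mean_p + eps).log()).sum()                     # predictive entropy
    data = -(prob_samples * (prob_samples + eps).log()).sum(1).mean()  # expected entropy
    model = total - data                                               # mutual information
    return total.item(), data.item(), model.item()
```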
arXiv Detail & Related papers (2024-03-09T13:48:20Z) - Deciphering knee osteoarthritis diagnostic features with explainable
artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z) - TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic
Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed to speed up patient recruitment by automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z) - A Transformer-based representation-learning model with unified
processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
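The unified-processing idea amounts to projecting every modality into one shared token sequence for a single Transformer encoder. A minimal sketch, with illustrative dimensions and positional/modality embeddings omitted for brevity (not the paper's configuration):

```python
# Sketch: image patches and clinical-text tokens in one shared sequence.
# Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class UnifiedMultimodalClassifier(nn.Module):
    def __init__(self, d=256, vocab=30000, patch_dim=16 * 16, n_classes=2):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, d)      # image patches -> tokens
        self.text_emb = nn.Embedding(vocab, d)         # text ids -> tokens
        self.cls = nn.Parameter(torch.zeros(1, 1, d))
        layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d, n_classes)

    def forward(self, patches, text_ids):              # (B,P,patch_dim), (B,T)
        toks = torch.cat([self.cls.expand(patches.size(0), -1, -1),
                          self.patch_proj(patches),
                          self.text_emb(text_ids)], dim=1)
        return self.head(self.encoder(toks)[:, 0])     # classify from [CLS]
```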
arXiv Detail & Related papers (2023-06-01T16:23:47Z) - BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer
Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
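The multitask pattern described here is a shared backbone with one malignancy head plus heads for clinician-facing morphological descriptors. A sketch under that assumption (descriptor names and sizes are illustrative, not the paper's exact BI-RADS vocabulary):

```python
# Sketch of the BI-RADS-style multitask pattern: shared features,
# a malignancy head, and morphological-descriptor heads. Illustrative only.
import torch.nn as nn

class MultitaskBreastNet(nn.Module):
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                        # any feature extractor
        self.malignancy = nn.Linear(feat_dim, 2)        # benign vs malignant
        self.descriptors = nn.ModuleDict({
            "shape": nn.Linear(feat_dim, 3),            # e.g. oval/round/irregular
            "margin": nn.Linear(feat_dim, 5),
            "orientation": nn.Linear(feat_dim, 2),
        })

    def forward(self, x):
        f = self.backbone(x)
        return (self.malignancy(f),
                {k: head(f) for k, head in self.descriptors.items()})
```

The descriptor outputs are what make the prediction reportable in the language clinicians already use.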
arXiv Detail & Related papers (2021-10-05T19:14:46Z) - Inheritance-guided Hierarchical Assignment for Clinical Automatic
Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
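The inheritance-guided intuition can be sketched simply: a child diagnosis code is only scored highly if its ancestors in the code tree are too. The paper's actual inheritance mechanism and co-occurrence graph propagation are not reproduced; `parent` below is a hypothetical code-tree mapping:

```python
# Hedged sketch of hierarchy-aware code scoring; illustrative only.
import torch

def hierarchical_scores(flat_logits, parent):
    """flat_logits: (C,) per-code logits; parent[c] = parent index or -1.
    Assumes parents precede children (topological order)."""
    probs = torch.sigmoid(flat_logits)
    out = probs.clone()
    for c, p in enumerate(parent):
        if p >= 0:
            out[c] = probs[c] * out[p]   # inherit confidence from ancestors
    return out
```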