Explaining Chest X-ray Pathologies in Natural Language
- URL: http://arxiv.org/abs/2207.04343v1
- Date: Sat, 9 Jul 2022 22:09:37 GMT
- Title: Explaining Chest X-ray Pathologies in Natural Language
- Authors: Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons,
Bartlomiej Papiez, Thomas Lukasiewicz
- Abstract summary: We introduce the task of generating natural language explanations (NLEs) to justify predictions made on medical images.
NLEs are human-friendly and comprehensive, and enable the training of intrinsically explainable models.
We introduce MIMIC-NLE, the first large-scale medical imaging dataset with NLEs.
- Score: 46.11255491490225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most deep learning algorithms lack explanations for their predictions, which
limits their deployment in clinical practice. Approaches to improve
explainability, especially in medical imaging, have often been shown to convey
limited information, be overly reassuring, or lack robustness. In this work, we
introduce the task of generating natural language explanations (NLEs) to
justify predictions made on medical images. NLEs are human-friendly and
comprehensive, and enable the training of intrinsically explainable models. To
this end, we introduce MIMIC-NLE, the first large-scale medical imaging
dataset with NLEs. It contains over 38,000 NLEs, which explain the presence of
various thoracic pathologies and chest X-ray findings. We propose a general
approach to solve the task and evaluate several architectures on this dataset,
including via clinician assessment.
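To make the task concrete, below is a minimal PyTorch sketch of an intrinsically explainable model of the kind the abstract describes: a shared image encoder feeds both a pathology classifier and an autoregressive decoder that generates the NLE. This is an illustration only; the module names, sizes, and toy convolutional backbone are assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

class NLEModel(nn.Module):
    """Toy sketch: jointly predicts pathology labels and generates an NLE."""

    def __init__(self, vocab_size=30522, num_labels=14, d_model=512):
        super().__init__()
        # Image encoder: stands in for any CNN/ViT backbone (illustrative).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, d_model),
        )
        # Diagnosis head: multi-label pathology logits.
        self.classifier = nn.Linear(d_model, num_labels)
        # Explanation head: decoder conditioned on the image features.
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, nle_tokens):
        feats = self.encoder(image).unsqueeze(1)          # (B, 1, d) image memory
        label_logits = self.classifier(feats.squeeze(1))  # pathology prediction
        tgt = self.embed(nle_tokens)                      # (B, T, d) NLE tokens
        # Causal mask so each token only attends to earlier tokens.
        t = tgt.size(1)
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        out = self.decoder(tgt, feats, tgt_mask=mask)
        return label_logits, self.lm_head(out)            # explanation logits

# Toy usage: two grayscale X-rays and their tokenized reference NLEs.
model = NLEModel()
images = torch.randn(2, 1, 224, 224)
tokens = torch.randint(0, 30522, (2, 16))
label_logits, nle_logits = model(images, tokens)
```
Training such a model would jointly minimize a classification loss on the label logits and a token-level cross-entropy on the NLE logits, so the explanation is produced by the same model that makes the prediction.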
Related papers
- Explaining Chest X-ray Pathology Models using Textual Concepts [9.67960010121851]
We propose Conceptual Counterfactual Explanations for Chest X-ray (CoCoX).
We leverage the joint embedding space of an existing vision-language model (VLM) to explain black-box classifier outcomes without the need for annotated datasets.
We demonstrate that the explanations generated by our method are semantically meaningful and faithful to underlying pathologies.
arXiv Detail & Related papers (2024-06-30T01:31:54Z)
- Self-supervised vision-language alignment of deep learning representations for bone X-rays analysis [53.809054774037214]
This paper proposes leveraging vision-language pretraining on bone X-rays paired with French reports.
It is the first study to integrate French reports to shape the embedding space devoted to bone X-ray representations.
arXiv Detail & Related papers (2024-05-14T19:53:20Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Evaluating Large Language Models for Radiology Natural Language Processing [68.98847776913381]
The rise of large language models (LLMs) has marked a pivotal shift in the field of natural language processing (NLP).
This study seeks to bridge this gap by critically evaluating thirty-two LLMs in interpreting radiology reports.
arXiv Detail & Related papers (2023-07-25T17:57:18Z)
- XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models [60.437091462613544]
We introduce XrayGPT, a novel conversational medical vision-language model.
It can analyze and answer open-ended questions about chest radiographs.
We generate 217k interactive and high-quality summaries from free-text radiology reports.
arXiv Detail & Related papers (2023-06-13T17:59:59Z)
- Knowledge-enhanced Visual-Language Pre-training on Chest Radiology Images [40.52487429030841]
We propose Knowledge-enhanced Auto Diagnosis (KAD) to guide vision-language pre-training using paired chest X-rays and radiology reports.
We evaluate KAD on four external X-ray datasets and demonstrate that its zero-shot performance is superior to that of fully-supervised models.
arXiv Detail & Related papers (2023-02-27T18:53:10Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Explainable Deep Learning Methods in Medical Image Classification: A Survey [0.0]
State-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data.
These models are rarely adopted in clinical practice, mainly due to their lack of interpretability.
The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models.
arXiv Detail & Related papers (2022-05-10T09:28:14Z)
- ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions [4.886872847478552]
ExAID (Explainable AI for Dermatology) is a novel framework for biomedical image analysis.
It provides multi-modal, concept-based explanations consisting of easy-to-understand textual descriptions.
It will be the basis for similar applications in other biomedical imaging fields.
arXiv Detail & Related papers (2022-01-04T17:11:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.