Generating Explanations in Medical Question-Answering by Expectation
Maximization Inference over Evidence
- URL: http://arxiv.org/abs/2310.01299v1
- Date: Mon, 2 Oct 2023 16:00:37 GMT
- Title: Generating Explanations in Medical Question-Answering by Expectation
Maximization Inference over Evidence
- Authors: Wei Sun, Mingxiao Li, Damien Sileo, Jesse Davis, and Marie-Francine
Moens
- Abstract summary: We propose a novel approach for generating natural language explanations for answers predicted by medical QA systems.
Our system extracts knowledge from medical textbooks to improve the quality of the generated explanations.
- Score: 33.018873142559286
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical Question Answering (medical QA) systems play an essential role in
assisting healthcare workers in finding answers to their questions. However,
merely providing answers is not sufficient, because users may want
explanations, that is, more analytic statements in natural language that
describe the elements and context supporting the answer. We therefore propose
a novel approach for generating natural language explanations for answers
predicted by medical QA systems. Because high-quality medical explanations
require additional medical knowledge, our system extracts knowledge from
medical textbooks to enhance the quality of explanations during the explanation
generation process. Concretely, we designed an expectation-maximization
approach that makes inferences about the evidence found in these texts,
offering an efficient way to focus attention on lengthy evidence passages.
Experiments conducted on two datasets, MQAE-diag and MQAE, demonstrate
the effectiveness of our framework for reasoning with textual evidence. Our
approach outperforms state-of-the-art models, achieving significant
improvements of 6.86 and 9.43 percentage points in Rouge-1 score, and 8.23
and 7.82 percentage points in Bleu-4 score, on the respective datasets.
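The abstract describes an expectation-maximization procedure for inferring which evidence passages support an answer. The paper's exact model is not given here, so the following is a minimal, hypothetical sketch of the general idea: each question is assigned a latent variable indicating its supporting passage, fixed relevance scores play the role of observation likelihoods, and EM estimates shared passage weights (the E-step computes posterior responsibilities, the M-step re-estimates the prior). All names and scores are illustrative assumptions.

```python
# Hypothetical EM sketch over evidence passages (not the paper's exact model).
# s[n][i] is a fixed, precomputed relevance score of passage i for question n,
# standing in for the likelihood p(answer_n | passage_i). EM estimates the
# shared passage prior `pi`.

def em_over_evidence(s, n_iters=50):
    n_examples = len(s)
    n_passages = len(s[0])
    pi = [1.0 / n_passages] * n_passages  # uniform initialization
    for _ in range(n_iters):
        # E-step: posterior responsibility q[n][i] proportional to pi[i] * s[n][i]
        q = []
        for row in s:
            unnorm = [p * l for p, l in zip(pi, row)]
            total = sum(unnorm)
            q.append([u / total for u in unnorm])
        # M-step: re-estimate the prior as the average responsibility
        pi = [sum(q[n][i] for n in range(n_examples)) / n_examples
              for i in range(n_passages)]
    return pi

# Toy relevance scores for 3 questions over 3 passages; passage 0 is
# consistently the most relevant, so EM concentrates weight on it.
scores = [[0.9, 0.3, 0.1],
          [0.8, 0.2, 0.2],
          [0.7, 0.4, 0.1]]
weights = em_over_evidence(scores)
```

In this toy setting the posterior responsibilities act as a soft attention distribution over passages, which is one way an EM loop can "focus attention on lengthy evidence passages" without scoring every passage equally.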
Related papers
- Tri-VQA: Triangular Reasoning Medical Visual Question Answering for Multi-Attribute Analysis [4.964280449393689]
We investigate the construction of a more cohesive and stable Med-VQA structure.
Motivated by causal effects, we propose a novel Triangular Reasoning VQA framework.
arXiv Detail & Related papers (2024-06-21T10:50:55Z)
- InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification [60.10193972862099]
This work proposes a framework to characterize and recover simplification-induced information loss in the form of question-and-answer pairs.
QA pairs are designed to help readers deepen their knowledge of a text.
arXiv Detail & Related papers (2024-01-29T19:00:01Z)
- Explanatory Argument Extraction of Correct Answers in Resident Medical Exams [5.399800035598185]
We present a new dataset which includes not only explanatory arguments for the correct answer, but also arguments to reason why the incorrect answers are not correct.
This new benchmark allows us to set up a novel extractive task which consists of identifying the explanation of the correct answer written by medical doctors.
arXiv Detail & Related papers (2023-12-01T13:22:35Z)
- PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering [56.25766322554655]
Medical Visual Question Answering (MedVQA) presents a significant opportunity to enhance diagnostic accuracy and healthcare delivery.
We propose a generative-based model for medical visual understanding by aligning visual information from a pre-trained vision encoder with a large language model.
We train the proposed model on PMC-VQA and then fine-tune it on multiple public benchmarks, e.g., VQA-RAD, SLAKE, and Image-Clef 2019.
arXiv Detail & Related papers (2023-05-17T17:50:16Z)
- Medical Question Summarization with Entity-driven Contrastive Learning [12.008269098530386]
This paper proposes a novel medical question summarization framework using entity-driven contrastive learning (ECL).
ECL employs medical entities in frequently asked questions (FAQs) as focuses and devises an effective mechanism to generate hard negative samples.
We find that some MQA datasets suffer from serious data leakage problems, such as the iCliniq dataset's 33% duplicate rate.
arXiv Detail & Related papers (2023-04-15T00:19:03Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Medical Question Understanding and Answering with Knowledge Grounding and Semantic Self-Supervision [53.692793122749414]
We introduce a medical question understanding and answering system with knowledge grounding and semantic self-supervision.
Our system is a pipeline that first summarizes a long, medical, user-written question, using a supervised summarization loss.
The system first matches the summarized user question with an FAQ from a trusted medical knowledge base, and then retrieves a fixed number of relevant sentences from the corresponding answer document.
arXiv Detail & Related papers (2022-09-30T08:20:32Z)
- Medical Visual Question Answering: A Survey [55.53205317089564]
Medical Visual Question Answering (VQA) is a combination of medical artificial intelligence and popular VQA challenges.
Given a medical image and a clinically relevant question in natural language, the medical VQA system is expected to predict a plausible and convincing answer.
arXiv Detail & Related papers (2021-11-19T05:55:15Z)
- Medical Knowledge-enriched Textual Entailment Framework [5.493804101940195]
We present a novel Medical Knowledge-Enriched Textual Entailment framework.
We evaluate our framework on the benchmark MEDIQA-RQE dataset and show that the knowledge-enriched dual-encoding mechanism helps achieve an absolute improvement of 8.27% over SOTA language models.
arXiv Detail & Related papers (2020-11-10T17:25:27Z)
- Interpretable Multi-Step Reasoning with Knowledge Extraction on Complex Healthcare Question Answering [89.76059961309453]
The HeadQA dataset contains multiple-choice questions from the public healthcare specialization exam.
These questions are among the most challenging for current QA systems.
We present a Multi-step reasoning with Knowledge extraction framework (MurKe) that makes full use of off-the-shelf pre-trained models.
arXiv Detail & Related papers (2020-08-06T02:47:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.