Interpretable Multi-Step Reasoning with Knowledge Extraction on Complex
Healthcare Question Answering
- URL: http://arxiv.org/abs/2008.02434v1
- Date: Thu, 6 Aug 2020 02:47:46 GMT
- Title: Interpretable Multi-Step Reasoning with Knowledge Extraction on Complex
Healthcare Question Answering
- Authors: Ye Liu, Shaika Chowdhury, Chenwei Zhang, Cornelia Caragea, Philip S.
Yu
- Abstract summary: The HeadQA dataset contains multiple-choice questions from the public healthcare specialization exam.
These questions are among the most challenging for current QA systems.
We present a Multi-step reasoning with Knowledge extraction framework (MurKe)
that strives to make full use of off-the-shelf pre-trained models.
- Score: 89.76059961309453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Healthcare question answering assistance aims to provide customers
with healthcare information, and it appears widely on both the Web and the
mobile Internet. The questions usually require the assistant to have proficient
healthcare background knowledge as well as the ability to reason over that
knowledge.
Recently, HeadQA, a dataset involving complex healthcare reasoning, has been
proposed; it contains multiple-choice questions from the public healthcare
specialization exam. Unlike most other QA tasks that focus on
linguistic understanding, HeadQA requires deeper reasoning involving not only
knowledge extraction, but also complex reasoning with healthcare knowledge.
These questions are the most challenging for current QA systems, and the
current performance of the state-of-the-art method is slightly better than a
random guess. In order to solve this challenging task, we present a Multi-step
reasoning with Knowledge extraction framework (MurKe). The proposed framework
first extracts the healthcare knowledge as supporting documents from the large
corpus. In order to find the reasoning chain and choose the correct answer,
MurKe iterates between selecting the supporting documents, reformulating the
query representation using the supporting documents and getting entailment
score for each choice using the entailment model. The reformulation module
leverages selected documents for missing evidence, which maintains
interpretability. Moreover, MurKe strives to make full use of off-the-shelf
pre-trained models: with fewer trainable weights, a pre-trained model can
easily adapt to healthcare tasks with limited training samples. From the
experimental results and ablation study, our system is able to outperform
several strong baselines on the HeadQA dataset.
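The iterative loop described in the abstract (select supporting documents, reformulate the query with the selected evidence, then score each choice with an entailment model) can be sketched roughly as follows. This is a toy illustration under stated assumptions, not the authors' implementation: `bow_cosine` is a bag-of-words stand-in for both the pre-trained retriever and the entailment model, and the function names `murke_answer` and `bow_cosine` are hypothetical.

```python
# Hedged sketch of MurKe's loop: retrieval -> query reformulation -> entailment
# scoring. Bag-of-words cosine similarity stands in for the pre-trained models.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a toy retriever/entailment proxy."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def murke_answer(question: str, choices: list[str],
                 corpus: list[str], steps: int = 2) -> str:
    """Iterate: select a supporting document, fold it into the query
    representation, then score each choice against the final query."""
    query = question
    used = set()
    for _ in range(steps):
        # 1) select the supporting document most similar to the current query
        ranked = sorted(
            (i for i in range(len(corpus)) if i not in used),
            key=lambda i: bow_cosine(query, corpus[i]),
            reverse=True,
        )
        if not ranked:
            break
        best = ranked[0]
        used.add(best)
        # 2) reformulate: fold the selected evidence into the query; the chain
        #    of selected documents stays inspectable, hence interpretable
        query = query + " " + corpus[best]
    # 3) "entailment": pick the choice best supported by accumulated evidence
    return max(choices, key=lambda c: bow_cosine(query, c))
```

In the real system each of these three stubs would be an off-the-shelf pre-trained model, which is what lets the framework adapt with few trainable weights.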
Related papers
- Crafting Interpretable Embeddings by Asking LLMs Questions [89.49960984640363]
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks.
We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM.
We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli.
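The QA-Emb idea above can be sketched in a few lines: each embedding dimension is the answer to one yes/no question about the text, so every feature is directly interpretable. This is a toy stand-in, not the paper's method: `keyword_answer` replaces the LLM the paper actually queries, and all names here are illustrative.

```python
# Toy sketch of question-answering embeddings (QA-Emb): one 0/1 feature per
# yes/no question, so each dimension of the vector is human-readable.
def qa_embed(text: str, questions: list[str], answer_fn) -> list[int]:
    """Build an interpretable embedding from yes/no answers about the text."""
    return [1 if answer_fn(text, q) else 0 for q in questions]

def keyword_answer(text: str, question: str) -> bool:
    """Stand-in 'LLM': answers yes iff the question's quoted keyword appears.
    Assumes each question embeds its keyword in single quotes,
    e.g. "Does the text mention 'fever'?"."""
    keyword = question.split("'")[1]
    return keyword.lower() in text.lower()

questions = [
    "Does the text mention 'fever'?",
    "Does the text mention 'cough'?",
    "Does the text mention 'fracture'?",
]
```

Swapping `keyword_answer` for a real LLM call yields the paper's setting, where the question set itself defines the feature space.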
arXiv Detail & Related papers (2024-05-26T22:30:29Z)
- InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification [60.10193972862099]
This work proposes a framework to characterize and recover simplification-induced information loss in the form of question-and-answer pairs.
QA pairs are designed to help readers deepen their knowledge of a text.
arXiv Detail & Related papers (2024-01-29T19:00:01Z)
- A Joint-Reasoning based Disease Q&A System [6.117758142183177]
Medical question answering (QA) assistants respond to lay users' health-related queries by synthesizing information from multiple sources.
They can serve as vital tools to alleviate issues of misinformation, information overload, and complexity of medical language.
arXiv Detail & Related papers (2024-01-06T09:55:22Z)
- Generating Explanations in Medical Question-Answering by Expectation Maximization Inference over Evidence [33.018873142559286]
We propose a novel approach for generating natural language explanations for answers predicted by medical QA systems.
Our system extracts knowledge from medical textbooks to enhance the quality of the generated explanations.
arXiv Detail & Related papers (2023-10-02T16:00:37Z)
- Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering [83.74210749046551]
We propose to leverage question decomposition for heterogeneous knowledge integration.
We propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT).
Experiments on the complex QA datasets KQA Pro and Musique show that our framework outperforms SOTA methods significantly.
arXiv Detail & Related papers (2023-05-24T11:45:59Z)
- Large Language Models Need Holistically Thought in Medical Conversational QA [24.2230289885612]
The Holistically Thought (HoT) method is designed to guide the LLMs to perform the diffused and focused thinking for generating high-quality medical responses.
The proposed HoT method has been evaluated through automated and manual assessments in three different medical CQA datasets.
arXiv Detail & Related papers (2023-05-09T12:57:28Z)
- Medical Question Understanding and Answering with Knowledge Grounding and Semantic Self-Supervision [53.692793122749414]
We introduce a medical question understanding and answering system with knowledge grounding and semantic self-supervision.
Our system is a pipeline that first summarizes a long, medical, user-written question using a supervised summarization loss.
It then matches the summarized question with an FAQ from a trusted medical knowledge base, and retrieves a fixed number of relevant sentences from the corresponding answer document.
arXiv Detail & Related papers (2022-09-30T08:20:32Z)
- Modern Question Answering Datasets and Benchmarks: A Survey [5.026863544662493]
Question Answering (QA) is one of the most important natural language processing (NLP) tasks.
It aims to use NLP technologies to generate a corresponding answer to a given question based on a massive unstructured corpus.
In this paper, we investigate influential QA datasets that have been released in the era of deep learning.
arXiv Detail & Related papers (2022-06-30T05:53:56Z)
- Hierarchical Deep Multi-modal Network for Medical Visual Question Answering [25.633660028022195]
We propose a hierarchical deep multi-modal network that analyzes and classifies end-user questions/queries.
We integrate the QS model into the hierarchical deep multi-modal neural network to generate proper answers to queries related to medical images.
arXiv Detail & Related papers (2020-09-27T07:24:41Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.