Medical Question Understanding and Answering with Knowledge Grounding
and Semantic Self-Supervision
- URL: http://arxiv.org/abs/2209.15301v1
- Date: Fri, 30 Sep 2022 08:20:32 GMT
- Title: Medical Question Understanding and Answering with Knowledge Grounding
and Semantic Self-Supervision
- Authors: Khalil Mrini, Harpreet Singh, Franck Dernoncourt, Seunghyun Yoon,
Trung Bui, Walter Chang, Emilia Farcas, Ndapa Nakashole
- Abstract summary: We introduce a medical question understanding and answering system with knowledge grounding and semantic self-supervision.
Our system is a pipeline that first summarizes a long, medical, user-written question, using a supervised summarization loss.
The system first matches the summarized user question with an FAQ from a trusted medical knowledge base, and then retrieves a fixed number of relevant sentences from the corresponding answer document.
- Score: 53.692793122749414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current medical question answering systems have difficulty processing long,
detailed and informally worded questions submitted by patients, called Consumer
Health Questions (CHQs). To address this issue, we introduce a medical question
understanding and answering system with knowledge grounding and semantic
self-supervision. Our system is a pipeline that first summarizes a long,
medical, user-written question, using a supervised summarization loss. Then,
our system performs a two-step retrieval to return answers. The system first
matches the summarized user question with an FAQ from a trusted medical
knowledge base, and then retrieves a fixed number of relevant sentences from
the corresponding answer document. In the absence of labels for question
matching or answer relevance, we design three novel, self-supervised,
semantically guided losses. We evaluate our model against two strong
retrieval-based question answering baselines. Evaluators ask their own
questions and rate the answers retrieved by the baselines and by our system
according to their relevance. They find that our system retrieves more relevant
answers while running 20 times faster. Our self-supervised losses
also help the summarizer achieve higher scores in ROUGE, as well as in human
evaluation metrics. We release our code to encourage further research.
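A minimal sketch of the two-step retrieval described above: (1) match the summarized user question against FAQ questions from a trusted knowledge base, then (2) keep a fixed number of the most relevant sentences from the matched FAQ's answer document. The class name, model checkpoint, and data layout are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical two-step retriever: FAQ matching, then answer-sentence retrieval.
from sentence_transformers import SentenceTransformer, util

class TwoStepRetriever:
    def __init__(self, faq_questions, faq_answer_sentences, k=5):
        # faq_questions: list[str]; faq_answer_sentences: list[list[str]]
        # (each answer document pre-split into sentences), aligned by index.
        self.model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
        self.faq_answer_sentences = faq_answer_sentences
        self.k = k
        self.faq_embs = self.model.encode(faq_questions, convert_to_tensor=True)

    def answer(self, summarized_question: str) -> list[str]:
        # Step 1: question matching -- pick the closest FAQ to the summarized question.
        q_emb = self.model.encode(summarized_question, convert_to_tensor=True)
        faq_idx = int(util.cos_sim(q_emb, self.faq_embs).argmax())

        # Step 2: answer-sentence retrieval -- rank the sentences of the
        # matched answer document and return a fixed number of them.
        sentences = self.faq_answer_sentences[faq_idx]
        sent_embs = self.model.encode(sentences, convert_to_tensor=True)
        scores = util.cos_sim(q_emb, sent_embs)[0]
        top = scores.argsort(descending=True)[: self.k]
        return [sentences[i] for i in top.tolist()]
```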
Related papers
- Do RAG Systems Cover What Matters? Evaluating and Optimizing Responses with Sub-Question Coverage [74.70255719194819]
We introduce a novel framework based on sub-question coverage, which measures how well a RAG system addresses different facets of a question.
We use this framework to evaluate three commercial generative answer engines: You.com, Perplexity AI, and Bing Chat.
We find that while all answer engines cover core sub-questions more often than background or follow-up ones, they still miss around 50% of core sub-questions.
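As a rough illustration of the coverage idea (not the paper's evaluation protocol), coverage can be computed as the fraction of a question's sub-questions that a response addresses; the keyword-overlap judge below is a stand-in assumption.
```python
# Toy sub-question coverage: share of sub-questions an answer addresses.
# The overlap heuristic stands in for whatever judge the paper actually uses.
def is_addressed(sub_question: str, answer: str, min_overlap: int = 2) -> bool:
    sub_terms = {w.lower() for w in sub_question.split() if len(w) > 3}
    ans_terms = {w.lower() for w in answer.split()}
    return len(sub_terms & ans_terms) >= min_overlap

def sub_question_coverage(sub_questions: list[str], answer: str) -> float:
    if not sub_questions:
        return 0.0
    return sum(is_addressed(sq, answer) for sq in sub_questions) / len(sub_questions)
```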
arXiv Detail & Related papers (2024-10-20T22:59:34Z)
- A Joint-Reasoning based Disease Q&A System [6.117758142183177]
Medical question answer (QA) assistants respond to lay users' health-related queries by synthesizing information from multiple sources.
They can serve as vital tools to alleviate issues of misinformation, information overload, and complexity of medical language.
arXiv Detail & Related papers (2024-01-06T09:55:22Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- Interactive Question Answering Systems: Literature Review [17.033640293433397]
Interactive question answering is a recently proposed and increasingly popular solution that resides at the intersection of question answering and dialogue systems.
By permitting the user to ask more questions, interactive question answering enables users to dynamically interact with the system and receive more precise results.
This survey offers a detailed overview of the interactive question-answering methods that are prevalent in current literature.
arXiv Detail & Related papers (2022-09-04T13:46:54Z)
- SPBERTQA: A Two-Stage Question Answering System Based on Sentence Transformers for Medical Texts [2.5199066832791535]
This paper proposes a two-stage QA system based on Sentence-BERT (SBERT) using multiple negatives ranking (MNR) loss combined with BM25.
With the obtained results, this system achieves better performance than traditional methods.
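A minimal sketch of such a BM25-plus-SBERT retrieve-and-rerank setup; the `rank_bm25` and `sentence-transformers` usage, checkpoint name, and candidate counts are assumptions rather than the paper's configuration (which fine-tunes SBERT with a multiple-negatives-ranking loss).
```python
# Hypothetical two-stage retrieval: BM25 for lexical recall, then an SBERT
# model for semantic reranking. Here a generic pretrained checkpoint stands
# in for the MNR-fine-tuned model used in the paper's setting.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

def two_stage_retrieve(query: str, passages: list[str],
                       n_candidates: int = 20, top_k: int = 3) -> list[str]:
    # Stage 1: BM25 narrows the corpus to a shortlist of candidates.
    bm25 = BM25Okapi([p.lower().split() for p in passages])
    lexical = bm25.get_scores(query.lower().split())
    shortlist = sorted(range(len(passages)), key=lambda i: lexical[i], reverse=True)[:n_candidates]

    # Stage 2: SBERT embeddings rerank the shortlist by cosine similarity.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
    q_emb = model.encode(query, convert_to_tensor=True)
    cand_embs = model.encode([passages[i] for i in shortlist], convert_to_tensor=True)
    order = util.cos_sim(q_emb, cand_embs)[0].argsort(descending=True)[:top_k]
    return [passages[shortlist[i]] for i in order.tolist()]
```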
arXiv Detail & Related papers (2022-06-20T07:07:59Z)
- Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards [20.342580435464072]
We introduce a reinforcement learning-based framework for abstractive question summarization.
We propose two novel rewards obtained from the downstream tasks of (i) question-type identification and (ii) question-focus recognition.
These rewards ensure the generation of semantically valid questions and encourage the inclusion of key medical entities/foci in the question summary.
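The reward design can be sketched abstractly as a weighted combination of a question-type agreement score and a question-focus overlap score; the component scorers below are crude placeholders for the dedicated models the paper trains.
```python
# Abstract sketch of combining the two semantic rewards for policy-gradient
# training of the summarizer. Both component scorers are placeholder
# heuristics, not the paper's question-type and question-focus models.
def question_type_agreement(generated: str, reference: str) -> float:
    # Placeholder: check that both summaries open with the same question word.
    return float(generated.lower().split()[:1] == reference.lower().split()[:1])

def question_focus_overlap(generated: str, reference: str) -> float:
    # Placeholder: token overlap in place of medical-entity/focus overlap.
    gen, ref = set(generated.lower().split()), set(reference.lower().split())
    return len(gen & ref) / max(len(ref), 1)

def semantic_reward(generated: str, reference: str,
                    w_type: float = 0.5, w_focus: float = 0.5) -> float:
    # Weighted mix used as the reward signal during RL fine-tuning.
    return (w_type * question_type_agreement(generated, reference)
            + w_focus * question_focus_overlap(generated, reference))
```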
arXiv Detail & Related papers (2021-07-01T02:06:46Z)
- Question-aware Transformer Models for Consumer Health Question Summarization [20.342580435464072]
We develop an abstractive question summarization model that leverages the semantic interpretation of a question via recognition of medical entities.
When evaluated on the MeQSum benchmark corpus, our framework outperformed the state-of-the-art method by 10.2 ROUGE-L points.
arXiv Detail & Related papers (2021-06-01T04:21:31Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Interpretable Multi-Step Reasoning with Knowledge Extraction on Complex Healthcare Question Answering [89.76059961309453]
The HeadQA dataset contains multiple-choice questions from the public healthcare specialization exam.
These questions are among the most challenging for current QA systems.
We present a Multi-step reasoning with Knowledge extraction framework (MurKe), which strives to make full use of off-the-shelf pre-trained models.
arXiv Detail & Related papers (2020-08-06T02:47:46Z)
- Unsupervised Question Decomposition for Question Answering [102.56966847404287]
We propose an algorithm for One-to-N Unsupervised Sequence transduction (ONUS) that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions.
We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets.
arXiv Detail & Related papers (2020-02-22T19:40:35Z)