Attention-based Aspect Reasoning for Knowledge Base Question Answering
on Clinical Notes
- URL: http://arxiv.org/abs/2108.00513v1
- Date: Sun, 1 Aug 2021 17:58:46 GMT
- Title: Attention-based Aspect Reasoning for Knowledge Base Question Answering
on Clinical Notes
- Authors: Ping Wang, Tian Shi, Khushbu Agarwal, Sutanay Choudhury, Chandan K.
Reddy
- Abstract summary: We aim at creating a knowledge base from clinical notes to link different patients and clinical notes, and at performing knowledge base question answering (KBQA).
Based on the expert annotations in n2c2, we first created the ClinicalKBQA dataset that includes 8,952 QA pairs and covers questions about seven medical topics through 322 question templates.
We propose an attention-based aspect reasoning (AAR) method for KBQA and investigate the impact of different aspects of answers on prediction.
- Score: 12.831807443341214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question Answering (QA) in clinical notes has gained a lot of attention in
the past few years. Existing machine reading comprehension approaches in
clinical domain can only handle questions about a single block of clinical
texts and fail to retrieve information about different patients and clinical
notes. To handle more complex questions, we aim at creating a knowledge base from
clinical notes to link different patients and clinical notes, and at performing
knowledge base question answering (KBQA). Based on the expert annotations in
n2c2, we first created the ClinicalKBQA dataset that includes 8,952 QA pairs
and covers questions about seven medical topics through 322 question templates.
Then, we proposed an attention-based aspect reasoning (AAR) method for KBQA and
investigated the impact of different aspects of answers (e.g., entity, type,
path, and context) for prediction. The AAR method achieves better performance
due to the well-designed encoder and attention mechanism. In the experiments,
we find that the type and path aspects enable the model to identify answers
satisfying general conditions, producing lower precision and higher recall.
On the other hand, the entity and context aspects constrain the answers with
node-specific information, leading to higher precision and lower recall.
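The core idea of attending over answer aspects can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' actual model: the embeddings, function names, and scoring scheme below are illustrative assumptions. A question embedding attends over the four aspect embeddings (entity, type, path, context) of a candidate answer, and the candidate is scored by the attention-weighted aspect matches:

```python
import math

ASPECTS = ["entity", "type", "path", "context"]

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    # Dot product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def score_candidate(question_vec, aspect_vecs):
    """Score one candidate answer by attending over its aspect embeddings.

    question_vec: embedding of the question (list of floats).
    aspect_vecs: dict mapping each aspect name to the embedding of that
                 aspect of the candidate answer.
    """
    # Attention logits: how well each aspect matches the question.
    logits = [dot(question_vec, aspect_vecs[a]) for a in ASPECTS]
    # Attention weights: relative importance of each aspect for this question.
    weights = softmax(logits)
    # Final score: attention-weighted sum of the per-aspect match scores.
    return sum(w * l for w, l in zip(weights, logits))

if __name__ == "__main__":
    q = [1.0, 0.0]
    # A candidate whose aspects align with the question scores higher
    # than one whose aspects do not.
    aligned = {a: [1.0, 0.0] for a in ASPECTS}
    misaligned = {a: [0.0, 1.0] for a in ASPECTS}
    print(score_candidate(q, aligned) > score_candidate(q, misaligned))
```

In this toy setup, a candidate whose aspect embeddings point in the same direction as the question receives more attention mass on the matching aspects and therefore a higher overall score, mirroring how entity/context aspects sharpen precision while type/path aspects broaden recall.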
Related papers
- HiQuE: Hierarchical Question Embedding Network for Multimodal Depression Detection [11.984035389013426]
HiQuE is a novel depression detection framework that leverages the hierarchical relationship between primary and follow-up questions in clinical interviews.
We conduct extensive experiments on the widely-used clinical interview data, DAIC-WOZ, where our model outperforms other state-of-the-art multimodal depression detection models.
arXiv Detail & Related papers (2024-08-07T09:23:01Z)
- K-QA: A Real-World Medical Q&A Benchmark [12.636564634626422]
We construct K-QA, a dataset containing 1,212 patient questions originating from real-world conversations held on K Health.
We employ a panel of in-house physicians to answer and manually decompose a subset of K-QA into self-contained statements.
We evaluate several state-of-the-art models, as well as the effect of in-context learning and medically-oriented augmented retrieval schemes.
arXiv Detail & Related papers (2024-01-25T20:11:04Z)
- A Cross Attention Approach to Diagnostic Explainability using Clinical Practice Guidelines for Depression [13.000907040545583]
We develop a method to enhance attention in popular transformer models and generate clinician-understandable explanations for classification.
Inspired by how clinicians rely on their expertise when interacting with patients, we leverage relevant clinical knowledge to model patient inputs.
We develop such a system in the context of Mental Health (MH) using clinical practice guidelines (CPG) for diagnosing depression.
arXiv Detail & Related papers (2023-11-23T08:42:18Z)
- Generating Explanations in Medical Question-Answering by Expectation Maximization Inference over Evidence [33.018873142559286]
We propose a novel approach for generating natural language explanations for answers predicted by medical QA systems.
Our system extracts knowledge from medical textbooks to enhance the quality of explanations during the explanation generation process.
arXiv Detail & Related papers (2023-10-02T16:00:37Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Human Evaluation and Correlation with Automatic Metrics in Consultation Note Generation [56.25869366777579]
In recent years, machine learning models have rapidly become better at generating clinical consultation notes.
We present an extensive human evaluation study where 5 clinicians listen to 57 mock consultations, write their own notes, post-edit a number of automatically generated notes, and extract all the errors.
We find that a simple, character-based Levenshtein distance metric performs on par with, if not better than, common model-based metrics like BERTScore.
arXiv Detail & Related papers (2022-04-01T14:04:16Z)
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective in four Transformer-based architectures: Contextual Document Vectors, Bi-, Poly- and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
arXiv Detail & Related papers (2021-08-02T10:42:52Z)
- Where's the Question? A Multi-channel Deep Convolutional Neural Network for Question Identification in Textual Data [83.89578557287658]
We propose a novel multi-channel deep convolutional neural network architecture, namely Quest-CNN, for identifying real questions in textual data.
We conducted a comprehensive performance comparison analysis of the proposed network against other deep neural networks.
The proposed Quest-CNN achieved the best F1 score both on a dataset of data entry-review dialogue in a dialysis care setting, and on a general domain dataset.
arXiv Detail & Related papers (2020-10-15T15:11:22Z)
- Interpretable Multi-Step Reasoning with Knowledge Extraction on Complex Healthcare Question Answering [89.76059961309453]
The HeadQA dataset contains multiple-choice questions from the public healthcare specialization exam.
These questions are the most challenging for current QA systems.
We present a Multi-step reasoning with Knowledge extraction framework (MurKe), which strives to make full use of off-the-shelf pre-trained models.
arXiv Detail & Related papers (2020-08-06T02:47:46Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.