Medical Knowledge-enriched Textual Entailment Framework
- URL: http://arxiv.org/abs/2011.05257v1
- Date: Tue, 10 Nov 2020 17:25:27 GMT
- Title: Medical Knowledge-enriched Textual Entailment Framework
- Authors: Shweta Yadav, Vishal Pallagani, Amit Sheth
- Abstract summary: We present a novel Medical Knowledge-Enriched Textual Entailment framework.
We evaluate our framework on the benchmark MEDIQA-RQE dataset and show that the knowledge-enriched dual-encoding mechanism helps achieve an absolute improvement of 8.27% over SOTA language models.
- Score: 5.493804101940195
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: One of the cardinal tasks in achieving robust medical question answering
systems is textual entailment. The existing approaches make use of an ensemble
of pre-trained language models or data augmentation, often to clock higher
numbers on the validation metrics. However, two major shortcomings impede
higher success in identifying entailment: (1) understanding the focus/intent of
the question and (2) utilizing real-world background knowledge to
capture the context beyond the sentence. In this paper, we present a novel
Medical Knowledge-Enriched Textual Entailment framework that allows the model
to acquire a semantic and global representation of the input medical text with
the help of a relevant domain-specific knowledge graph. We evaluate our
framework on the benchmark MEDIQA-RQE dataset and show that the use of a
knowledge-enriched dual-encoding mechanism helps achieve an absolute
improvement of 8.27% over SOTA language models. We have made the source code
available here.
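To make the dual-encoding idea concrete, here is a minimal sketch of how a knowledge-enriched dual encoder for question entailment could look. The class name, the BERT backbone, the KG feature dimension, and the simple concatenation fusion are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a knowledge-enriched dual encoder for recognizing question
# entailment (RQE). The KG feature lookup and fusion choice are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class KnowledgeEnrichedDualEncoder(nn.Module):
    def __init__(self, lm_name="bert-base-uncased", kg_dim=200, num_labels=2):
        super().__init__()
        # Encoder 1: contextual representation of the question pair.
        self.text_encoder = AutoModel.from_pretrained(lm_name)
        hidden = self.text_encoder.config.hidden_size
        # Encoder 2: projects pre-computed KG entity features
        # (e.g. averaged concept embeddings) into the same space.
        self.kg_encoder = nn.Sequential(nn.Linear(kg_dim, hidden), nn.Tanh())
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask, kg_features):
        text_vec = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]              # [CLS] representation
        kg_vec = self.kg_encoder(kg_features)  # KG-side representation
        fused = torch.cat([text_vec, kg_vec], dim=-1)
        return self.classifier(fused)          # entailment vs. non-entailment logits


# Usage: encode a (consumer question, FAQ) pair plus a placeholder KG vector.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer("Does metformin cause lactic acidosis?",
                  "Can metformin lead to lactic acidosis?",
                  return_tensors="pt")
model = KnowledgeEnrichedDualEncoder()
logits = model(batch["input_ids"], batch["attention_mask"],
               kg_features=torch.randn(1, 200))
```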
Related papers
- Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language model (LLM) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z)
- Enhancing the vision-language foundation model with key semantic knowledge-emphasized report refinement [9.347971487478038]
This paper develops a novel vision-language representation learning framework by proposing a key semantic knowledge-emphasized report refinement method.
Our framework surpasses seven state-of-the-art methods in both fine-tuning and zero-shot settings.
arXiv Detail & Related papers (2024-01-21T07:57:04Z)
- Enhancing Biomedical Lay Summarisation with External Knowledge Graphs [28.956500948255677]
We investigate the effectiveness of three different approaches for incorporating knowledge graphs within lay summarisation models.
Our results confirm that integrating graph-based domain knowledge can significantly benefit lay summarisation by substantially increasing the readability of generated text.
arXiv Detail & Related papers (2023-10-24T10:25:21Z)
- Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution [48.86322922826514]
This paper defines a new task of Knowledge-aware Language Model Attribution (KaLMA)
First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios.
Second, we propose a new "Conscious Incompetence" setting considering the incomplete knowledge repository.
Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment.
arXiv Detail & Related papers (2023-10-09T11:45:59Z)
- Align, Reason and Learn: Enhancing Medical Vision-and-Language Pre-training with Knowledge [68.90835997085557]
We propose a systematic and effective approach that enhances medical vision-and-language pre-training with structured medical knowledge from three perspectives.
First, we align the representations of the vision encoder and the language encoder through knowledge.
Second, we inject knowledge into the multi-modal fusion model to enable the model to perform reasoning, using knowledge as a supplement to the input image and text.
Third, we guide the model to put emphasis on the most critical information in images and texts by designing knowledge-induced pretext tasks.
arXiv Detail & Related papers (2022-09-15T08:00:01Z)
- Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer [61.34424171458634]
We study whether integrating visual knowledge into a language model can fill the gap.
Our experiments show that visual knowledge transfer can improve performance in both low-resource and fully supervised settings.
arXiv Detail & Related papers (2022-03-14T22:02:40Z)
- A Practical Approach towards Causality Mining in Clinical Text using Active Transfer Learning [2.6125458645126907]
Causality mining is an active research area, which requires the application of state-of-the-art natural language processing techniques.
This work aims to create a framework that converts clinical text into causal knowledge.
arXiv Detail & Related papers (2020-12-10T06:51:13Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Learning Contextualized Document Representations for Healthcare Answer Retrieval [68.02029435111193]
Contextual Discourse Vectors (CDV) is a distributed document representation for efficient answer retrieval from long documents.
Our model leverages a dual encoder architecture with hierarchical LSTM layers and multi-task training to encode the position of clinical entities and aspects alongside the document discourse.
We show that our generalized model significantly outperforms several state-of-the-art baselines for healthcare passage ranking.
arXiv Detail & Related papers (2020-02-03T15:47:19Z)
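The CDV entry above describes a dual encoder with hierarchical LSTM layers and multi-task training; below is a rough, self-contained sketch of that idea. The module names, dimensions, the mean-pooling step, and the two task heads are illustrative assumptions, not the authors' exact configuration.

```python
# Rough sketch of a hierarchical (word -> sentence) BiLSTM document encoder
# with two jointly trained heads (clinical entity and aspect). Assumptions only.
import torch
import torch.nn as nn


class HierarchicalDiscourseEncoder(nn.Module):
    def __init__(self, word_dim=300, sent_dim=256, n_entities=50, n_aspects=10):
        super().__init__()
        # Word-level BiLSTM -> one vector per sentence (mean-pooled).
        self.word_lstm = nn.LSTM(word_dim, sent_dim // 2, batch_first=True,
                                 bidirectional=True)
        # Sentence-level BiLSTM encodes the document discourse.
        self.sent_lstm = nn.LSTM(sent_dim, sent_dim // 2, batch_first=True,
                                 bidirectional=True)
        # Multi-task heads: which clinical entity / aspect each passage covers.
        self.entity_head = nn.Linear(sent_dim, n_entities)
        self.aspect_head = nn.Linear(sent_dim, n_aspects)

    def forward(self, doc):                  # doc: [sentences, words, word_dim]
        word_out, _ = self.word_lstm(doc)    # [sentences, words, sent_dim]
        sent_vecs = word_out.mean(dim=1)     # [sentences, sent_dim]
        disc, _ = self.sent_lstm(sent_vecs.unsqueeze(0))
        disc = disc.squeeze(0)               # discourse-aware sentence vectors
        return self.entity_head(disc), self.aspect_head(disc)


# Usage with random word embeddings for a 4-sentence document.
model = HierarchicalDiscourseEncoder()
entity_logits, aspect_logits = model(torch.randn(4, 12, 300))
```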
This list is automatically generated from the titles and abstracts of the papers in this site.