Enhancing Biomedical Lay Summarisation with External Knowledge Graphs
- URL: http://arxiv.org/abs/2310.15702v1
- Date: Tue, 24 Oct 2023 10:25:21 GMT
- Title: Enhancing Biomedical Lay Summarisation with External Knowledge Graphs
- Authors: Tomas Goldsack, Zhihao Zhang, Chen Tang, Carolina Scarton, Chenghua Lin
- Abstract summary: We investigate the effectiveness of three different approaches for incorporating knowledge graphs within lay summarisation models.
Our results confirm that integrating graph-based domain knowledge can significantly benefit lay summarisation by substantially increasing the readability of generated text.
- Score: 28.956500948255677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous approaches for automatic lay summarisation are exclusively reliant
on the source article that, given it is written for a technical audience (e.g.,
researchers), is unlikely to explicitly define all technical concepts or state
all of the background information that is relevant for a lay audience. We
address this issue by augmenting eLife, an existing biomedical lay
summarisation dataset, with article-specific knowledge graphs, each containing
detailed information on relevant biomedical concepts. Using both automatic and
human evaluations, we systematically investigate the effectiveness of three
different approaches for incorporating knowledge graphs within lay
summarisation models, with each method targeting a distinct area of the
encoder-decoder model architecture. Our results confirm that integrating
graph-based domain knowledge can significantly benefit lay summarisation by
substantially increasing the readability of generated text and improving the
explanation of technical concepts.
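As a rough illustration of one input-level way to incorporate a knowledge graph into an encoder-decoder summarisation model (a common strategy; the special tokens and toy triples below are invented for illustration, not the paper's actual scheme):

```python
# Minimal sketch: linearise (head, relation, tail) triples from an
# article-specific knowledge graph and prepend them to the article text,
# so a standard encoder-decoder model sees both. Hypothetical markers.

def linearise_triples(triples):
    """Turn (head, relation, tail) triples into a flat text string."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

def build_encoder_input(article, triples, sep=" <GRAPH> "):
    """Concatenate linearised graph knowledge with the source article."""
    return linearise_triples(triples) + sep + article

triples = [
    ("mitochondrion", "is_a", "organelle"),
    ("mitochondrion", "produces", "ATP"),
]
model_input = build_encoder_input("Mitochondria power the cell...", triples)
print(model_input)
```

The augmented string would then be tokenised and fed to the encoder as usual; decoder- and attention-level integration strategies modify other parts of the architecture instead.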
Related papers
- The Role of Graph Topology in the Performance of Biomedical Knowledge Graph Completion Models [3.1666540219908272]
We conduct a comprehensive investigation into the properties of publicly available biomedical Knowledge Graphs.
We establish links to the accuracy observed in real-world applications.
We release all model predictions and a new suite of analysis tools.
arXiv Detail & Related papers (2024-09-06T08:09:15Z)
- A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis [48.84443450990355]
Deep networks have achieved broad success in analyzing natural images, but when applied to medical scans, they often fail in unexpected situations.
We investigate this challenge and focus on model sensitivity to domain shifts, such as data sampled from different hospitals or data confounded by demographic variables such as sex and race, in the context of chest X-rays and skin lesion images.
Taking inspiration from medical training, we propose giving deep networks a prior grounded in explicit medical knowledge communicated in natural language.
arXiv Detail & Related papers (2024-05-23T17:55:02Z) - Knowledge-enhanced Visual-Language Pretraining for Computational Pathology [68.6831438330526]
We consider the problem of visual representation learning for computational pathology, by exploiting large-scale image-text pairs gathered from public resources.
We curate a pathology knowledge tree that consists of 50,470 informative attributes for 4,718 diseases requiring pathology diagnosis from 32 human tissues.
arXiv Detail & Related papers (2024-04-15T17:11:25Z) - MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
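The image-text contrastive objective underlying this line of work can be sketched as a symmetric InfoNCE loss; this is the generic formulation, not MLIP's exact knowledge-guided loss:

```python
import numpy as np

def info_nce(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matching pairs sit on the diagonal of the similarity matrix; the loss
    pulls them together and pushes mismatched pairs apart.
    """
    # L2-normalise so the dot product is cosine similarity.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix
    n = len(logits)

    def xent(l):
        # Numerically stable cross-entropy with diagonal targets.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image->text and text->image directions.
    return (xent(logits) + xent(logits.T)) / 2
```

Knowledge-guided variants additionally align token, patch, or category representations against external knowledge, but the contrastive core is the same.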
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping requirements in computing power low.
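The adapter idea described above can be sketched as a standard bottleneck adapter (down-projection, non-linearity, up-projection, residual connection) inserted into a frozen pre-trained model; the dimensions and initialisation here are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

class BottleneckAdapter:
    """Lightweight bottleneck adapter, trained while the host PLM stays frozen.

    Zero-initialising the up-projection makes the adapter start as the
    identity, so inserting it does not perturb the pre-trained model.
    """
    def __init__(self, hidden=768, bottleneck=64, seed=0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0.0, 0.02, (hidden, bottleneck))
        self.up = np.zeros((bottleneck, hidden))  # identity at initialisation

    def __call__(self, x):
        # Residual: x + up(ReLU(down(x)))
        return x + np.maximum(x @ self.down, 0.0) @ self.up
```

Because only the small `down`/`up` matrices are trained, knowledge from a KG can be injected at a fraction of the cost of full fine-tuning.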
arXiv Detail & Related papers (2023-12-21T14:26:57Z)
- Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers [24.481854035628434]
Existing language models struggle to generate technical summaries that are on par with those produced by biomedical experts.
We propose a novel attention-based citation aggregation model that integrates domain-specific knowledge from citation papers.
Our model outperforms state-of-the-art approaches and achieves substantial improvements in abstractive biomedical text summarisation.
arXiv Detail & Related papers (2023-10-24T09:56:46Z)
- BERT Based Clinical Knowledge Extraction for Biomedical Knowledge Graph Construction and Analysis [0.4893345190925178]
We propose an end-to-end approach for knowledge extraction and analysis from biomedical clinical notes.
The proposed framework can successfully extract relevant structured information with high accuracy.
arXiv Detail & Related papers (2023-04-21T14:45:33Z)
- Align, Reason and Learn: Enhancing Medical Vision-and-Language Pre-training with Knowledge [68.90835997085557]
We propose a systematic and effective approach that enhances medical vision-and-language pre-training with structured medical knowledge from three perspectives.
First, we align the representations of the vision encoder and the language encoder through knowledge.
Second, we inject knowledge into the multi-modal fusion model to enable the model to perform reasoning using knowledge as the supplementation of the input image and text.
Third, we guide the model to put emphasis on the most critical information in images and texts by designing knowledge-induced pretext tasks.
arXiv Detail & Related papers (2022-09-15T08:00:01Z)
- Neural Multi-Hop Reasoning With Logical Rules on Biomedical Knowledge Graphs [10.244651735862627]
We conduct an empirical study based on the real-world task of drug repurposing.
We formulate this task as a link prediction problem where both compounds and diseases correspond to entities in a knowledge graph.
We propose a new method, PoLo, that combines policy-guided walks based on reinforcement learning with logical rules.
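The link-prediction formulation above can be illustrated with a standard translational embedding scorer such as TransE; note this is a generic baseline sketch, not PoLo's policy-guided reinforcement-learning approach, and the toy vectors are invented:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: a smaller ||h + r - t|| means the triple
    (head, relation, tail) is more likely to hold in the knowledge graph."""
    return np.linalg.norm(h + r - t)

# Toy drug-repurposing setup: does (compound, treats, disease) hold?
compound = np.array([1.0, 0.0])
treats   = np.array([0.0, 1.0])   # relation acts as a translation vector
disease  = np.array([1.0, 1.0])   # compound + treats lands exactly here
other    = np.array([5.0, -3.0])  # an unrelated candidate tail entity

print(transe_score(compound, treats, disease))  # low score: plausible
print(transe_score(compound, treats, other))    # high score: implausible
```

Ranking candidate tail entities by this score turns drug repurposing into picking the lowest-scoring diseases for a given compound; PoLo instead reaches tails via multi-hop walks constrained by logical rules.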
arXiv Detail & Related papers (2021-03-18T16:46:11Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Benchmark and Best Practices for Biomedical Knowledge Graph Embeddings [8.835844347471626]
We train several state-of-the-art knowledge graph embedding models on the SNOMED-CT knowledge graph.
We make a case for the importance of leveraging the multi-relational nature of knowledge graphs for learning biomedical knowledge representation.
arXiv Detail & Related papers (2020-06-24T14:47:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.