Can Large Language Models Augment a Biomedical Ontology with missing Concepts and Relations?
- URL: http://arxiv.org/abs/2311.06858v1
- Date: Sun, 12 Nov 2023 14:20:55 GMT
- Title: Can Large Language Models Augment a Biomedical Ontology with missing Concepts and Relations?
- Authors: Antonio Zaitoun, Tomer Sagi, Szymon Wilk, Mor Peleg
- Abstract summary: We propose a method that uses conversational interactions with an LLM to analyze clinical practice guidelines.
Our initial experimentation with the prompts yielded promising results given a manually generated gold standard.
- Score: 1.1060425537315088
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ontologies play a crucial role in organizing and representing knowledge.
However, even current ontologies do not encompass all relevant concepts and
relationships. Here, we explore the potential of large language models (LLMs) to
expand an existing ontology in a semi-automated fashion. We demonstrate our
approach on the biomedical ontology SNOMED-CT, utilizing semantic relation types
from the widely used UMLS semantic network. We propose a method that uses
conversational interactions with an LLM to analyze clinical practice guidelines
(CPGs) and detect relationships among new medical concepts that are not present
in SNOMED-CT. Our initial experimentation with the conversational prompts,
evaluated against a manually generated gold standard, yielded promising
preliminary results that point to potential future improvements.
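As a rough illustration of the conversational approach the abstract describes, the sketch below asks a chat LLM which UMLS semantic relation, if any, links a concept found in a guideline to an existing SNOMED-CT concept. The prompt wording, the relation subset, the model name, and the OpenAI client usage are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: prompt an LLM to propose a UMLS semantic relation between a
# new guideline concept and a SNOMED-CT concept (illustrative, not the paper's code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative subset of UMLS semantic network relation types.
UMLS_RELATIONS = ["treats", "causes", "diagnoses", "prevents"]

def propose_relation(new_concept: str, snomed_concept: str, guideline_text: str) -> str:
    """Ask the model which relation, if any, links the two concepts."""
    prompt = (
        f"Guideline excerpt:\n{guideline_text}\n\n"
        f"Which one of the relations {UMLS_RELATIONS} holds between "
        f"'{new_concept}' and the SNOMED-CT concept '{snomed_concept}'? "
        "Answer with a single relation name, or 'none'."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; the paper's model may differ
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

print(propose_relation("SGLT2 inhibitor", "Type 2 diabetes mellitus",
                       "SGLT2 inhibitors are recommended to lower glucose ..."))
```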
Related papers
- Unified Representation of Genomic and Biomedical Concepts through Multi-Task, Multi-Source Contrastive Learning [45.6771125432388]
We introduce GENomic REpresentation with Language Model (GENEREL).
GENEREL is a framework designed to bridge genetic and biomedical knowledge bases.
Our experiments demonstrate GENEREL's ability to effectively capture the nuanced relationships between SNPs and clinical concepts.
arXiv Detail & Related papers (2024-10-14T04:19:52Z)
- Document-level Clinical Entity and Relation Extraction via Knowledge Base-Guided Generation [0.869967783513041]
We leverage the Unified Medical Language System (UMLS) knowledge base to accurately identify medical concepts.
Our framework selects UMLS concepts relevant to the text and combines them with prompts to guide language models in extracting entities.
arXiv Detail & Related papers (2024-07-13T22:45:46Z)
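A minimal sketch of the knowledge-base-guided prompting idea from the entry above: select UMLS concepts that plausibly occur in the note and embed them as hints in the extraction prompt. The concept dictionary and the naive substring matcher are illustrative stand-ins for a real UMLS lookup.

```python
# Toy UMLS subset: CUI -> preferred name (a real system would query the full KB).
UMLS_CONCEPTS = {"C0011849": "diabetes mellitus", "C0020538": "hypertension"}

def build_extraction_prompt(note: str) -> str:
    # Naive relevance filter: keep concepts whose name occurs in the note.
    relevant = {cui: name for cui, name in UMLS_CONCEPTS.items()
                if name in note.lower()}
    hints = "\n".join(f"{cui}: {name}" for cui, name in relevant.items())
    return ("Extract all clinical entities from the note below, "
            "preferring the listed UMLS concepts when they match.\n"
            f"UMLS candidates:\n{hints}\n\nNote:\n{note}\n\nEntities:")

print(build_extraction_prompt("Patient has hypertension and diabetes mellitus."))
```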
- Towards Ontology-Enhanced Representation Learning for Large Language Models [0.18416014644193066]
We propose a novel approach to improve an embedding-Large Language Model (embedding-LLM) of interest by infusing knowledge from a reference ontology.
The linguistic information (i.e. concept synonyms and descriptions) and structural information (i.e. is-a relations) are utilized to compile a comprehensive set of concept definitions.
These concept definitions are then employed to fine-tune the target embedding-LLM using a contrastive learning framework.
arXiv Detail & Related papers (2024-05-30T23:01:10Z)
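A sketch of the contrastive fine-tuning step described above, under stated assumptions: (concept, definition) pairs compiled from synonyms and is-a relations train an embedding model with in-batch negatives. The base model, the toy pairs, and the use of sentence-transformers are assumptions, not the paper's exact setup.

```python
# Contrastive fine-tuning on (concept, compiled definition) pairs (illustrative).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding-LLM of interest

pairs = [  # anchor = concept label, positive = definition from synonyms + is-a
    InputExample(texts=["myocardial infarction",
                        "heart attack; an is-a child of ischemic heart disease"]),
    InputExample(texts=["hypertension", "high blood pressure; a vascular disorder"]),
]
loader = DataLoader(pairs, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```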
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLMs) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Diversifying Knowledge Enhancement of Biomedical Language Models using Adapter Modules and Knowledge Graphs [54.223394825528665]
We develop an approach that uses lightweight adapter modules to inject structured biomedical knowledge into pre-trained language models.
We use two large KGs, the biomedical knowledge system UMLS and the novel biochemical OntoChem, with two prominent biomedical PLMs, PubMedBERT and BioLinkBERT.
We show that our methodology leads to performance improvements in several instances while keeping computational requirements low.
arXiv Detail & Related papers (2023-12-21T14:26:57Z)
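To make the adapter idea above concrete, here is a hand-rolled bottleneck adapter of the kind such methods insert into a frozen PLM layer. The hidden and bottleneck sizes are illustrative, and the surrounding training on knowledge-graph-derived data is omitted.

```python
# A bottleneck adapter: a small residual MLP added inside a frozen PLM layer.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden)    # project back up
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen PLM's behavior at init.
        return x + self.up(self.act(self.down(x)))

adapter = BottleneckAdapter()
h = torch.randn(1, 16, 768)   # a layer's hidden states (batch, seq, dim)
print(adapter(h).shape)       # torch.Size([1, 16, 768])
```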
- Exploring the In-context Learning Ability of Large Language Model for Biomedical Concept Linking [4.8882241537236455]
This research investigates a method that exploits the in-context learning capabilities of large models for biomedical concept linking.
The proposed approach adopts a two-stage retrieve-and-rank framework.
It achieved an accuracy of 90.% in BC5CDR disease entity normalization and 94.7% in chemical entity normalization.
arXiv Detail & Related papers (2023-07-03T16:19:50Z)
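A sketch of the two-stage retrieve-and-rank framework named above: stage 1 shortlists candidate concepts by embedding similarity; stage 2 would hand the shortlist to an LLM for in-context ranking. The encoder choice and candidate list are assumptions.

```python
# Stage 1: dense retrieval over a toy concept vocabulary (illustrative).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
candidates = ["myocardial infarction", "migraine", "myocarditis"]
cand_emb = encoder.encode(candidates, convert_to_tensor=True)

def retrieve(mention: str, k: int = 2) -> list[str]:
    scores = util.cos_sim(encoder.encode(mention, convert_to_tensor=True), cand_emb)[0]
    return [candidates[i] for i in scores.topk(k).indices.tolist()]

# Stage 2: build a ranking prompt for an LLM (the actual ranking call is omitted).
shortlist = retrieve("heart attack")
rank_prompt = f"Mention: 'heart attack'. Candidates: {shortlist}. Best concept?"
print(rank_prompt)
```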
- SynerGPT: In-Context Learning for Personalized Drug Synergy Prediction and Drug Design [64.69434941796904]
We propose a novel setting and models for in-context drug synergy learning.
We are given a small "personalized dataset" of 10-20 drug synergy relationships in the context of specific cancer cell targets.
Our goal is to predict additional drug synergy relationships in that context.
arXiv Detail & Related papers (2023-06-19T17:03:46Z)
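To make the in-context setting above concrete, here is an illustrative serialization of a small personalized synergy dataset into a few-shot prompt. SynerGPT itself is a trained model, so this prompt format and the toy tuples are assumptions, not the paper's method.

```python
# Illustrative few-shot serialization of a personalized synergy dataset.
personal_data = [  # (drug_a, drug_b, cell_line, synergistic?)
    ("drugA", "drugB", "MCF7", True),
    ("drugA", "drugC", "MCF7", False),
]

def synergy_prompt(query_pair: tuple[str, str], cell_line: str) -> str:
    shots = "\n".join(
        f"{a} + {b} on {c}: {'synergy' if s else 'no synergy'}"
        for a, b, c, s in personal_data
    )
    return (f"Known results:\n{shots}\n"
            f"{query_pair[0]} + {query_pair[1]} on {cell_line}: ?")

print(synergy_prompt(("drugB", "drugC"), "MCF7"))
```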
- Biologically-informed deep learning models for cancer: fundamental trends for encoding and interpreting oncology data [0.0]
We provide a structured literature analysis focused on Deep Learning (DL) models used to support inference in cancer biology.
The work focuses on how existing models address the need for better dialogue with prior knowledge, biological plausibility and interpretability.
arXiv Detail & Related papers (2022-07-02T12:11:35Z)
- Semantic Search for Large Scale Clinical Ontologies [63.71950996116403]
We present a deep learning approach to build a search system for large clinical vocabularies.
We propose a Triplet-BERT model together with a method for generating its training data directly from the ontology.
The model is evaluated using five real benchmark data sets, and the results show that our approach achieves strong performance on both free-text-to-concept and concept-to-concept search over large clinical vocabularies.
arXiv Detail & Related papers (2022-01-01T05:15:42Z)
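A sketch of triplet training for ontology search under stated assumptions: anchors are free-text queries, positives the correct concept, negatives a confusable neighbor. The toy triplets below stand in for the paper's generated training data.

```python
# Triplet training for free-text -> concept search (illustrative data).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")
triplets = [  # (anchor query, positive concept, hard negative concept)
    InputExample(texts=["heart attack", "myocardial infarction", "myocarditis"]),
    InputExample(texts=["high blood pressure", "hypertension", "hypotension"]),
]
loader = DataLoader(triplets, shuffle=True, batch_size=2)
model.fit(train_objectives=[(loader, losses.TripletLoss(model))], epochs=1)
```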
- UmlsBERT: Clinical Domain Knowledge Augmentation of Contextual Embeddings Using the Unified Medical Language System Metathesaurus [73.86656026386038]
We introduce UmlsBERT, a contextual embedding model that integrates domain knowledge during the pre-training process.
By augmenting input embeddings with UMLS semantic-type information and connecting words that share a concept during pre-training, UmlsBERT can encode clinical domain knowledge into word embeddings and outperform existing domain-specific models.
arXiv Detail & Related papers (2020-10-20T15:56:31Z)
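One of the UmlsBERT-style strategies above can be sketched as adding a learned semantic-type embedding to each token embedding before encoding. The vocabulary sizes, type count, and ids below are illustrative stand-ins for the real UMLS Metathesaurus mapping.

```python
# Enrich token embeddings with UMLS semantic-type embeddings (illustrative).
import torch
import torch.nn as nn

VOCAB, TYPES, DIM = 30522, 45, 768  # wordpieces, semantic types, hidden size

word_emb = nn.Embedding(VOCAB, DIM)
type_emb = nn.Embedding(TYPES, DIM, padding_idx=0)  # index 0 = no semantic type

token_ids = torch.tensor([[2054, 2003, 1996]])  # toy wordpiece ids
type_ids = torch.tensor([[0, 0, 7]])            # last token tagged with a type

enriched = word_emb(token_ids) + type_emb(type_ids)  # fed to the encoder
print(enriched.shape)  # torch.Size([1, 3, 768])
```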