Knowledge-augmented Graph Neural Networks with Concept-aware Attention for Adverse Drug Event Detection
- URL: http://arxiv.org/abs/2301.10451v3
- Date: Sat, 18 May 2024 18:26:32 GMT
- Title: Knowledge-augmented Graph Neural Networks with Concept-aware Attention for Adverse Drug Event Detection
- Authors: Shaoxiong Ji, Ya Gao, Pekka Marttinen
- Abstract summary: Adverse drug events (ADEs) are an important aspect of drug safety.
Various texts contain a wealth of information about ADEs.
Recent studies have applied word embedding and deep learning-based natural language processing to automate ADE detection from text.
We propose a concept-aware attention mechanism which learns features differently for the different types of nodes in the graph.
- Score: 9.334701229573739
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adverse drug events (ADEs) are an important aspect of drug safety. Various texts such as biomedical literature, drug reviews, and user posts on social media and medical forums contain a wealth of information about ADEs. Recent studies have applied word embedding and deep learning-based natural language processing to automate ADE detection from text. However, they did not explore incorporating explicit medical knowledge about drugs and adverse reactions or the corresponding feature learning. This paper adopts the heterogeneous text graph which describes relationships between documents, words and concepts, augments it with medical knowledge from the Unified Medical Language System, and proposes a concept-aware attention mechanism which learns features differently for the different types of nodes in the graph. We further utilize contextualized embeddings from pretrained language models and convolutional graph neural networks for effective feature representation and relational learning. Experiments on four public datasets show that our model achieves performance competitive with recent advances, and the concept-aware attention consistently outperforms other attention mechanisms.
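The core idea of concept-aware attention, as described in the abstract, is that document, word, and concept nodes in the heterogeneous text graph are scored with type-specific attention parameters rather than one shared set. The NumPy sketch below illustrates that pattern only; the dimensions, node counts, and random parameters are made up for illustration and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not from the paper.
d = 8                                   # node feature dimension
node_feats = rng.normal(size=(5, d))    # 5 nodes in a toy text graph
node_types = np.array([0, 0, 1, 1, 2])  # 0 = document, 1 = word, 2 = concept

# Concept-aware attention: one attention vector per node type, so each
# type of node is scored with its own learned parameters.
att_vectors = {t: rng.normal(size=d) for t in range(3)}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Score each node with the attention vector of its own type,
# then normalize across the neighborhood.
scores = np.array([node_feats[i] @ att_vectors[node_types[i]]
                   for i in range(len(node_feats))])
weights = softmax(scores)

# Aggregate neighbor features into a single message vector.
message = weights @ node_feats
```

A type-agnostic mechanism would use a single attention vector for all nodes; the per-type lookup above is the only difference, which is what makes the mechanism "concept-aware".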
Related papers
- Epidemiology-informed Network for Robust Rumor Detection [59.89351792706995]
We propose a novel Epidemiology-informed Network (EIN) that integrates epidemiological knowledge to enhance performance.
To adapt epidemiology theory to rumor detection, each user's stance toward the source information is expected to be annotated.
Our experimental results demonstrate that the proposed EIN not only outperforms state-of-the-art methods on real-world datasets but also exhibits enhanced robustness across varying tree depths.
arXiv Detail & Related papers (2024-11-20T00:43:32Z)
- A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis [48.84443450990355]
Deep networks have achieved broad success in analyzing natural images, but when applied to medical scans they often fail in unexpected situations.
We investigate this challenge and focus on model sensitivity to domain shifts, such as data sampled from different hospitals or data confounded by demographic variables such as sex and race, in the context of chest X-rays and skin lesion images.
Taking inspiration from medical training, we propose giving deep networks a prior grounded in explicit medical knowledge communicated in natural language.
arXiv Detail & Related papers (2024-05-23T17:55:02Z)
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
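Image-text contrastive learning of the kind MLIP builds on is commonly implemented with an InfoNCE-style objective, where each image is trained to match its paired text against all other texts in the batch. The sketch below is a generic illustration under that assumption, with random placeholder embeddings; it is not MLIP's actual loss, which adds divergence encoding and knowledge-guided terms.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 4, 8

# Hypothetical paired image/text embeddings (n matched pairs),
# L2-normalized as is standard for cosine-similarity contrastive losses.
img = rng.normal(size=(n, d))
txt = rng.normal(size=(n, d))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

tau = 0.07                      # temperature (a common default choice)
logits = img @ txt.T / tau      # similarity of every image to every text

def cross_entropy(logits, targets):
    # Numerically stable log-softmax cross entropy, row-wise.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

# InfoNCE: the correct match for image i is text i (the diagonal).
loss = cross_entropy(logits, np.arange(n))
```

In practice the loss is usually symmetrized (image-to-text plus text-to-image); the one-directional form above keeps the sketch short.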
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Representing visual classification as a linear combination of words [0.0]
We present an explainability strategy that uses a vision-language model to identify language-based descriptors of a visual classification task.
By leveraging a pre-trained joint embedding space between images and text, our approach estimates a new classification task as a linear combination of words.
We find that the resulting descriptors largely align with clinical knowledge despite a lack of domain-specific language training.
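Estimating a classification task as a linear combination of words can be sketched as a least-squares fit of a classifier direction onto a set of word embeddings, with the coefficients ranking the descriptors. The vocabulary and embeddings below are random placeholders, not the paper's pretrained joint image-text space.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Hypothetical setup: a classifier direction in a joint image-text
# embedding space, and embeddings for a small candidate vocabulary.
classifier_dir = rng.normal(size=d)
vocab = ["redness", "scaling", "symmetry", "border"]   # illustrative words
word_embs = rng.normal(size=(len(vocab), d))

# Express the classifier as a linear combination of word embeddings
# via least squares: solve word_embs.T @ coeffs ~= classifier_dir.
coeffs, *_ = np.linalg.lstsq(word_embs.T, classifier_dir, rcond=None)

# Words with the largest absolute coefficients are the most
# influential language-based descriptors of the classifier.
ranked = sorted(zip(vocab, coeffs), key=lambda p: -abs(p[1]))
```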
arXiv Detail & Related papers (2023-11-18T02:00:20Z) - Descriptive Knowledge Graph in Biomedical Domain [26.91431888505873]
We present a novel system that automatically extracts and generates informative and descriptive sentences from the biomedical corpus.
Unlike previous search engines or exploration systems that retrieve unconnected passages, our system organizes descriptive sentences as a graph.
We spotlight the application of our system in COVID-19 research, illustrating its utility in areas such as drug repurposing and literature curation.
arXiv Detail & Related papers (2023-10-18T03:10:25Z) - Improving Medical Dialogue Generation with Abstract Meaning
Representations [26.97253577302195]
Medical Dialogue Generation serves a critical role in telemedicine by facilitating the dissemination of medical expertise to patients.
Existing studies focus on incorporating textual representations, which limits their ability to capture the semantics of text.
We introduce the use of Abstract Meaning Representations (AMR) to construct graphical representations that delineate the roles of language constituents and medical entities.
arXiv Detail & Related papers (2023-09-19T13:31:49Z) - Align, Reason and Learn: Enhancing Medical Vision-and-Language
Pre-training with Knowledge [68.90835997085557]
We propose a systematic and effective approach to enhance structured medical knowledge from three perspectives.
First, we align the representations of the vision encoder and the language encoder through knowledge.
Second, we inject knowledge into the multi-modal fusion model to enable the model to perform reasoning using knowledge as the supplementation of the input image and text.
Third, we guide the model to put emphasis on the most critical information in images and texts by designing knowledge-induced pretext tasks.
arXiv Detail & Related papers (2022-09-15T08:00:01Z) - Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling
Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z) - Representation Learning for Networks in Biology and Medicine:
Advancements, Challenges, and Opportunities [18.434430658837258]
We have witnessed a rapid expansion of representation learning techniques into modeling, analysis, and learning with networks.
In this review, we put forward an observation that long-standing principles of network biology and medicine can provide the conceptual grounding for representation learning.
We synthesize a spectrum of algorithmic approaches that leverage topological features to embed networks into compact vector spaces.
arXiv Detail & Related papers (2021-04-11T00:20:00Z) - Benchmark and Best Practices for Biomedical Knowledge Graph Embeddings [8.835844347471626]
We train several state-of-the-art knowledge graph embedding models on the SNOMED-CT knowledge graph.
We make a case for the importance of leveraging the multi-relational nature of knowledge graphs for learning biomedical knowledge representation.
arXiv Detail & Related papers (2020-06-24T14:47:33Z)
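One common way the multi-relational structure of a knowledge graph is exploited is with translational embedding models such as TransE, where a triple (head, relation, tail) is plausible when head + relation lies close to tail. The sketch below uses random vectors and hypothetical biomedical entities purely to show the scoring-and-ranking pattern; it is not trained on SNOMED-CT and the triples are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 32

# Hypothetical tiny graph: random (untrained) entity and relation
# embeddings in the style of TransE.
entities = {name: rng.normal(size=d)
            for name in ["aspirin", "headache", "nausea"]}
relations = {name: rng.normal(size=d)
             for name in ["treats", "causes"]}

def transe_score(h, r, t):
    """TransE plausibility of triple (h, r, t): lower is better."""
    return np.linalg.norm(entities[h] + relations[r] - entities[t])

# Link prediction: rank candidate tails for the query (aspirin, treats, ?).
candidates = ["headache", "nausea"]
ranked = sorted(candidates,
                key=lambda t: transe_score("aspirin", "treats", t))
```

With trained embeddings, this ranking is what benchmark metrics such as mean rank and hits@k evaluate.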
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.