Representation Learning for Person or Entity-centric Knowledge Graphs: An Application in Healthcare
- URL: http://arxiv.org/abs/2305.05640v3
- Date: Mon, 9 Oct 2023 11:58:55 GMT
- Title: Representation Learning for Person or Entity-centric Knowledge Graphs: An Application in Healthcare
- Authors: Christos Theodoropoulos, Natasha Mulligan, Thaddeus Stappenbeck, Joao Bettencourt-Silva
- Abstract summary: This paper presents an end-to-end representation learning framework to extract entity-centric KGs from structured and unstructured data.
We introduce a star-shaped ontology to represent the multiple facets of a person and use it to guide KG creation.
We highlight that this approach has several potential applications across domains and is open-sourced.
- Score: 0.757843972001219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) are a popular way to organise information based on
ontologies or schemas and have been used across a variety of scenarios from
search to recommendation. Despite advances in KGs, representing knowledge
remains a non-trivial task across industries and it is especially challenging
in the biomedical and healthcare domains due to complex interdependent
relations between entities, heterogeneity, lack of standardization, and
sparseness of data. KGs are used to discover diagnoses or prioritize genes
relevant to disease, but they often rely on schemas that are not centred around
a node or entity of interest, such as a person. Entity-centric KGs are
relatively unexplored but hold promise in representing important facets
connected to a central node and unlocking downstream tasks beyond graph
traversal and reasoning, such as generating graph embeddings and training graph
neural networks for a wide range of predictive tasks. This paper presents an
end-to-end representation learning framework to extract entity-centric KGs from
structured and unstructured data. We introduce a star-shaped ontology to
represent the multiple facets of a person and use it to guide KG creation.
Compact representations of the graphs are created leveraging graph neural
networks and experiments are conducted using different levels of heterogeneity
or explicitness. A readmission prediction task is used to evaluate the results
of the proposed framework, showing a stable system, robust to missing data,
that outperforms a range of baseline machine learning classifiers. We highlight
that this approach has several potential applications across domains and is
open-sourced. Lastly, we discuss lessons learned, challenges, and next steps
for the adoption of the framework in practice.
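Below is a minimal, hypothetical sketch of the pipeline the abstract describes: a star-shaped, person-centric graph whose facet nodes connect to a central person node, encoded into a compact embedding with a graph neural network. This is not the authors' released code; the library choice (PyTorch Geometric), the facet names, the node features, and the dimensions are illustrative assumptions.
```python
# A minimal sketch (not the authors' released implementation) of a star-shaped,
# person-centric graph embedded with a GNN, assuming PyTorch Geometric is available.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

# Star-shaped graph: a central "person" node (index 0) linked to facet nodes,
# e.g. demographics, diagnoses, procedures, and concepts extracted from notes.
num_facets = 4
edge_index = torch.tensor(
    [[0] * num_facets + list(range(1, num_facets + 1)),
     list(range(1, num_facets + 1)) + [0] * num_facets],
    dtype=torch.long)                      # undirected star stored as two directed edge lists
x = torch.randn(num_facets + 1, 16)        # placeholder node features (encoded attributes)
graph = Data(x=x, edge_index=edge_index)

class StarGNN(torch.nn.Module):
    """Two-layer GCN that pools the star graph into one compact person embedding."""
    def __init__(self, in_dim=16, hidden=32, out_dim=8):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, out_dim)

    def forward(self, data):
        h = torch.relu(self.conv1(data.x, data.edge_index))
        h = self.conv2(h, data.edge_index)
        batch = torch.zeros(h.size(0), dtype=torch.long)   # single graph in the batch
        return global_mean_pool(h, batch)                  # one embedding per person graph

embedding = StarGNN()(graph)
```
The pooled embedding could then feed a downstream classifier, for example the readmission prediction task used in the paper's evaluation.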
Related papers
- Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z)
- The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges [101.83124435649358]
The homophily principle holds that nodes with the same labels or similar attributes are more likely to be connected.
Recent work has identified a non-trivial set of datasets where GNNs' performance compared to NNs' is not satisfactory.
arXiv Detail & Related papers (2024-07-12T18:04:32Z)
- A Systematic Review of Deep Graph Neural Networks: Challenges, Classification, Architectures, Applications & Potential Utility in Bioinformatics [0.0]
Graph neural networks (GNNs) employ message transmission between graph nodes to represent graph dependencies.
GNNs have the potential to be an excellent tool for solving a wide range of biological challenges in bioinformatics research.
arXiv Detail & Related papers (2023-11-03T10:25:47Z)
- What can knowledge graph alignment gain with Neuro-Symbolic learning approaches? [1.8416014644193066]
Knowledge Graphs (KGs) are the backbone of many data-intensive applications.
Current algorithms fail to integrate logical reasoning with lexical, structural, and semantic learning from data.
This paper examines the current state of the art in KGA and explores the potential for neurosymbolic integration.
arXiv Detail & Related papers (2023-10-11T12:03:19Z)
- KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot Node Classification [75.95647590619929]
Zero-Shot Node Classification (ZNC) has been an emerging and crucial task in graph data analysis.
We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics.
A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
arXiv Detail & Related papers (2023-08-15T02:38:08Z)
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose Semantic Graph Convolutional Networks (SGCN), which explore implicit semantics by learning latent semantic paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)