From axioms over graphs to vectors, and back again: evaluating the
properties of graph-based ontology embeddings
- URL: http://arxiv.org/abs/2303.16519v2
- Date: Thu, 11 May 2023 09:24:24 GMT
- Title: From axioms over graphs to vectors, and back again: evaluating the
properties of graph-based ontology embeddings
- Authors: Fernando Zhapa-Camacho, Robert Hoehndorf
- Abstract summary: One approach to generating embeddings is to first introduce a set of nodes and edges for named entities and logical axioms, yielding a graph structure.
Methods that embed ontologies in graphs (graph projections) have different properties related to the type of axioms they can utilize.
- Score: 78.217418197549
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several approaches have been developed that generate embeddings for
Description Logic ontologies and use these embeddings in machine learning. One
approach to generating ontology embeddings is by first embedding the
ontologies into a graph structure, i.e., introducing a set of nodes and edges
for named entities and logical axioms, and then applying a graph embedding to
embed the graph in $\mathbb{R}^n$. Methods that embed ontologies in graphs
(graph projections) have different formal properties related to the type of
axioms they can utilize, whether the projections are invertible or not, and
whether they can be applied to asserted axioms or their deductive closure. We
analyze, qualitatively and quantitatively, several graph projection methods
that have been used to embed ontologies, and we demonstrate the effect of the
properties of graph projections on the performance of predicting axioms from
ontology embeddings. We find that there are substantial differences between
different projection methods, and that both the projection of axioms into
nodes and edges and the ontological choices made in representing knowledge
impact the
success of using ontology embeddings to predict axioms.
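The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual projection rules or embedding model: the axiom forms, the projection of each axiom to a labeled edge, and the toy translational objective (TransE-style, pushing h + r toward t) are all assumptions made for the example.

```python
import random

def project(axioms):
    """Project simple axioms into directed labeled edges (illustrative rules)."""
    edges = []
    for ax in axioms:
        if ax[0] == "subclass":        # C SubClassOf D
            _, c, d = ax
            edges.append((c, "subclassOf", d))
        elif ax[0] == "exists":        # C SubClassOf (R some D)
            _, c, r, d = ax
            edges.append((c, r, d))
    return edges

def embed(edges, dim=4, epochs=200, lr=0.05, seed=0):
    """Toy translational embedding in R^dim: push E[h] + R[r] toward E[t]."""
    rng = random.Random(seed)
    nodes = {n for h, _, t in edges for n in (h, t)}
    rels = {r for _, r, _ in edges}
    vec = lambda: [rng.uniform(-0.5, 0.5) for _ in range(dim)]
    E = {n: vec() for n in nodes}
    R = {r: vec() for r in rels}
    for _ in range(epochs):
        for h, r, t in edges:
            # gradient step on the squared residual ||E[h] + R[r] - E[t]||^2
            g = [E[h][i] + R[r][i] - E[t][i] for i in range(dim)]
            for i in range(dim):
                E[h][i] -= lr * g[i]
                R[r][i] -= lr * g[i]
                E[t][i] += lr * g[i]
    return E, R

axioms = [("subclass", "Cat", "Mammal"),
          ("exists", "Cat", "eats", "Fish")]
E, R = embed(project(axioms))
```

Note that this projection is not invertible in general: once both axiom forms become plain labeled edges, the edge alone no longer determines which axiom produced it, which is one of the formal properties the paper analyzes.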
Related papers
- Structural Node Embeddings with Homomorphism Counts [2.0131144893314232]
Homomorphism counts capture local structural information.
We experimentally show the effectiveness of homomorphism count based node embeddings.
Our approach capitalises on the efficient computability of graph homomorphism counts for bounded treewidth graph classes.
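A hedged sketch of the idea in the simplest case: for path patterns P_k, the number of homomorphisms from P_k into a graph G rooted at a node v equals the number of length-k walks starting at v, which a short recurrence computes. The paper covers far more general bounded-treewidth patterns; this example is only the path case.

```python
def walk_count_embedding(adj, max_len=3):
    """adj: {node: set of neighbors}. Returns, per node, the vector
    [#walks of length 1, ..., #walks of length max_len] starting there,
    i.e. rooted homomorphism counts of path patterns."""
    emb = {v: [] for v in adj}
    f = {v: 1 for v in adj}  # f_0: one length-0 walk per node
    for _ in range(max_len):
        # recurrence: f_k(v) = sum over neighbors u of f_{k-1}(u)
        f = {v: sum(f[u] for u in adj[v]) for v in adj}
        for v in adj:
            emb[v].append(f[v])
    return emb

# Triangle 0-1-2 with a pendant node 3 attached to node 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
emb = walk_count_embedding(adj)
```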
arXiv Detail & Related papers (2023-08-29T13:14:53Z)
- Weisfeiler and Lehman Go Paths: Learning Topological Features via Path Complexes [4.23480641508611]
Graph Neural Networks (GNNs) are theoretically bounded by the 1-Weisfeiler-Lehman test.
Our study presents a novel perspective by focusing on simple paths within graphs during the topological message-passing process.
arXiv Detail & Related papers (2023-08-13T19:45:20Z)
- On the Expressivity of Persistent Homology in Graph Learning [13.608942872770855]
Persistent homology, a technique from computational topology, has recently shown strong empirical performance in the context of graph classification.
This paper provides a brief introduction to persistent homology in the context of graphs, as well as a theoretical discussion and empirical analysis of its expressivity for graph learning tasks.
arXiv Detail & Related papers (2023-02-20T08:19:19Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
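As an illustration of what a bank of augmentation operations looks like, the sketch below uses generic edge-drop and feature-mask operations as stand-ins; the paper's actual operations are spectrally motivated, and these simpler ones are assumptions for the example.

```python
import random

def drop_edges(edges, p, rng):
    """Keep each edge independently with probability 1 - p."""
    return [e for e in edges if rng.random() >= p]

def mask_features(feats, p, rng):
    """Zero out each feature entry independently with probability p."""
    return {v: [0.0 if rng.random() < p else x for x in xs]
            for v, xs in feats.items()}

def make_views(edges, feats, p=0.2, seed=0):
    """Draw two independently augmented views of (edges, feats),
    as a contrastive objective would compare."""
    rng = random.Random(seed)
    return [(drop_edges(edges, p, rng), mask_features(feats, p, rng))
            for _ in range(2)]
```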
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural networks (HGNNs) are a popular technique for modeling and analyzing heterogeneous graphs.
We develop a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views guided respectively by node attributes and graph topologies.
In this approach, we adopt distinct, well-suited attribute and topology fusion mechanisms in the two views, which helps mine relevant information from attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- Directed Graph Embeddings in Pseudo-Riemannian Manifolds [0.0]
We show that general directed graphs can be effectively represented by an embedding model that combines three components.
We demonstrate the representational capabilities of this method by applying it to the task of link prediction.
arXiv Detail & Related papers (2021-06-16T10:31:37Z)
- Persistent Homology and Graphs Representation Learning [0.7734726150561088]
We study the topological invariant properties encoded in node graph representational embeddings by utilizing tools available in persistent homology.
Our construction effectively defines a unique persistence-based graph descriptor, on both the graph and node levels.
To demonstrate the effectiveness of the proposed method, we study the topological descriptors induced by DeepWalk, Node2Vec and Diff2Vec.
arXiv Detail & Related papers (2021-02-25T15:26:21Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
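A minimal sketch of the first idea: each node propagates a one-hot identifier of itself, so after a few rounds it holds a local "context" vector over nearby nodes. Plain sum aggregation stands in for the paper's learned, permutation-equivariant message and update functions; everything beyond the one-hot propagation itself is an assumption for the example.

```python
def structural_mp(adj, rounds=2):
    """Propagate one-hot node identifiers for `rounds` steps. Sum
    aggregation over neighbors keeps the update permutation-equivariant;
    learned message/update functions are replaced by plain sums here."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    # each node starts with the one-hot encoding of itself
    ctx = {v: [1.0 if i == idx[v] else 0.0 for i in range(n)] for v in nodes}
    for _ in range(rounds):
        new = {}
        for v in nodes:
            agg = [0.0] * n
            for u in adj[v]:
                for i in range(n):
                    agg[i] += ctx[u][i]
            new[v] = [c + a for c, a in zip(ctx[v], agg)]  # self + neighbor sum
        ctx = new
    return ctx

# Path graph 0 - 1 - 2: after two rounds each node "sees" its 2-hop context
ctx = structural_mp({0: {1}, 1: {0, 2}, 2: {1}})
```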
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.