Directed Graph Embeddings in Pseudo-Riemannian Manifolds
- URL: http://arxiv.org/abs/2106.08678v1
- Date: Wed, 16 Jun 2021 10:31:37 GMT
- Title: Directed Graph Embeddings in Pseudo-Riemannian Manifolds
- Authors: Aaron Sim, Maciej Wiatrak, Angus Brayne, Páidí Creed, Saee Paliwal
- Abstract summary: We show that general directed graphs can be effectively represented by an embedding model that combines three components.
We demonstrate the representational capabilities of this method by applying it to the task of link prediction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The inductive biases of graph representation learning algorithms are often
encoded in the background geometry of their embedding space. In this paper, we
show that general directed graphs can be effectively represented by an
embedding model that combines three components: a pseudo-Riemannian metric
structure, a non-trivial global topology, and a unique likelihood function that
explicitly incorporates a preferred direction in embedding space. We
demonstrate the representational capabilities of this method by applying it to
the task of link prediction on a series of synthetic and real directed graphs
from natural language applications and biology. In particular, we show that
low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce
equal or better graph representations than curved Riemannian manifolds of
higher dimensions.
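The abstract's three ingredients (a pseudo-Riemannian metric, a non-trivial topology, and a direction-aware likelihood) can be illustrated with a minimal sketch. This is not the authors' implementation: the Minkowski inner product with signature (-, +, ..., +) is standard, but the `edge_score` sigmoid likelihood and its use of the time coordinate as the "preferred direction" are illustrative assumptions only.

```python
import numpy as np

def minkowski_inner(u, v):
    """Minkowski inner product with signature (-, +, ..., +);
    the first coordinate is treated as time-like."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def edge_score(u, v):
    """Illustrative directed-edge likelihood (an assumption, not the
    paper's formula): combine the squared interval between the points
    with the sign of the time gap, so an edge u -> v is favoured when
    v lies toward u's 'future' (larger time coordinate)."""
    interval = minkowski_inner(u - v, u - v)   # squared Minkowski interval
    time_gap = v[0] - u[0]                     # preferred direction in embedding space
    return 1.0 / (1.0 + np.exp(-(time_gap - interval)))  # sigmoid likelihood

u = np.array([0.0, 0.2, -0.1])
v = np.array([1.0, 0.3, 0.0])
assert edge_score(u, v) > edge_score(v, u)  # the score is asymmetric in edge direction
```

Because the interval term is symmetric while the time gap flips sign, the score is inherently asymmetric, which is what lets such a model represent directed rather than undirected edges.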
Related papers
- Learning graphs and simplicial complexes from data [26.926502862698168]
We present a novel method to infer the underlying graph topology from available data.
We also identify three-node interactions, referred to in the literature as second-order simplicial complexes (SCs).
Experimental results on synthetic and real-world data showcase a superior performance for our approach compared to existing methods.
arXiv Detail & Related papers (2023-12-16T22:02:20Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifold, named soft manifold, that can solve this situation.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear to the number of nodes and edges.
arXiv Detail & Related papers (2023-09-08T02:44:37Z)
- Tight and fast generalization error bound of graph embedding in metric space [54.279425319381374]
We show that graph embedding in non-Euclidean metric space can outperform that in Euclidean space with much smaller training data than the existing bound has suggested.
Our new upper bound is significantly tighter and faster than the existing one, which can be exponential in $R$ and $O(\frac{1}{S})$ at the fastest.
arXiv Detail & Related papers (2023-05-13T17:29:18Z)
- Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of $[0, 1]^2$ in the graphon space.
We prove that these low-dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when dimensionality reduction ratios between layers are large.
arXiv Detail & Related papers (2022-12-15T22:11:34Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Graph Embeddings via Tensor Products and Approximately Orthonormal Codes [0.0]
We show that our representation falls under the bind-and-sum approach in hyperdimensional computing.
We establish some precise results characterizing the behavior of our method.
We briefly discuss its applications toward a dynamic compressed representation of large sparse graphs.
arXiv Detail & Related papers (2022-08-18T10:56:37Z)
- Heterogeneous manifolds for curvature-aware graph embedding [6.3351090376024155]
Graph embeddings are used in a broad range of Graph ML applications.
The quality of such embeddings crucially depends on whether the geometry of the space matches that of the graph.
arXiv Detail & Related papers (2022-02-02T18:18:35Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Computationally Tractable Riemannian Manifolds for Graph Embeddings [10.420394952839242]
We show how to learn and optimize graph embeddings in certain curved Riemannian spaces.
Our results serve as new evidence for the benefits of non-Euclidean embeddings in machine learning pipelines.
arXiv Detail & Related papers (2020-02-20T10:55:47Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.