Manifold structure in graph embeddings
- URL: http://arxiv.org/abs/2006.05168v3
- Date: Tue, 5 Jan 2021 11:17:07 GMT
- Title: Manifold structure in graph embeddings
- Authors: Patrick Rubin-Delanchy
- Abstract summary: This paper shows that existing random graph models, including graphon and other latent position models, predict the data should live near a much lower-dimensional set.
One may therefore circumvent the curse of dimensionality by employing methods which exploit hidden manifold structure.
- Score: 6.09170287691728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical analysis of a graph often starts with embedding, the process of
representing its nodes as points in space. How to choose the embedding
dimension is a nuanced decision in practice, but in theory a notion of true
dimension is often available. In spectral embedding, this dimension may be very
high. However, this paper shows that existing random graph models, including
graphon and other latent position models, predict the data should live near a
much lower-dimensional set. One may therefore circumvent the curse of
dimensionality by employing methods which exploit hidden manifold structure.
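The abstract's practical upshot admits a short illustration. The sketch below (my own, not the paper's code) spectrally embeds a latent position graph into a deliberately high dimension and then hands the points to an off-the-shelf manifold learner; the smooth kernel, the embedding dimension, and the choice of Isomap are illustrative assumptions.

```python
# Minimal sketch: spectral embedding followed by manifold learning.
# Assumes numpy, scipy, and scikit-learn; not the paper's code.
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.manifold import Isomap

def adjacency_spectral_embedding(A, d):
    """Embed nodes as rows of U|S|^{1/2} from the top-d eigenpairs of A."""
    vals, vecs = eigsh(A, k=d, which="LM")   # largest-magnitude eigenpairs
    return vecs * np.sqrt(np.abs(vals))      # scale columns by sqrt(|eigenvalue|)

# Toy latent position graph: a smooth kernel of 1-d latent variables, so the
# spectral embedding is high-dimensional but concentrates near a 1-d manifold.
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=500)
P = np.exp(-np.abs(t[:, None] - t[None, :]))     # graphon-style link probabilities
A = (rng.uniform(size=P.shape) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                   # undirected, no self-loops

X = adjacency_spectral_embedding(A, d=10)        # "true" dimension may be large
Y = Isomap(n_components=1).fit_transform(X)      # exploit the hidden 1-d structure
```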
Related papers
- Weighted Embeddings for Low-Dimensional Graph Representation [0.13499500088995461]
We propose embedding a graph into a weighted space, which is closely related to hyperbolic geometry but mathematically simpler.
We show that our weighted embeddings heavily outperform state-of-the-art Euclidean embeddings for heterogeneous graphs while using fewer dimensions.
arXiv Detail & Related papers (2024-10-08T13:41:03Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, called soft manifolds, that can address this problem.
Using soft manifolds for graph embedding, we can provide continuous spaces that support any downstream data-analysis task on complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Tight and fast generalization error bound of graph embedding in metric space [54.279425319381374]
We show that graph embedding in non-Euclidean metric space can outperform that in Euclidean space with much smaller training data than the existing bound has suggested.
Our new upper bound is significantly tighter and faster than the existing one, which can be exponential in $R$ and is $O(\frac{1}{S})$ at the fastest.
arXiv Detail & Related papers (2023-05-13T17:29:18Z)
- Implications of sparsity and high triangle density for graph representation learning [67.98498239263549]
Recent work has shown that sparse graphs containing many triangles cannot be reproduced using a finite-dimensional representation of the nodes.
Here, we show that such graphs can be reproduced using an infinite-dimensional inner product model, where the node representations lie on a low-dimensional manifold.
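A toy construction (mine, not the paper's model) makes the claim tangible: place latent positions on a circle, a one-dimensional manifold, and connect only nearby nodes. The resulting kernel model is an infinite-dimensional inner product model, and the graph it generates is sparse yet rich in triangles; the circle, the threshold, and the edge probability are assumptions chosen for illustration.

```python
# Sparse, triangle-dense graph from a 1-d manifold of latent positions.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
theta = rng.uniform(0, 2 * np.pi, n)          # latent positions on the circle
gap = np.abs(theta[:, None] - theta[None, :])
gap = np.minimum(gap, 2 * np.pi - gap)        # geodesic distance on the circle
P = np.where(gap < 20 * np.pi / n, 0.8, 0.0)  # local kernel: sparsity + triangles
A = rng.uniform(size=(n, n)) < P
A = np.triu(A, 1); A = (A | A.T).astype(float)

triangles = np.trace(A @ A @ A) / 6           # each triangle is counted 6 times
print(f"mean degree = {A.sum(axis=1).mean():.1f}, triangles = {int(triangles)}")
```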
arXiv Detail & Related papers (2022-10-27T09:15:15Z)
- A Brief Survey on Representation Learning based Graph Dimensionality Reduction Techniques [0.0]
Dimensionality reduction techniques map data from higher-dimensional spaces onto lower-dimensional ones, with varying degrees of information loss.
There exist several techniques that are efficient at generating embeddings from graph data and projecting them onto low-dimensional latent spaces.
We present this survey to outline the benefits as well as problems associated with the existing graph dimensionality reduction techniques.
arXiv Detail & Related papers (2022-10-13T04:29:24Z)
- Directed Graph Embeddings in Pseudo-Riemannian Manifolds [0.0]
We show that general directed graphs can be effectively represented by an embedding model that combines three components.
We demonstrate the representational capabilities of this method by applying it to the task of link prediction.
arXiv Detail & Related papers (2021-06-16T10:31:37Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions; see the sketch below.
Our methods are scalable to sparse graphs consisting of millions of nodes.
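To show what dyadic independence buys, here is a minimal sketch (not the paper's fitting procedure): once an ERG is approximated by an edge-independent distribution with block structure, sampling factorizes over node pairs and scales trivially. The two-block probability matrix and the memberships are illustrative assumptions.

```python
# Sampling from an edge-independent (dyadically independent) block model.
import numpy as np

rng = np.random.default_rng(2)
B = np.array([[0.10, 0.01],
              [0.01, 0.05]])            # assumed block-level edge probabilities
z = rng.integers(0, 2, size=1000)       # assumed block membership per node
P = B[z[:, None], z[None, :]]           # P_ij = B[z_i, z_j] for every pair
A = rng.uniform(size=P.shape) < P       # every edge drawn independently
A = np.triu(A, 1); A = A | A.T          # undirected, no self-loops
```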
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
- Concise Fuzzy Planar Embedding of Graphs: a Dimensionality Reduction Approach [0.2867517731896504]
We map a graph representation to a $k$-dimensional space and answer queries about neighboring nodes mainly by measuring Euclidean distances.
The accuracy of the answers decreases, but this is compensated for by fuzzy logic, which indicates the likelihood of error; a toy sketch follows this entry.
arXiv Detail & Related papers (2018-03-08T14:44:56Z)
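A toy sketch of the query scheme just described, under stated assumptions: given some $k$-dimensional embedding, a neighbor query is answered by Euclidean distance, and a fuzzy membership score in $[0,1]$ conveys the likelihood of error rather than a hard yes/no. The linear score function and the random stand-in embedding are my assumptions, not the paper's construction.

```python
# Fuzzy neighbor queries over a k-dimensional embedding via Euclidean distance.
import numpy as np

def fuzzy_neighbors(X, i, radius):
    """Per-node likelihood that each node is a neighbor of node i."""
    dist = np.linalg.norm(X - X[i], axis=1)                  # distances to node i
    score = np.clip(1.0 - dist / (2.0 * radius), 0.0, 1.0)   # linear fuzzy membership
    score[i] = 0.0                                           # exclude node i itself
    return score

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))               # stand-in k=4 embedding
likely = fuzzy_neighbors(X, i=0, radius=0.5)
print(np.argsort(-likely)[:5])              # five most plausible neighbors of node 0
```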