Implications of sparsity and high triangle density for graph
representation learning
- URL: http://arxiv.org/abs/2210.15277v2
- Date: Fri, 21 Apr 2023 15:33:03 GMT
- Title: Implications of sparsity and high triangle density for graph
representation learning
- Authors: Hannah Sansford, Alexander Modell, Nick Whiteley, Patrick
Rubin-Delanchy
- Abstract summary: Recent work has shown that sparse graphs containing many triangles cannot be reproduced using a finite-dimensional representation of the nodes.
Here, we show that such graphs can be reproduced using an infinite-dimensional inner product model, where the node representations lie on a low-dimensional manifold.
- Score: 67.98498239263549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that sparse graphs containing many triangles cannot be
reproduced using a finite-dimensional representation of the nodes, in which
link probabilities are inner products. Here, we show that such graphs can be
reproduced using an infinite-dimensional inner product model, where the node
representations lie on a low-dimensional manifold. Recovering a global
representation of the manifold is impossible in a sparse regime. However, we
can zoom in on local neighbourhoods, where a lower-dimensional representation
is possible. As our constructions allow the points to be uniformly distributed
on the manifold, we find evidence against the common perception that triangles
imply community structure.
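The abstract's central objects — a latent-position model where link probabilities are inner products of node representations, and the triangle counts such a model produces — can be illustrated with a small simulation. This is a hypothetical sketch (not the authors' code): positions are drawn uniformly in a low-dimensional box, edges appear independently with probability equal to the inner product, and triangles are counted directly from the adjacency structure.

```python
import itertools
import random

random.seed(0)

def sample_graph(n, dim, scale):
    # Finite-dimensional inner product model: latent positions drawn
    # uniformly in [0, scale]^dim; link probability is the inner product
    # of the two endpoints' positions (clipped to [0, 1]).
    pos = [[random.uniform(0, scale) for _ in range(dim)] for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i, j in itertools.combinations(range(n), 2):
        p = min(1.0, sum(a * b for a, b in zip(pos[i], pos[j])))
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def triangle_count(adj):
    # Count each triangle once, at its minimum-labelled vertex:
    # i < j < k with edges ij, ik, jk all present.
    return sum(
        1
        for i in adj
        for j, k in itertools.combinations(sorted(adj[i]), 2)
        if j > i and k in adj[j]
    )

adj = sample_graph(n=200, dim=2, scale=0.1)
edges = sum(len(nbrs) for nbrs in adj.values()) // 2
print(edges, triangle_count(adj))
```

With a small `scale`, the sampled graph is sparse, and in this finite-dimensional setting it produces few triangles — the regime the paper argues cannot reproduce sparse, triangle-dense graphs. The parameter values here are illustrative only.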
Related papers
- Weighted Embeddings for Low-Dimensional Graph Representation [0.13499500088995461]
We propose embedding a graph into a weighted space, which is closely related to hyperbolic geometry but mathematically simpler.
We show that our weighted embeddings heavily outperform state-of-the-art Euclidean embeddings for heterogeneous graphs while using fewer dimensions.
arXiv Detail & Related papers (2024-10-08T13:41:03Z)
- Hyperbolic Heterogeneous Graph Attention Networks [3.0165549581582454]

Most previous heterogeneous graph embedding models represent elements in a heterogeneous graph as vector representations in a low-dimensional Euclidean space.
We propose Hyperbolic Heterogeneous Graph Attention Networks (HHGAT) that learn vector representations in hyperbolic spaces with meta-path instances.
We conducted experiments on three real-world heterogeneous graph datasets, demonstrating that HHGAT outperforms state-of-the-art heterogeneous graph embedding models in node classification and clustering tasks.
arXiv Detail & Related papers (2024-04-15T04:45:49Z)
- A Scalable Combinatorial Solver for Elastic Geometrically Consistent 3D Shape Matching [69.14632473279651]
We present a scalable algorithm for globally optimizing over the space of geometrically consistent mappings between 3D shapes.
We propose a novel primal problem coupled with a Lagrange dual problem that is several orders of magnitude faster than previous solvers.
arXiv Detail & Related papers (2022-04-27T09:47:47Z)
- Geometric Graph Representation Learning via Maximizing Rate Reduction [73.6044873825311]
Learning node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
We propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner.
G2R maps nodes in distinct groups into different subspaces, while each subspace is compact and different subspaces are dispersed.
arXiv Detail & Related papers (2022-02-13T07:46:24Z)
- Spectral embedding and the latent geometry of multipartite networks [67.56499794542228]
Many networks are multipartite, meaning their nodes can be divided into partitions and nodes of the same partition are never connected.
This paper demonstrates that the node representations obtained via spectral embedding live near partition-specific low-dimensional subspaces of a higher-dimensional ambient space.
We propose a follow-on step after spectral embedding, to recover node representations in their intrinsic rather than ambient dimension.
arXiv Detail & Related papers (2022-02-08T15:52:03Z)
- Heterogeneous manifolds for curvature-aware graph embedding [6.3351090376024155]
Graph embeddings are used in a broad range of Graph ML applications.
The quality of such embeddings crucially depends on whether the geometry of the space matches that of the graph.
arXiv Detail & Related papers (2022-02-02T18:18:35Z)
- Directed Graph Embeddings in Pseudo-Riemannian Manifolds [0.0]
We show that general directed graphs can be effectively represented by an embedding model that combines three components.
We demonstrate the representational capabilities of this method by applying it to the task of link prediction.
arXiv Detail & Related papers (2021-06-16T10:31:37Z)
- Can entanglement hide behind triangle-free graphs? [0.0]
We show that diagonal zero patterns in suitable matrix representations admit a nice description in terms of triangle-free graphs.
We also develop a recipe to construct a plethora of unique classes of positive partial transpose entangled triangle-free states in arbitrary dimensions.
We link the task of entanglement detection in general states to the well-known graph-theoretic problem of finding triangle-free-induced subgraphs in a given graph.
arXiv Detail & Related papers (2020-10-22T17:29:59Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z) - Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
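Several of the papers above — notably the multipartite-networks entry — rely on spectral embedding to obtain node representations whose inner products approximate link probabilities. A minimal sketch of adjacency spectral embedding (a standard construction, not any one paper's implementation; the example matrix is a made-up two-block toy graph):

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    # Embed each node using the d eigenpairs of largest |eigenvalue|,
    # scaling eigenvectors by sqrt(|eigenvalue|) so that inner products
    # of embeddings approximate entries of A.
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Toy adjacency matrix: two loosely connected triangles.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

X = adjacency_spectral_embedding(A, d=2)
print(X.shape)  # one 2-dimensional representation per node
```

The choice of `d` here is the ambient dimension; the multipartite-networks paper above proposes a follow-on step to recover representations in a lower intrinsic dimension.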
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.