Pseudo-Euclidean Attract-Repel Embeddings for Undirected Graphs
- URL: http://arxiv.org/abs/2106.09671v2
- Date: Thu, 23 Mar 2023 16:45:19 GMT
- Title: Pseudo-Euclidean Attract-Repel Embeddings for Undirected Graphs
- Authors: Alexander Peysakhovich, Anna Klimovskaia Susmel, Leon Bottou
- Abstract summary: Dot product embeddings take a graph and construct vectors for nodes such that dot products between two vectors give the strength of the edge.
We remove the transitivity assumption by embedding nodes into a pseudo-Euclidean space.
Pseudo-Euclidean embeddings can compress networks efficiently, allow for multiple notions of nearest neighbors each with their own interpretation, and can be `slotted' into existing models.
- Score: 73.0261182389643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dot product embeddings take a graph and construct vectors for nodes such that
dot products between two vectors give the strength of the edge. Dot products
make a strong transitivity assumption; however, many important forces
generating graphs in the real world lead to non-transitive relationships. We
remove the transitivity assumption by embedding nodes into a pseudo-Euclidean
space - giving each node an attract and a repel vector. The inner product
between two nodes is defined by taking the dot product in attract vectors and
subtracting the dot product in repel vectors. Pseudo-Euclidean embeddings can
compress networks efficiently, allow for multiple notions of nearest neighbors
each with their own interpretation, and can be `slotted' into existing models
such as exponential family embeddings or graph neural networks for better link
prediction.
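The attract-repel inner product described above can be sketched in a few lines of NumPy. This is an illustrative toy (random vectors, made-up dimensions), not the paper's trained model: each node gets an attract vector and a repel vector, and similarity is the attract dot product minus the repel dot product.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_attract, d_repel = 4, 3, 2

# Hypothetical embeddings; in the paper these would be learned from a graph.
A = rng.normal(size=(n_nodes, d_attract))  # attract vectors
R = rng.normal(size=(n_nodes, d_repel))    # repel vectors

def ar_similarity(i, j):
    """Pseudo-Euclidean inner product: attract dot product minus repel dot product."""
    return A[i] @ A[j] - R[i] @ R[j]

# Full similarity matrix. It is symmetric, but unlike a plain dot-product
# Gram matrix it need not be positive semidefinite, so non-transitive
# relations (i~j and j~k strong, i~k weak) are representable.
S = A @ A.T - R @ R.T
```

Because the repel term is subtracted, two nodes can each be close to a common neighbor while remaining far from each other, which is exactly the transitivity assumption being dropped.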
Related papers
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
- Promotion/Inhibition Effects in Networks: A Model with Negative Probabilities [0.0]
Biological networks often encapsulate promotion/inhibition as signed edge-weights of a graph.
We address the inverse problem to determine network edge-weights based on a sign-indefinite adjacency and expression levels at the nodes.
arXiv Detail & Related papers (2023-07-15T08:03:05Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Semisupervised regression in latent structure networks on unknown manifolds [7.5722195869569]
We consider random dot product graphs, in which an edge is formed between two nodes with probability given by the inner product of their respective latent positions.
We propose a manifold learning and graph embedding technique to predict the response variable on out-of-sample nodes.
arXiv Detail & Related papers (2023-05-04T00:41:04Z)
- Spectral embedding and the latent geometry of multipartite networks [67.56499794542228]
Many networks are multipartite, meaning their nodes can be divided into partitions and nodes of the same partition are never connected.
This paper demonstrates that the node representations obtained via spectral embedding live near partition-specific low-dimensional subspaces of a higher-dimensional ambient space.
We propose a follow-on step after spectral embedding, to recover node representations in their intrinsic rather than ambient dimension.
arXiv Detail & Related papers (2022-02-08T15:52:03Z)
- An Interpretable Graph Generative Model with Heterophily [38.59200985962146]
We propose the first edge-independent graph generative model that is expressive enough to capture heterophily.
Our experiments demonstrate the effectiveness of our model for a variety of important application tasks.
arXiv Detail & Related papers (2021-11-04T17:34:39Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Directed Graph Representation through Vector Cross Product [2.398608007786179]
Graph embedding methods embed the nodes in a graph in low dimensional vector space while preserving graph topology.
Recent work on directed graphs proposed to preserve the direction of edges among nodes by learning two embeddings, source and target, for every node.
We propose a novel approach that takes advantage of the non-commutative property of the vector cross product to learn embeddings that inherently preserve the direction of edges among nodes.
arXiv Detail & Related papers (2020-10-21T03:17:44Z)
- Factorizable Graph Convolutional Networks [90.59836684458905]
We introduce FactorGCN, a novel graph convolutional network (GCN) that explicitly disentangles intertwined relations encoded in a graph.
FactorGCN takes a simple graph as input, and disentangles it into several factorized graphs.
We evaluate the proposed FactorGCN both qualitatively and quantitatively on the synthetic and real-world datasets.
arXiv Detail & Related papers (2020-10-12T03:01:40Z)
- Adversarial Directed Graph Embedding [43.69472660189029]
We propose a novel directed graph embedding framework based on a Generative Adversarial Network, called DGGAN.
The main idea is to use adversarial mechanisms to deploy a discriminator and two generators that jointly learn each node's source and target vectors.
Extensive experiments show that DGGAN consistently and significantly outperforms existing state-of-the-art methods across multiple graph mining tasks on directed graphs.
arXiv Detail & Related papers (2020-08-09T06:10:11Z)
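Two of the directed-graph entries above (the vector cross product paper and DGGAN) learn a separate source and target vector per node. A minimal sketch of the asymmetric scoring this enables, using random illustrative vectors rather than anything from the papers:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, d = 5, 8

# Hypothetical embeddings: one source and one target vector per node,
# which in the papers above would be learned jointly.
S_emb = rng.normal(size=(n_nodes, d))  # source vectors
T_emb = rng.normal(size=(n_nodes, d))  # target vectors

def edge_score(u, v):
    """Score for a directed edge u -> v: u's source vector with v's target vector."""
    return S_emb[u] @ T_emb[v]

# edge_score(u, v) generally differs from edge_score(v, u), so edge
# direction is preserved; a single symmetric embedding per node cannot do this.
```

The attract-repel construction in the main paper plays an analogous role for undirected graphs: two vector families per node buy extra expressiveness that a single dot-product embedding lacks.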
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.