Node Embedding for Homophilous Graphs with ARGEW: Augmentation of Random
walks by Graph Edge Weights
- URL: http://arxiv.org/abs/2308.05957v1
- Date: Fri, 11 Aug 2023 06:19:23 GMT
- Title: Node Embedding for Homophilous Graphs with ARGEW: Augmentation of Random
walks by Graph Edge Weights
- Authors: Jun Hee Kim, Jaeman Son, Hyunsoo Kim, Eunjo Lee
- Abstract summary: ARGEW is a novel augmentation method for random walks that expands the corpus in such a way that nodes with larger edge weights end up with closer embeddings.
With several real-world networks, we demonstrate that with ARGEW, compared to not using it, the desired pattern that node pairs with larger edge weights have closer embeddings is much clearer.
- Score: 2.2935273605606494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representing nodes in a network as dense vectors (node embeddings) is
important for understanding a given network and solving many downstream tasks. In
particular, for weighted homophilous graphs where similar nodes are connected
with larger edge weights, we desire node embeddings where node pairs with
strong weights have closer embeddings. Although random walk based node
embedding methods like node2vec and node2vec+ do work for weighted networks by
including edge weights in the walk transition probabilities, our experiments
show that the embedding result does not adequately reflect edge weights. In
this paper, we propose ARGEW (Augmentation of Random walks by Graph Edge
Weights), a novel augmentation method for random walks that expands the corpus
in such a way that nodes with larger edge weights end up with closer
embeddings. ARGEW can work with any random walk based node embedding method,
because it is independent of the random sampling strategy itself and works on
top of the already-performed walks. With several real-world networks, we
demonstrate that with ARGEW, compared to not using it, the desired pattern that
node pairs with larger edge weights have closer embeddings is much clearer. We
also examine ARGEW's performance in node classification: node2vec with ARGEW
outperforms pure node2vec and is not sensitive to hyperparameters (i.e.
consistently good). In fact, it achieves results comparable to a supervised
GCN, even without any node feature or label information during training.
Finally, we explain why ARGEW works consistently well by exploring the
coappearance distributions using a synthetic graph with clear structural roles.
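Since ARGEW is independent of the walk sampler and operates on already-performed walks, its corpus-expansion idea can be sketched in a few lines. This is an illustrative simplification, not the paper's exact algorithm: the function name and the copy-count rule (mean normalized edge weight along a walk) are assumptions.

```python
def argew_style_augment(walks, edge_weight, max_copies=3):
    """Illustrative sketch of weight-driven corpus expansion: replicate
    each walk in proportion to the edge weights it traverses, so node
    pairs joined by heavy edges co-occur more often in the skip-gram
    training corpus. (Not the paper's exact algorithm.)"""
    w_max = max(edge_weight.values())
    corpus = []
    for walk in walks:
        corpus.append(list(walk))  # always keep the original walk
        ws = [edge_weight[(u, v)] for u, v in zip(walk, walk[1:])]
        score = (sum(ws) / len(ws)) / w_max if ws else 0.0  # in [0, 1]
        corpus.extend([list(walk)] * int(round(score * (max_copies - 1))))
    return corpus
```

The expanded corpus is then fed to the usual skip-gram training step of node2vec or any other walk-based method.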
Related papers
- Graphlets correct for the topological information missed by random walks [0.0]
We introduce orbit adjacencies that quantify the adjacencies of two nodes as co-occurring on a given pair of graphlet orbits.
We prove that random walks on up to k nodes capture only a subset of all the possible orbit adjacencies for up to k-node graphlets.
We find that orbit adjacencies, which include those unseen by random walks, outperform random walk-based adjacencies.
arXiv Detail & Related papers (2024-05-23T05:42:38Z)
- Re-visiting Skip-Gram Negative Sampling: Dimension Regularization for More Efficient Dissimilarity Preservation in Graph Embeddings [8.858596502294471]
We show that node-wise repulsion is, in aggregate, an approximate re-centering of the node embedding dimensions.
We propose an algorithm augmentation framework that speeds up any existing algorithm, supervised or unsupervised.
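The claimed equivalence suggests a very cheap substitute for per-pair repulsion: re-center the embedding dimensions. A minimal sketch of that operation (the framework's actual update rule is more involved):

```python
import numpy as np

def recenter(E):
    """Subtract each embedding dimension's mean across nodes: an
    aggregate stand-in for node-wise repulsion per the paper's claim."""
    return E - E.mean(axis=0, keepdims=True)
```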
arXiv Detail & Related papers (2024-04-30T19:43:01Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
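A minimal sketch of the idea, assuming a simple degree-bucketing scheme (the grouping rule and layer details here are illustrative):

```python
import numpy as np

def stratified_transform(X, degrees, boundaries, weights):
    """Apply a separate weight matrix to the nodes in each degree
    bucket, instead of one shared matrix for all nodes."""
    groups = np.digitize(degrees, boundaries)  # bucket index per node
    H = np.empty((X.shape[0], weights[0].shape[1]))
    for g, W in enumerate(weights):
        mask = groups == g
        H[mask] = X[mask] @ W  # per-group linear transform
    return H
```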
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
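For reference, a plain (non-kernelized) Gumbel-Softmax relaxation, the building block the operator is based on; NodeFormer's contribution is making this scale to all node pairs, which this sketch does not attempt:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, seed=0):
    """Differentiable relaxation of sampling one category: perturb the
    logits with Gumbel noise, then take a temperature-scaled softmax."""
    rng = np.random.default_rng(seed)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1)
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()
```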
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as is or simply make them undirected greatly affects the performance of the GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods [69.13371028670153]
We introduce feature-contribution ratio (FCR) to study the viability of using inductive GNNs to solve the Strict Cold Start (SCS) problem.
We experimentally show FCR disentangles the contributions of various components of graph datasets and demonstrate the superior performance of Cold Brew.
arXiv Detail & Related papers (2021-11-08T21:29:25Z)
- RaWaNet: Enriching Graph Neural Network Input via Random Walks on Graphs [0.0]
Graph neural networks (GNNs) have gained increasing popularity and have shown very promising results for data that are represented by graphs.
We propose a random walk data processing of the graphs based on three selected lengths: namely, (regular) walks of length 1 and 2, and a fractional walk of length $\gamma \in (0,1)$, in order to capture the different local and global dynamics on the graphs.
We test our method on various molecular datasets by passing the processed node features to the network in order to perform several classification and regression tasks.
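The three walk lengths can be sketched as matrix operations on the adjacency matrix A; how the paper defines the fractional power is not reproduced here, so the spectral route below (with absolute eigenvalues) is an assumption:

```python
import numpy as np

def walk_features(A, gamma=0.5):
    """Length-1 and length-2 walk counts via A and A @ A, plus a
    fractional 'walk of length gamma' as a matrix power A^gamma,
    computed spectrally (A assumed symmetric)."""
    evals, evecs = np.linalg.eigh(A)
    A_gamma = evecs @ np.diag(np.abs(evals) ** gamma) @ evecs.T
    return A, A @ A, A_gamma
```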
arXiv Detail & Related papers (2021-09-15T20:04:01Z)
- Position-based Hash Embeddings For Scaling Graph Neural Networks [8.87527266373087]
Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be used.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required.
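A generic hashing-based embedding layer looks like the following; the position-aware part of the paper is not reproduced, so this sketch only shows why memory drops from one row per node to a few small shared tables:

```python
import numpy as np

def hashed_embedding(node_id, tables):
    """A node's embedding is the sum of rows picked from small shared
    tables by independent (salted) hashes, so memory scales with the
    table size rather than with the number of nodes."""
    vec = np.zeros(tables[0].shape[1])
    for k, table in enumerate(tables):
        idx = hash((k, node_id)) % table.shape[0]  # k salts the hash
        vec += table[idx]
    return vec
```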
arXiv Detail & Related papers (2021-08-31T22:42:25Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
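One way to read the sort-then-convolve step (the scoring function and kernel shape are assumptions here):

```python
import numpy as np

def node2seq_step(h_target, H_neigh, conv_kernel):
    """Score neighbors against the target, sort them by score into a
    sequence, then slide a 1D kernel over that sequence so each
    position gets its own explicit aggregation weight."""
    scores = H_neigh @ h_target          # dot-product attention scores
    seq = H_neigh[np.argsort(-scores)]   # neighbors ordered by score
    k = conv_kernel.shape[0]
    return np.array([(seq[i:i + k] * conv_kernel).sum(axis=0)
                     for i in range(len(seq) - k + 1)])  # valid 1D conv
```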
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
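The paper's own metrics are not reproduced here, but the quantity they track can be illustrated with the simplest possible over-smoothing measure:

```python
import numpy as np

def oversmoothing_score(H):
    """Mean pairwise distance between node representations: it shrinks
    toward 0 as a deep GNN collapses all embeddings to one point.
    (Illustrative; not DGN's actual metric.)"""
    diffs = H[:, None, :] - H[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).mean()
```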
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
- Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
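The bilinear augmentation can be sketched as follows, with a simple average as the (assumed) normalization:

```python
import numpy as np

def bilinear_aggregate(H, neighbors):
    """Standard neighbor mean plus the mean of element-wise products
    over all neighbor pairs, so the layer sees neighbor interactions,
    not just individual neighbors."""
    linear = H[neighbors].mean(axis=0)
    pairs = [H[u] * H[v]                      # pairwise interaction term
             for i, u in enumerate(neighbors)
             for v in neighbors[i + 1:]]
    bilinear = np.mean(pairs, axis=0) if pairs else np.zeros_like(linear)
    return linear + bilinear
```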
This list is automatically generated from the titles and abstracts of the papers in this site.