Asymptotics of $\ell_2$ Regularized Network Embeddings
- URL: http://arxiv.org/abs/2201.01689v1
- Date: Wed, 5 Jan 2022 16:33:14 GMT
- Title: Asymptotics of $\ell_2$ Regularized Network Embeddings
- Authors: Andrew Davison
- Abstract summary: A common approach to solving tasks, such as node classification or link prediction, begins by learning a Euclidean embedding of the nodes of the network.
For unsupervised random walk methods such as DeepWalk and node2vec, adding an $\ell_2$ penalty on the embedding vectors to the loss leads to improved downstream task performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common approach to solving tasks, such as node classification or link
prediction, on a large network begins by learning a Euclidean embedding of the
nodes of the network, from which regular machine learning methods can be
applied. For unsupervised random walk methods such as DeepWalk and node2vec,
adding an $\ell_2$ penalty on the embedding vectors to the loss leads to
improved downstream task performance. In this paper we study the effects of
this regularization and prove that, under exchangeability assumptions on the
graph, it asymptotically leads to learning a nuclear-norm-type penalized
graphon. In particular, the exact form of the penalty depends on the choice of
subsampling method used within stochastic gradient descent to learn the
embeddings. We also illustrate empirically that concatenating node covariates
to $\ell_2$ regularized node2vec embeddings leads to comparable, if not
superior, performance to methods which incorporate node covariates and the
network structure in a non-linear manner.
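To make the setup concrete, here is a minimal NumPy sketch of a node2vec-style skip-gram loss with negative sampling plus an $\ell_2$ penalty on the embedding vectors. The sampling, the penalty weight `lam`, and all sizes are illustrative assumptions, not the authors' implementation; the comment on the nuclear norm records the standard variational identity behind the "nuclear-norm-type" claim.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def l2_regularized_loss(U, V, pos_pairs, neg_pairs, lam=0.1):
    """node2vec-style skip-gram loss with negative sampling plus an
    l2 penalty on every embedding vector (lam is an illustrative value)."""
    loss = 0.0
    for i, j in pos_pairs:                  # attract pairs co-occurring on walks
        loss -= np.log(sigmoid(U[i] @ V[j]))
    for i, j in neg_pairs:                  # repel negatively sampled pairs
        loss -= np.log(sigmoid(-(U[i] @ V[j])))
    # Since ||X||_* = min_{UV^T = X} (||U||_F^2 + ||V||_F^2) / 2, penalizing
    # the embedding norms acts like a nuclear-norm-type penalty on U V^T.
    loss += lam * (np.sum(U**2) + np.sum(V**2))
    return loss

rng = np.random.default_rng(0)
n, d = 100, 16
U, V = rng.normal(size=(n, d)), rng.normal(size=(n, d))
pos = [tuple(rng.integers(0, n, size=2)) for _ in range(50)]
neg = [tuple(rng.integers(0, n, size=2)) for _ in range(50)]
print(l2_regularized_loss(U, V, pos, neg))
```
The covariate experiment described in the abstract then reduces to concatenating features, e.g. `np.hstack([embeddings, covariates])`, before fitting any standard classifier.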
Related papers
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
arXiv Detail & Related papers (2023-10-14T07:02:54Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
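For readers unfamiliar with the operator named above, here is a minimal NumPy sketch of the standard (non-kernelized) Gumbel-Softmax relaxation, with an illustrative temperature `tau`; NodeFormer's kernelized variant builds on this idea.
```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable, approximately one-hot sample from the categorical
    distribution given by `logits` (standard Gumbel-Softmax relaxation)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    y = (logits + g) / tau                  # smaller tau -> closer to one-hot
    y = y - y.max()                         # numerical stability
    return np.exp(y) / np.exp(y).sum()

print(gumbel_softmax(np.array([1.0, 2.0, 0.5]), rng=np.random.default_rng(0)))
```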
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - NODDLE: Node2vec based deep learning model for link prediction [0.0]
We propose NODDLE (integration of NOde2vec anD Deep Learning mEthod), a deep learning model which incorporates the features extracted by node2vec and feeds them into a neural network with hidden layers.
Experimental results show that this method outperforms traditional methods on various social network datasets.
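A rough sketch of this kind of pipeline on synthetic data; the Hadamard edge features, layer sizes, and labels are illustrative assumptions, not NODDLE's actual architecture.
```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 200, 32
emb = rng.normal(size=(n, d))            # stand-in for node2vec embeddings

# One common edge representation: Hadamard product of endpoint embeddings
pairs = rng.integers(0, n, size=(500, 2))
X = emb[pairs[:, 0]] * emb[pairs[:, 1]]
y = rng.integers(0, 2, size=500)         # synthetic edge / non-edge labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X, y)
print(clf.score(X, y))                   # train accuracy on the toy data
```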
arXiv Detail & Related papers (2023-05-25T18:43:52Z) - A Systematic Evaluation of Node Embedding Robustness [77.29026280120277]
We assess the empirical robustness of node embedding models to random and adversarial poisoning attacks.
We compare edge addition, deletion and rewiring strategies computed using network properties as well as node labels.
We find that node classification suffers higher performance degradation than network reconstruction.
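As a hedged illustration of the simplest perturbation strategies compared in such studies (random edge deletion and addition), assuming networkx and arbitrary budgets:
```python
import random
import networkx as nx

def poison(G, n_del=5, n_add=5, seed=0):
    """Simplest random poisoning baseline: delete and add random edges."""
    rng = random.Random(seed)
    H = G.copy()
    H.remove_edges_from(rng.sample(list(H.edges()), n_del))
    nodes = list(H.nodes())
    while n_add > 0:
        u, v = rng.sample(nodes, 2)
        if not H.has_edge(u, v):         # only add genuinely new edges
            H.add_edge(u, v)
            n_add -= 1
    return H

G = nx.karate_club_graph()
H = poison(G)
print(G.number_of_edges(), H.number_of_edges())
```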
arXiv Detail & Related papers (2022-09-16T17:20:23Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
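As a generic illustration of what an $\ell_\infty$-norm box over-approximation means, here is textbook interval bound propagation through one affine-plus-ReLU layer; this is a standard sketch, not the paper's embedded-network construction.
```python
import numpy as np

def relu_layer_box(W, b, lo, hi):
    """Propagate the box {x : lo <= x <= hi} through ReLU(W x + b)
    via interval arithmetic, yielding a sound l_inf-box over-approximation."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = Wp @ lo + Wn @ hi + b       # lower bound of pre-activations
    out_hi = Wp @ hi + Wn @ lo + b       # upper bound of pre-activations
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x, eps = rng.normal(size=4), 0.1
lo, hi = relu_layer_box(W, b, x - eps, x + eps)
print(lo, hi)
```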
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised
Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent features by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - On the Power of Gradual Network Alignment Using Dual-Perception
Similarities [14.779474659172923]
Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes.
Our study is motivated by the fact that, since most existing NA methods attempt to discover all node pairs at once, they do not harness the information enriched through the interim discovery of node correspondences.
We propose Grad-Align, a new NA method that gradually discovers node pairs by making full use of node pairs exhibiting strong consistency.
arXiv Detail & Related papers (2022-01-26T14:01:32Z) - Asymptotics of Network Embeddings Learned via Subsampling [4.23373349945751]
We unify network representation methods that use a subsampling approach, such as node2vec, into a single framework.
This provides a theoretical foundation to understand what the embedding vectors represent and how well these methods perform on downstream tasks.
Notably, we observe that typically used loss functions may lead to shortcomings, such as a lack of Fisher consistency.
arXiv Detail & Related papers (2021-07-06T02:54:53Z) - Consistency of random-walk based network embedding algorithms [13.214230533788932]
We study the node2vec and DeepWalk algorithms through the perspective of matrix factorization.
Our results indicate a subtle interplay between the sparsity of the observed networks, the window sizes of the random walks, and the convergence rates of the node2vec/DeepWalk embeddings.
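To make the matrix-factorization perspective concrete: with window size T and b negative samples, DeepWalk is known to implicitly factorize a shifted PMI-type matrix (the NetMF view of Qiu et al., 2018). A toy NumPy sketch, with the log-clipping constant as an implementation convenience:
```python
import numpy as np

def deepwalk_matrix(A, T=10, b=1):
    """M = log( vol(G)/(b*T) * (sum_{r=1..T} P^r) D^{-1} ),  P = D^{-1} A.
    DeepWalk-style embeddings approximately factorize this matrix
    (the NetMF view of Qiu et al., 2018)."""
    deg = A.sum(axis=1)
    vol = deg.sum()
    P = A / deg[:, None]                             # random-walk transitions
    S = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1))
    M = (vol / (b * T)) * S / deg[None, :]           # right-multiply by D^{-1}
    return np.log(np.maximum(M, 1e-12))              # elementwise log, clipped

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
print(deepwalk_matrix(A, T=2))
```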
arXiv Detail & Related papers (2021-01-18T22:49:22Z) - Anisotropic Graph Convolutional Network for Semi-supervised Learning [7.843067454030999]
Graph convolutional networks learn effective node embeddings that have proven to be useful in achieving high-accuracy prediction results.
These networks suffer from over-smoothing and a shrinking effect on the graph, largely because they diffuse features across the edges of the graph using a linear Laplacian flow.
We propose an anisotropic graph convolutional network for semi-supervised node classification, introducing a nonlinear function that captures informative features from nodes while preventing over-smoothing.
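The linear Laplacian flow referred to above is easy to demonstrate: repeated multiplication by the symmetric-normalized adjacency collapses node features toward a degree-determined profile. A small NumPy example on an arbitrary four-node graph:
```python
import numpy as np

# Symmetric-normalized adjacency with self-loops: S = D^{-1/2}(A + I)D^{-1/2}
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))

X = np.array([[1.0], [0.0], [0.0], [5.0]])   # arbitrary initial node features
for k in (1, 5, 50):
    # Repeated linear diffusion: features collapse toward a degree-determined
    # profile, i.e. over-smoothing
    print(k, (np.linalg.matrix_power(S, k) @ X).ravel())
```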
arXiv Detail & Related papers (2020-10-20T13:56:03Z) - Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)