Gradient-Based Spectral Embeddings of Random Dot Product Graphs
- URL: http://arxiv.org/abs/2307.13818v2
- Date: Fri, 8 Dec 2023 18:50:36 GMT
- Title: Gradient-Based Spectral Embeddings of Random Dot Product Graphs
- Authors: Marcelo Fiori, Bernardo Marenco, Federico Larroca, Paola Bermolen,
Gonzalo Mateos
- Abstract summary: In this paper, we show how to better solve the embedding problem of the Random Dot Product Graph (RDPG).
We develop a novel feasible optimization method in the resulting manifold.
Our open-source algorithm implementations are scalable, and unlike the ASE they are robust to missing edge data and can track slowly-varying latent positions from streaming graphs.
- Score: 7.612218105739107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Random Dot Product Graph (RDPG) is a generative model for relational
data, where nodes are represented via latent vectors in low-dimensional
Euclidean space. RDPGs crucially postulate that edge formation probabilities
are given by the dot product of the corresponding latent positions.
Accordingly, the embedding task of estimating these vectors from an observed
graph is typically posed as a low-rank matrix factorization problem. The
workhorse Adjacency Spectral Embedding (ASE) enjoys solid statistical
properties, but it is formally solving a surrogate problem and can be
computationally intensive. In this paper, we bring to bear recent advances in
non-convex optimization and demonstrate their impact on RDPG inference. We
advocate first-order gradient descent methods to better solve the embedding
problem, and to organically accommodate broader network embedding applications
of practical relevance. Notably, we argue that RDPG embeddings of directed
graphs lose interpretability unless the factor matrices are constrained to
have orthogonal columns. We thus develop a novel feasible optimization method
in the resulting manifold. The effectiveness of the graph representation
learning framework is demonstrated on reproducible experiments with both
synthetic and real network data. Our open-source algorithm implementations are
scalable, and unlike the ASE they are robust to missing edge data and can track
slowly-varying latent positions from streaming graphs.
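To make the contrast concrete, here is a minimal NumPy sketch, not the authors' released implementation: ASE via a truncated eigendecomposition, next to a first-order gradient-descent alternative that minimizes the masked factorization loss ||M ∘ (A − XXᵀ)||²_F and therefore tolerates missing edge data. The function names, step size, and warm start are illustrative choices, and the closing polar retraction merely illustrates the orthogonal-columns constraint for directed graphs; the paper develops a dedicated feasible method on that manifold.

```python
import numpy as np

def ase(A, d):
    """Adjacency Spectral Embedding: d leading eigenpairs of A, scaled."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]      # leading eigenvalues by magnitude
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def rdpg_gd(A, d, M=None, eta=1e-3, iters=500):
    """Minimize ||M * (A - X @ X.T)||_F**2 over X by gradient descent.

    M is a symmetric 0/1 mask; zeros mark unobserved dyads, which is how
    missing edge data is accommodated (plain ASE offers no such mask).
    """
    M = np.ones_like(A) if M is None else M
    X = ase(M * A, d)                             # warm start from (masked) ASE
    for _ in range(iters):
        R = M * (X @ X.T - A)                     # masked residual (symmetric)
        X = X - 4.0 * eta * (R @ X)               # gradient of the squared loss
    return X

def polar_retraction(X):
    """Nearest matrix with orthonormal columns (illustrates the directed-case
    constraint only; the paper's feasible manifold method differs in detail)."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

# Usage: sample an RDPG with known positions and recover them (up to rotation).
rng = np.random.default_rng(0)
Z = rng.uniform(0.2, 0.8, size=(100, 2))          # ground-truth latent positions
P = np.clip(Z @ Z.T, 0.0, 1.0)                    # dot-product edge probabilities
A = np.triu((rng.uniform(size=P.shape) < P), 1).astype(float)
A = A + A.T                                       # symmetric, hollow adjacency
X_hat = rdpg_gd(A, d=2)
```

With a mask M that zeroes unobserved dyads, the same loop ignores missing entries instead of treating them as non-edges, which is the robustness the abstract highlights.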
Related papers
- Convergence Guarantees for the DeepWalk Embedding on Block Models [9.898607871253775]
We show how to use the DeepWalk algorithm on graphs obtained from the Stochastic Block Model (SBM).
Despite being simplistic, the SBM has proved to be a classic model for analyzing algorithms on large graphs.
arXiv Detail & Related papers (2024-10-26T18:35:11Z)
- Exploring Edge Probability Graph Models Beyond Edge Independency: Concepts, Analyses, and Algorithms [26.550266795403022]
Desirable random graph models (RGMs) should generate realistic structures such as high clustering (i.e., high subgraph densities).
A common class of RGMs (e.g., Erdős–Rényi and Kronecker) outputs edge probabilities.
With edge independency, RGMs theoretically cannot produce high subgraph densities and high output variability simultaneously.
arXiv Detail & Related papers (2024-05-26T23:48:30Z)
- Gradformer: Graph Transformer with Exponential Decay [69.50738015412189]
The self-attention mechanism in Graph Transformers (GTs) overlooks the graph's inductive biases, particularly biases related to structure.
This paper presents Gradformer, a method that integrates the GT with this intrinsic inductive bias.
Gradformer consistently outperforms Graph Neural Network and GT baseline models in various graph classification and regression tasks.
arXiv Detail & Related papers (2024-04-24T08:37:13Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing [7.984586585987328]
Implicit Graph Neural Networks (GNNs) have achieved significant success in addressing graph learning problems.
We introduce a geometric framework for designing implicit graph diffusion layers based on a parameterized graph Laplacian operator.
We show how implicit GNN layers can be viewed as the fixed-point equation of a Dirichlet energy minimization problem (a generic sketch of this view follows the list below).
arXiv Detail & Related papers (2023-08-07T05:22:33Z)
- OrthoReg: Improving Graph-regularized MLPs via Orthogonality Regularization [66.30021126251725]
Graph Neural Networks (GNNs) currently dominate the modeling of graph-structured data.
Graph-regularized MLPs (GR-MLPs) implicitly inject the graph structure information into model weights, yet their performance can hardly match that of GNNs in most tasks.
We show that GR-MLPs suffer from dimensional collapse, a phenomenon in which a few largest eigenvalues dominate the embedding space.
We propose OrthoReg, a novel GR-MLP model that mitigates the dimensional collapse issue (a generic regularizer in this spirit is sketched after the list below).
arXiv Detail & Related papers (2023-01-31T21:20:48Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take distributional (moment) uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
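The Implicit Graph Neural Diffusion entry above reads an implicit GNN layer as the fixed point of a Dirichlet energy minimization. Below is a minimal sketch of that view, assuming the simple energy (λ/2)·tr(ZᵀLZ) + (1/2)·||Z − X||²_F with the combinatorial Laplacian L; the paper instead parameterizes the Laplacian, and the names and step size here are illustrative.

```python
import numpy as np

def implicit_diffusion_layer(A, X, lam=1.0, tol=1e-8, max_iter=2000):
    """Fixed point of min_Z lam/2 * tr(Z.T @ L @ Z) + 1/2 * ||Z - X||_F**2.

    The optimum solves (I + lam * L) Z = X; a damped gradient iteration
    converges because the step size keeps the update map a contraction.
    """
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                          # combinatorial graph Laplacian
    eta = 1.0 / (1.0 + 2.0 * lam * deg.max())     # contraction-safe step size
    Z = X.copy()
    for _ in range(max_iter):
        Z_next = Z - eta * (lam * (L @ Z) + Z - X)    # gradient step on the energy
        if np.linalg.norm(Z_next - Z) < tol:
            break
        Z = Z_next
    return Z
```

The Laplacian term smooths features along edges while the fidelity term anchors them to the input X, which is why over-smoothing can be analyzed through this energy.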
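Similarly, the OrthoReg entry diagnoses dimensional collapse, where a few largest eigenvalues of the embedding covariance dominate. Here is a generic decorrelation-style penalty in that spirit, not the paper's exact loss; the mean-centering and Frobenius form are assumptions.

```python
import numpy as np

def orthogonality_penalty(Z):
    """Penalize off-diagonal entries of the feature covariance of Z (n x d).

    Driving the covariance toward a diagonal matrix spreads variance across
    embedding dimensions, counteracting dimensional collapse. Typically added
    to a task loss as: loss = task_loss + alpha * orthogonality_penalty(Z).
    """
    Zc = Z - Z.mean(axis=0, keepdims=True)        # center each feature dimension
    C = (Zc.T @ Zc) / Zc.shape[0]                 # d x d covariance matrix
    off_diag = C - np.diag(np.diag(C))
    return float(np.sum(off_diag ** 2))
```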