Graph Encoder Embedding
- URL: http://arxiv.org/abs/2109.13098v1
- Date: Mon, 27 Sep 2021 14:49:44 GMT
- Title: Graph Encoder Embedding
- Authors: Cencheng Shen, Qizhe Wang, Carey E. Priebe
- Abstract summary: We propose a lightning-fast graph embedding method called graph encoder embedding.
The proposed method has linear computational complexity and can process billions of edges within minutes on a standard PC.
The speedup is achieved without sacrificing embedding performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a lightning-fast graph embedding method called graph encoder embedding. The proposed method has linear computational complexity and can process billions of edges within minutes on a standard PC, a feat unattainable for any existing graph embedding method. The speedup is achieved without sacrificing embedding performance: the encoder embedding performs as well as, and can be viewed as a transformation of, the more costly spectral embedding. The encoder embedding is applicable to either the adjacency matrix or the graph Laplacian, and is theoretically sound: under the stochastic block model or the random dot product graph, the graph encoder embedding asymptotically converges to the block probabilities or latent positions, and is approximately normally distributed. We showcase three important applications: vertex classification, vertex clustering, and graph bootstrap. The embedding performance is evaluated on a comprehensive set of synthetic and real data. In every case, the graph encoder embedding exhibits unrivalled computational advantages while delivering excellent numerical performance.
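The abstract's description maps to a very small amount of code. Below is a minimal sketch (not the authors' reference implementation), assuming the column-normalized one-hot formulation: each vertex is embedded by its average adjacency to each class, which costs time linear in the number of edges when the adjacency matrix is sparse.

```python
import numpy as np

def graph_encoder_embed(A, labels, K):
    """Sketch of a graph encoder embedding: project the adjacency
    matrix onto a column-normalized one-hot class-indicator matrix.
    A: (n, n) adjacency matrix; labels: length-n class labels in 0..K-1.
    """
    n = A.shape[0]
    W = np.zeros((n, K))
    for k in range(K):
        idx = (labels == k)
        W[idx, k] = 1.0 / idx.sum()  # scale each class column by 1/n_k
    return A @ W                     # (n, K) embedding

# Toy two-block graph with known labels.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
labels = np.array([0, 0, 1, 1])
Z = graph_encoder_embed(A, labels, K=2)
print(Z.shape)  # (4, 2): one K-dimensional row per vertex
```

The same projection applies unchanged to a graph Laplacian in place of the adjacency matrix.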
Related papers
- Differentiable Proximal Graph Matching [40.41380102260085]
We introduce an algorithm for graph matching based on the proximal operator, referred to as differentiable proximal graph matching (DPGM).
The whole algorithm can be considered a differentiable map from the graph affinity matrix to the prediction of node correspondence.
Numerical experiments show that DPGM outperforms existing graph matching algorithms on diverse datasets.
arXiv Detail & Related papers (2024-05-26T08:17:13Z) - Encoder Embedding for General Graph and Node Classification [4.178980693837599]
We prove that the encoder embedding matrix satisfies the law of large numbers and the central limit theorem on a per-observation basis.
Under certain conditions, it achieves normality on a per-class basis, enabling optimal classification through discriminant analysis.
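To illustrate the classification step these normality results motivate: when per-class embeddings are approximately normal with shared covariance, discriminant analysis reduces (after whitening) to nearest-class-mean classification. The sketch below is an illustration of that idea using plain Euclidean distance, not the paper's procedure.

```python
import numpy as np

def nearest_mean_classify(Z_train, y_train, Z_test):
    """Classify embedding rows by the nearest class mean.
    A simplified stand-in for discriminant analysis, valid when the
    per-class distributions share an (identity-like) covariance."""
    classes = np.unique(y_train)
    means = np.stack([Z_train[y_train == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from every test row to every class mean.
    d = ((Z_test[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Two well-separated classes in a 2-d embedding space.
Z_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
pred = nearest_mean_classify(Z_train, y_train,
                             np.array([[0.05, 0.0], [1.0, 0.95]]))
```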
arXiv Detail & Related papers (2024-05-24T11:51:08Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
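For a concrete, deliberately simple instance of a graph kernel: the vertex-label histogram kernel represents each graph by its vector of label counts and takes an inner product. This is not the kernel family used in the paper, just the smallest member of the same pattern; richer kernels (Weisfeiler-Lehman, shortest-path, random-walk) use richer feature maps.

```python
import numpy as np

def vertex_histogram_kernel(labels_g1, labels_g2, num_labels):
    """Graph kernel example: inner product of vertex-label histograms.
    Each graph is summarized by how many vertices carry each label."""
    h1 = np.bincount(labels_g1, minlength=num_labels)
    h2 = np.bincount(labels_g2, minlength=num_labels)
    return float(h1 @ h2)

# Graph 1 has labels {0, 0, 1}; graph 2 has labels {0, 1, 1}.
k = vertex_histogram_kernel(np.array([0, 0, 1]), np.array([0, 1, 1]), 2)
print(k)  # 2*1 + 1*2 = 4.0
```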
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
Graphon is a nonparametric model that generates graphs with arbitrary sizes and can be induced from graphs easily.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
arXiv Detail & Related papers (2021-05-29T08:11:40Z) - Understanding Coarsening for Embedding Large-Scale Graphs [3.6739949215165164]
Proper analysis of graphs with Machine Learning (ML) algorithms has the potential to yield far-reaching insights into many areas of research and industry.
The irregular structure of graph data constitutes an obstacle for running ML tasks on graphs.
We analyze the impact of the coarsening quality on the embedding performance both in terms of speed and accuracy.
arXiv Detail & Related papers (2020-09-10T15:06:33Z) - Faster Graph Embeddings via Coarsening [25.37181684580123]
Graph embeddings are a ubiquitous tool for machine learning tasks, such as node classification and link prediction, on graph-structured data.
Computing the embeddings for large-scale graphs is prohibitively expensive, even if we are interested only in a small subset of relevant vertices.
We present an efficient graph coarsening approach, based on Schur complements, for computing the embedding of the relevant vertices.
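The Schur-complement coarsening step can be sketched in a few lines. This is a dense illustration of the standard linear-algebra identity, not the paper's efficient implementation: eliminating the irrelevant vertices from a graph Laplacian yields a smaller Laplacian on the kept vertices.

```python
import numpy as np

def schur_complement(L, keep):
    """Coarsen a graph Laplacian onto a subset of relevant vertices by
    eliminating the rest:  S = L_kk - L_ke @ inv(L_ee) @ L_ek.
    S is again a Laplacian on the kept vertices (dense solve here for
    clarity; practical coarsening uses sparse elimination)."""
    n = L.shape[0]
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(n), keep)
    L_kk = L[np.ix_(keep, keep)]
    L_ke = L[np.ix_(keep, elim)]
    L_ek = L[np.ix_(elim, keep)]
    L_ee = L[np.ix_(elim, elim)]
    return L_kk - L_ke @ np.linalg.solve(L_ee, L_ek)

# Path graph 0-1-2; eliminate the middle vertex.
L = np.array([[1.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
S = schur_complement(L, [0, 2])  # Laplacian of one edge with weight 0.5
```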
arXiv Detail & Related papers (2020-07-06T15:22:25Z) - Auto-decoding Graphs [91.3755431537592]
The generative model is an auto-decoder that learns to synthesize graphs from latent codes.
Graphs are synthesized using self-attention modules that are trained to identify likely connectivity patterns.
arXiv Detail & Related papers (2020-06-04T14:23:01Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.