Learning Graphon Autoencoders for Generative Graph Modeling
- URL: http://arxiv.org/abs/2105.14244v1
- Date: Sat, 29 May 2021 08:11:40 GMT
- Title: Learning Graphon Autoencoders for Generative Graph Modeling
- Authors: Hongteng Xu, Peilin Zhao, Junzhou Huang, Dixin Luo
- Abstract summary: A graphon is a nonparametric model that generates graphs of arbitrary size and can easily be induced from observed graphs.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
- Score: 91.32624399902755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A graphon is a nonparametric model that generates graphs with arbitrary sizes
and can be induced from graphs easily. Based on this model, we propose a novel
algorithmic framework called graphon autoencoder to build an
interpretable and scalable graph generative model. This framework treats
observed graphs as induced graphons in functional space and derives their
latent representations by an encoder that aggregates Chebyshev graphon filters.
A linear graphon factorization model works as a decoder, leveraging the latent
representations to reconstruct the induced graphons (and the corresponding
observed graphs). We develop an efficient learning algorithm to learn the
encoder and the decoder, minimizing the Wasserstein distance between the model
and data distributions. This algorithm takes the KL divergence of the graph
distributions conditioned on different graphons as the underlying distance and
leads to a reward-augmented maximum likelihood estimation. The graphon
autoencoder provides a new paradigm to represent and generate graphs, which has
good generalizability and transferability.
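The generative story behind a graphon is compact enough to state in code. Below is a minimal sketch of the standard W-random graph construction the abstract relies on: each node draws a latent coordinate uniformly from [0, 1], and each pair of nodes connects independently with probability given by the graphon evaluated at those coordinates. The step-function graphon `W_sbm` is an illustrative stand-in, not the paper's learned model.

```python
import numpy as np

def sample_graph(W, n, seed=None):
    """Sample an n-node W-random graph: u_i ~ Uniform(0, 1),
    edge (i, j) ~ Bernoulli(W(u_i, u_j)) independently for i < j."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                       # latent node coordinates
    P = W(u[:, None], u[None, :])                 # edge-probability matrix
    A = np.triu(rng.uniform(size=(n, n)) < P, 1)  # strict upper triangle, no self-loops
    return (A | A.T).astype(int)                  # symmetrize

# Illustrative two-block step-function graphon (a stochastic block model).
W_sbm = lambda x, y: np.where((x < 0.5) == (y < 0.5), 0.8, 0.1)

A_small = sample_graph(W_sbm, 50)    # graphs of arbitrary size
A_large = sample_graph(W_sbm, 500)   # are drawn from the same graphon
```

Because a single W generates graphs of any size, a model that reconstructs W, as the linear factorization decoder described above does, inherits arbitrary-size generation for free.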
Related papers
- Transformers as Graph-to-Graph Models [13.630495199720423]
We argue that Transformers are essentially graph-to-graph models, with sequences just being a special case.
Our Graph-to-Graph Transformer architecture makes this ability explicit, by inputting graph edges into the attention weight computations and predicting graph edges with attention-like functions.
arXiv Detail & Related papers (2023-10-27T07:21:37Z)
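As a rough illustration of feeding graph edges into attention and reading edges back out of attention-like scores, here is a single-head layer whose logits receive a learned bias per input edge label and double as edge predictions. This is a hedged sketch of the mechanism described above, not the authors' Graph-to-Graph Transformer architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAwareAttention(nn.Module):
    """Attention with edge labels added to the logits; the logits are also
    returned as edge scores. A sketch of the idea, not the paper's model."""
    def __init__(self, d, num_edge_types):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)
        self.edge_bias = nn.Embedding(num_edge_types, 1)  # scalar bias per edge label

    def forward(self, x, edge_types):
        # x: (n, d) node states; edge_types: (n, n) integer edge labels
        logits = self.q(x) @ self.k(x).T / x.size(-1) ** 0.5
        logits = logits + self.edge_bias(edge_types).squeeze(-1)  # edges enter attention
        out = F.softmax(logits, dim=-1) @ self.v(x)
        return out, logits  # edge predictions come from the attention-like scores
```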
- Implicit Graphon Neural Representation [15.175103000392488]
We propose to directly model graphons using neural networks, obtaining the Implicit Graphon Neural Representation (IGNR).
IGNR can represent graphons up to arbitrary resolution and enables natural and efficient generation of arbitrarily sized graphs.
We show that our model is suitable for graph representation learning and graph generation.
arXiv Detail & Related papers (2022-11-07T06:17:21Z)
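The implicit-representation idea is concrete: a graphon is a function on [0, 1]^2, so it can be parameterized by a coordinate network and queried on a grid of any resolution. A minimal sketch of that idea follows; the actual IGNR parameterization differs in its details.

```python
import torch
import torch.nn as nn

class ImplicitGraphon(nn.Module):
    """MLP mapping a coordinate pair (x, y) in [0,1]^2 to an edge probability,
    symmetrized so that W(x, y) = W(y, x). A sketch, not the exact IGNR."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        w_xy = self.net(torch.stack([x, y], dim=-1))
        w_yx = self.net(torch.stack([y, x], dim=-1))
        return 0.5 * (w_xy + w_yx).squeeze(-1)

# Query at any resolution: a 100x100 grid here, arbitrarily fine in general.
g = torch.linspace(0, 1, 100)
X, Y = torch.meshgrid(g, g, indexing="ij")
W = ImplicitGraphon()(X, Y)   # (100, 100) edge-probability matrix
```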
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture can plug in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
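To make "an inner product on graphs" concrete, here is a deliberately tiny kernel that compares two graphs through their degree histograms. The point of the architecture above is that much richer kernels (e.g., Weisfeiler-Lehman) can occupy the same slot; the degree histogram below is only an illustrative placeholder.

```python
import numpy as np

def degree_histogram_kernel(A1, A2, max_deg=10):
    """Toy graph kernel: the inner product of degree histograms."""
    def hist(A):
        deg = A.sum(axis=1).astype(int).clip(0, max_deg)
        return np.bincount(deg, minlength=max_deg + 1).astype(float)
    return float(hist(A1) @ hist(A2))

A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])        # 3-node path
A_tri = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)   # triangle
print(degree_histogram_kernel(A_path, A_tri))  # 3.0: shared degree-2 nodes
```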
- Learning Graphons via Structured Gromov-Wasserstein Barycenters [143.42601038462965]
We propose a novel and principled method to learn a nonparametric graph model called graphon.
The proposed approach overcomes drawbacks of prior state-of-the-art methods, and outperforms them on both synthetic and real-world data.
arXiv Detail & Related papers (2020-12-10T13:04:29Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present the Dirichlet Graph Variational Autoencoder (DGVAE), which uses graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cut, we propose a new GNN variant named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
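The latent space here is the probability simplex: each node's code is a vector of soft cluster memberships. A minimal sketch follows, assuming a fixed illustrative Dirichlet concentration and a simple inner-product edge decoder; DGVAE instead infers the memberships from the input graph with its Heatts encoder.

```python
import torch
from torch.distributions import Dirichlet

n_nodes, n_clusters = 8, 3
# Concentration < 1 pushes each node toward few clusters (illustrative prior).
alpha = torch.full((n_nodes, n_clusters), 0.5)
Z = Dirichlet(alpha).sample()   # (n_nodes, n_clusters); each row sums to 1
# Assumed toy decoder: nodes sharing clusters connect with higher probability.
edge_probs = Z @ Z.T            # entries lie in [0, 1]
```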
- Non-Parametric Graph Learning for Bayesian Graph Neural Networks [35.88239188555398]
We propose a novel non-parametric graph model for constructing the posterior distribution of graph adjacency matrices.
We demonstrate the advantages of this model in three different problem settings: node classification, link prediction and recommendation.
arXiv Detail & Related papers (2020-06-23T21:10:55Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve hierarchical representation learning of graph data with multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Auto-decoding Graphs [91.3755431537592]
The generative model is an auto-decoder that learns to synthesize graphs from latent codes.
Graphs are synthesized using self-attention modules that are trained to identify likely connectivity patterns.
arXiv Detail & Related papers (2020-06-04T14:23:01Z)
- Graph Deconvolutional Generation [3.5138314002170192]
We focus on the modern equivalent of the Erdos-Renyi random graph model: the graph variational autoencoder (GVAE).
GVAE has difficulty matching the training distribution and relies on an expensive graph matching procedure.
We improve this class of models by building a message passing neural network into GVAE's encoder and decoder.
arXiv Detail & Related papers (2020-02-14T04:37:14Z)
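The ingredient being added to GVAE is a message passing layer; one sum-aggregation step looks like the sketch below (an assumption about the general form of such a layer, not the paper's exact design).

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One message passing step: linear messages summed over neighbors,
    followed by a learned update of each node's state."""
    def __init__(self, d):
        super().__init__()
        self.msg = nn.Linear(d, d)
        self.upd = nn.Linear(2 * d, d)

    def forward(self, h, A):
        # h: (n, d) node states; A: (n, n) adjacency matrix (float)
        m = A @ self.msg(h)  # sum incoming messages from neighbors
        return torch.relu(self.upd(torch.cat([h, m], dim=-1)))
```

Stacking a few such layers inside the encoder (with an analogous construction in the decoder) is the kind of modification the summary above describes.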
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.