Implicit Graphon Neural Representation
- URL: http://arxiv.org/abs/2211.03329v3
- Date: Thu, 30 Mar 2023 18:05:53 GMT
- Title: Implicit Graphon Neural Representation
- Authors: Xinyue Xia, Gal Mishne, Yusu Wang
- Abstract summary: We propose to directly model graphons using neural networks, obtaining Implicit Graphon Neural Representation (IGNR).
Our IGNR has the benefit that it can represent graphons up to arbitrary resolutions, and enables natural and efficient generation of arbitrary sized graphs.
We show that our model is suitable for graph representation learning and graph generation.
- Score: 15.175103000392488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphons are general and powerful models for generating graphs of varying
size. In this paper, we propose to directly model graphons using neural
networks, obtaining Implicit Graphon Neural Representation (IGNR). Existing
work in modeling and reconstructing graphons often approximates a target
graphon by a fixed resolution piece-wise constant representation. Our IGNR has
the benefit that it can represent graphons up to arbitrary resolutions, and
enables natural and efficient generation of arbitrary sized graphs with desired
structure once the model is learned. Furthermore, we allow the input graph data
to be unaligned and have different sizes by leveraging the Gromov-Wasserstein
distance. We first demonstrate the effectiveness of our model by showing its
superior performance on a graphon learning task. We then propose an extension
of IGNR that can be incorporated into an auto-encoder framework, and
demonstrate its good performance under a more general setting of graphon
learning. We also show that our model is suitable for graph representation
learning and graph generation.
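For intuition, a minimal sketch of the implicit-representation idea (not the authors' released code) is given below: a small coordinate MLP maps a pair (x, y) in [0, 1]² to an edge probability W(x, y), and graphs of any size are sampled by drawing latent node positions uniformly. The architecture, symmetrization, and sampling details are illustrative assumptions; the paper's actual model and its Gromov-Wasserstein training loss are more elaborate.

```python
import torch
import torch.nn as nn

class IGNRSketch(nn.Module):
    """Toy implicit graphon representation: an MLP mapping a coordinate
    pair (x, y) in [0, 1]^2 to an edge probability W(x, y)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Average the two argument orders so that W(x, y) = W(y, x),
        # as a graphon must be symmetric.
        xy = torch.stack([x, y], dim=-1)
        yx = torch.stack([y, x], dim=-1)
        logits = 0.5 * (self.mlp(xy) + self.mlp(yx))
        return torch.sigmoid(logits).squeeze(-1)

def sample_graph(model: IGNRSketch, n: int) -> torch.Tensor:
    """Sample an n-node graph: draw latent positions u_i ~ U[0, 1],
    evaluate W(u_i, u_j), then draw edges as Bernoulli variables."""
    with torch.no_grad():
        u = torch.rand(n)
        x, y = torch.meshgrid(u, u, indexing="ij")
        probs = model(x.reshape(-1), y.reshape(-1)).reshape(n, n)
        upper = torch.bernoulli(torch.triu(probs, diagonal=1))
    return upper + upper.T  # symmetric adjacency, no self-loops

model = IGNRSketch()
print(sample_graph(model, 20).shape)  # works for any n: torch.Size([20, 20])
```

Because the network is queried pointwise rather than stored as a fixed matrix, the same learned model yields graphs at any resolution, which is the property the abstract emphasizes.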
Related papers
- Scalable Implicit Graphon Learning [25.015678499211404]
We propose Scalable Implicit Graphon Learning (SIGL), a method that combines implicit neural representations (INRs) and graph neural networks (GNNs) to estimate a graphon from observed graphs.
We evaluate SIGL on synthetic and real-world graphs, showing that it outperforms existing methods and scales effectively to larger graphs.
arXiv Detail & Related papers (2024-10-22T22:44:24Z)
- Graph data augmentation with Gromov-Wasserstein Barycenters [0.0]
A novel augmentation strategy for graphs that operates in a non-Euclidean space has been proposed.
A non-Euclidean distance, specifically the Gromov-Wasserstein distance, results in better approximations of the graphon (see the sketch below).
This framework also provides a means to validate different graphon estimation approaches.
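As a hedged illustration of the barycenter idea, the sketch below computes a Gromov-Wasserstein barycenter of two unaligned graphs of different sizes using the POT (Python Optimal Transport) library's generic gromov_barycenters routine; this is not the paper's code, and the input graphs and weights are made-up stand-ins. Varying the mixture weights would trace out augmented samples between the inputs.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

rng = np.random.default_rng(0)

def random_adjacency(n: int, p: float) -> np.ndarray:
    """Symmetric Erdos-Renyi adjacency matrix, a stand-in for real data."""
    a = np.triu(rng.random((n, n)) < p, 1).astype(float)
    return a + a.T

# Two unaligned graphs of different sizes. Gromov-Wasserstein compares
# their intra-graph structure matrices directly, so no node
# correspondence between the graphs is needed.
Cs = [random_adjacency(12, 0.3), random_adjacency(18, 0.3)]
ps = [ot.unif(C.shape[0]) for C in Cs]  # uniform node weights

# A 15-node structure "between" the inputs; perturbing `lambdas`
# yields augmented graphs along the barycentric path.
N = 15
C_bar = ot.gromov.gromov_barycenters(
    N, Cs, ps, ot.unif(N), lambdas=[0.5, 0.5], loss_fun="square_loss"
)
print(C_bar.shape)  # (15, 15)
```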
arXiv Detail & Related papers (2024-04-12T10:22:55Z)
- Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns [13.378277755978258]
We show that the structural divergence between pre-training and downstream graphs significantly limits transferability under the vanilla fine-tuning strategy.
We propose G-Tuning to preserve the generative patterns of downstream graphs.
G-Tuning demonstrates average improvements of 0.5% and 2.6% on in-domain and out-of-domain transfer learning experiments, respectively.
arXiv Detail & Related papers (2023-12-21T05:17:10Z)
- Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of [0, 1]² in the graphon space (see the sketch below).
We prove that these low-dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when dimensionality reduction ratios between layers are large.
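A minimal sketch of the induced piecewise-constant representation behind graphon pooling, assuming a regular partition of [0, 1]²: each pooled entry is the edge density between two node blocks. This is a simplification for illustration, not the paper's exact pooling operator.

```python
import numpy as np

def graphon_pool(adj: np.ndarray, k: int) -> np.ndarray:
    """Pool an n x n adjacency matrix down to k x k by averaging over a
    regular partition of [0, 1]^2: entry (i, j) is the edge density
    between node blocks i and j of the induced step-function graphon."""
    n = adj.shape[0]
    bounds = np.linspace(0, n, k + 1).astype(int)  # block boundaries
    pooled = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            pooled[i, j] = adj[bounds[i]:bounds[i + 1],
                               bounds[j]:bounds[j + 1]].mean()
    return pooled

rng = np.random.default_rng(0)
upper = np.triu(rng.random((100, 100)) < 0.3, 1).astype(float)
adj = upper + upper.T  # symmetric random graph
print(graphon_pool(adj, 10).shape)  # (10, 10): a coarser representation
```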
arXiv Detail & Related papers (2022-12-15T22:11:34Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCNs) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
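To make "an inner product on graphs" concrete, the toy sketch below uses a vertex (degree) histogram kernel evaluated against a bank of template graphs, producing structural features without computing an embedding of the input graph. The kernel choice and templates are illustrative assumptions; the proposed architecture accepts arbitrary graph kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n: int, p: float = 0.5) -> np.ndarray:
    """Symmetric Erdos-Renyi adjacency matrix (illustrative input)."""
    a = np.triu(rng.random((n, n)) < p, 1).astype(float)
    return a + a.T

def degree_histogram(adj: np.ndarray, max_degree: int = 10) -> np.ndarray:
    """Represent a graph by its (clipped) degree histogram."""
    degrees = adj.sum(axis=1).astype(int).clip(0, max_degree)
    return np.bincount(degrees, minlength=max_degree + 1).astype(float)

def vertex_histogram_kernel(adj_a: np.ndarray, adj_b: np.ndarray) -> float:
    """A simple graph kernel: the inner product of degree histograms.
    Any positive semi-definite graph kernel could be swapped in."""
    return float(degree_histogram(adj_a) @ degree_histogram(adj_b))

# A kernel "layer": compare the input graph against template graphs and
# feed the resulting structural features to a downstream classifier.
templates = [random_graph(8) for _ in range(4)]
graph = random_graph(12)
features = np.array([vertex_histogram_kernel(graph, t) for t in templates])
print(features.shape)  # (4,) -- no node embedding of the input required
```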
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
A graphon is a nonparametric model that generates graphs of arbitrary size and can be easily induced from observed graphs.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
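A minimal sketch of what a linear graphon factorization decoder might look like, assuming a fixed cosine basis (an illustrative choice, not the paper's parameterization): a latent code z weights rank-one terms φ_k(x)φ_k(y), and the reconstructed graphon can be evaluated on a grid of any resolution.

```python
import math
import torch

def cosine_basis(x: torch.Tensor, K: int) -> torch.Tensor:
    """Evaluate K fixed cosine basis functions phi_k at points x in [0, 1]."""
    k = torch.arange(K, dtype=x.dtype)
    return torch.cos(math.pi * k[None, :] * x[:, None])  # (len(x), K)

def decode_graphon(z: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """Linear graphon factorization decoder: reconstruct
    W(x, y) = sigmoid( sum_k z_k * phi_k(x) * phi_k(y) ) on a grid,
    squashed into [0, 1] so it is a valid edge-probability function."""
    phi = cosine_basis(grid, z.shape[0])               # (n, K)
    return torch.sigmoid(phi @ torch.diag(z) @ phi.T)  # (n, n), symmetric

z = torch.randn(8)               # latent code from an encoder (not shown)
grid = torch.linspace(0, 1, 50)  # evaluate at any chosen resolution
W = decode_graphon(z, grid)
print(W.shape, torch.allclose(W, W.T))  # torch.Size([50, 50]) True
```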
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cut, we propose a new GNN variant named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partition, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.