Learning Graphons via Structured Gromov-Wasserstein Barycenters
- URL: http://arxiv.org/abs/2012.05644v2
- Date: Thu, 17 Dec 2020 05:18:23 GMT
- Title: Learning Graphons via Structured Gromov-Wasserstein Barycenters
- Authors: Hongteng Xu, Dixin Luo, Lawrence Carin, Hongyuan Zha
- Abstract summary: We propose a novel and principled method to learn a nonparametric graph model called graphon.
The proposed approach overcomes drawbacks of prior state-of-the-art methods, and outperforms them on both synthetic and real-world data.
- Score: 143.42601038462965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel and principled method to learn a nonparametric graph model
called graphon, which is defined in an infinite-dimensional space and
represents arbitrary-size graphs. Based on the weak regularity lemma from the
theory of graphons, we leverage a step function to approximate a graphon. We
show that the cut distance of graphons can be relaxed to the Gromov-Wasserstein
distance of their step functions. Accordingly, given a set of graphs generated
by an underlying graphon, we learn the corresponding step function as the
Gromov-Wasserstein barycenter of the given graphs. Furthermore, we develop
several enhancements and extensions of the basic algorithm, e.g., the
smoothed Gromov-Wasserstein barycenter for guaranteeing the continuity of the
learned graphons and the mixed Gromov-Wasserstein barycenters for learning
multiple structured graphons. The proposed approach overcomes drawbacks of
prior state-of-the-art methods, and outperforms them on both synthetic and
real-world data. The code is available at
https://github.com/HongtengXu/SGWB-Graphon.
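As a concrete illustration of the basic recipe in the abstract, here is a minimal sketch assuming the POT (Python Optimal Transport) library and an illustrative resolution K; it is not the released SGWB code linked above, which implements the full method with its smoothed and mixed variants.

```python
# A minimal sketch, not the released SGWB code: learn a K x K step function
# approximating the underlying graphon as the Gromov-Wasserstein barycenter
# of the observed adjacency matrices. Assumes POT (pip install pot);
# K = 20 and the uniform weights are illustrative choices.
import numpy as np
import ot


def estimate_step_function(adjacencies, K=20, seed=0):
    """Entry (i, j) of the result estimates the graphon value on block (i, j);
    for binary inputs it can be read as an edge probability between blocks."""
    Cs = [np.asarray(A, dtype=float) for A in adjacencies]
    ps = [ot.unif(C.shape[0]) for C in Cs]   # uniform weight on each node
    p = ot.unif(K)                           # uniform weight on the K blocks
    lambdas = [1.0 / len(Cs)] * len(Cs)      # equal weight per observed graph
    return ot.gromov.gromov_barycenters(
        K, Cs, ps, p, lambdas, loss_fun='square_loss', random_state=seed)
```

Smoothing the returned matrix (e.g., with a small 2-D averaging filter) before sampling new graphs mirrors the continuity guarantee of the smoothed-barycenter variant.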
Related papers
- Graph data augmentation with Gromov-Wasserstein Barycenters [0.0]
A novel augmentation strategy has been proposed for graphs that operates in a non-Euclidean space.
A non-Euclidean distance, specifically the Gromov-Wasserstein distance, results in better approximations of the graphon.
This framework also provides a means to validate different graphon estimation approaches.
arXiv Detail & Related papers (2024-04-12T10:22:55Z)
- A Gromov-Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening [19.09507367362567]
This work studies graph coarsening from a different perspective, developing a theory for preserving graph distances.
The geometric approach is useful when working with a collection of graphs, such as in graph classification and regression.
Minimizing the difference in these graph distances can be done using the popular weighted kernel $K$-means method.
arXiv Detail & Related papers (2023-06-15T04:47:26Z)
- Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of $[0, 1]^2$ in the graphon space.
We prove that those low dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when dimensionality reduction ratios between layers are large.
arXiv Detail & Related papers (2022-12-15T22:11:34Z)
- G-Mixup: Graph Data Augmentation for Graph Classification [55.63157775049443]
Mixup has shown superiority in improving the generalization and robustness of neural networks by interpolating features and labels between two random samples.
We propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
Experiments show that $\mathcal{G}$-Mixup substantially improves the generalization and robustness of GNNs.
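A hedged sketch of that interpolation step, with hypothetical helpers rather than the paper's code: mix two equal-resolution step-function graphons and sample a new graph from the mixture.

```python
# A hedged sketch of the G-Mixup idea, not the paper's implementation:
# interpolate two step-function graphons W1, W2 of equal resolution and
# sample an n-node graph from the mixed generator.
import numpy as np


def g_mixup_sample(W1, W2, n, lam=0.5, seed=0):
    rng = np.random.default_rng(seed)
    W = lam * W1 + (1.0 - lam) * W2               # interpolate the generators
    blocks = rng.integers(0, W.shape[0], size=n)  # uniform latent positions
    P = W[np.ix_(blocks, blocks)]                 # pairwise edge probabilities
    A = np.triu(rng.random((n, n)) < P, 1)        # sample the upper triangle
    return (A | A.T).astype(int)                  # symmetric, no self-loops
```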
arXiv Detail & Related papers (2022-02-15T04:09:44Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervision is recently surging at a new frontier: graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
A graphon is a nonparametric model that generates graphs of arbitrary size and can be induced from graphs easily.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
- Online Graph Dictionary Learning [10.394615068526505]
We propose a new online Graph Dictionary Learning approach, which uses the Gromov-Wasserstein divergence for the data-fitting term.
In our work, graphs are encoded through their nodes' pairwise relations and modeled as convex combinations of graph atoms.
Our approach naturally extends to labeled graphs, and is completed by a novel upper bound that can be used as a fast approximation of Gromov-Wasserstein in the embedding space.
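POT ships an implementation of this method; a minimal unmixing sketch follows, assuming POT >= 0.8 (exact signatures can vary across releases).

```python
# A minimal sketch of the unmixing step, assuming POT >= 0.8, which includes
# an implementation of this paper; exact signatures may vary across releases.
# Cdict is a (D, nt, nt) stack of learned graph atoms.
import numpy as np
import ot


def embed_graph(C, Cdict):
    """Express pairwise-relation matrix C as a convex combination of the
    dictionary atoms under the Gromov-Wasserstein divergence."""
    w, C_embedded, T, loss = ot.gromov.gromov_wasserstein_linear_unmixing(
        np.asarray(C, dtype=float), Cdict, reg=0.0)
    return w                                 # convex weights over the atoms
```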
arXiv Detail & Related papers (2021-02-12T14:39:28Z)
- Partial Gromov-Wasserstein Learning for Partial Graph Matching [28.47269200800775]
A partial Gromov-Wasserstein learning framework is proposed for partially matching two graphs.
Our framework can improve the F1 score by at least $20\%$ and often much more.
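For intuition, a hedged sketch with POT's generic partial-GW solver (not this paper's exact framework): match a fraction m of the total mass between two graphs and read node correspondences off the transport plan.

```python
# A hedged sketch using POT's generic partial-GW solver, not this paper's
# exact framework: transport a fraction m of the mass between two graphs.
import numpy as np
import ot


def partial_match(A1, A2, m=0.8):
    p = ot.unif(A1.shape[0])
    q = ot.unif(A2.shape[0])
    T = ot.partial.partial_gromov_wasserstein(
        np.asarray(A1, dtype=float), np.asarray(A2, dtype=float), p, q, m=m)
    return np.argwhere(T > 1e-8)             # matched (i, j) node pairs
```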
arXiv Detail & Related papers (2020-12-02T14:56:22Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)