Semi-relaxed Gromov-Wasserstein divergence with applications on graphs
- URL: http://arxiv.org/abs/2110.02753v1
- Date: Wed, 6 Oct 2021 13:38:31 GMT
- Title: Semi-relaxed Gromov-Wasserstein divergence with applications on graphs
- Authors: Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty
- Abstract summary: We show that a semi-relaxed Gromov-Wasserstein divergence can lead to an efficient graph dictionary learning algorithm.
We empirically demonstrate its relevance for complex tasks on graphs such as partitioning, clustering and completion.
- Score: 10.394615068526505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Comparing structured objects such as graphs is a fundamental operation
involved in many learning tasks. To this end, the Gromov-Wasserstein (GW)
distance, based on Optimal Transport (OT), has proven to be successful in
handling the specific nature of the associated objects. More specifically,
through the nodes' connectivity relations, GW operates on graphs, seen as
probability measures over specific spaces. At the core of OT is the idea of
conservation of mass, which imposes a coupling between all the nodes from the
two considered graphs. We argue in this paper that this property can be
detrimental for tasks such as graph dictionary or partition learning, and we
relax it by proposing a new semi-relaxed Gromov-Wasserstein divergence. Aside
from immediate computational benefits, we discuss its properties, and show that
it can lead to an efficient graph dictionary learning algorithm. We empirically
demonstrate its relevance for complex tasks on graphs such as partitioning,
clustering and completion.
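The relaxation also has a direct computational payoff: with only the row marginal constrained, the linear subproblem of a conditional-gradient (Frank-Wolfe) solver decouples row by row, each row sending all of its mass to its currently cheapest column. The sketch below is a minimal NumPy illustration of this idea under simplifying assumptions (symmetric structure matrices, squared loss, a fixed diminishing step size instead of an exact line search); it is not the authors' implementation, and the function name `semi_relaxed_gw` is ours.

```python
import numpy as np

def semi_relaxed_gw(C1, C2, p, n_iter=200):
    """Conditional-gradient sketch of the semi-relaxed GW divergence.

    Minimizes  sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l]
    over T >= 0 with T @ 1 = p: the first marginal is fixed to p,
    the second one is left free.  Assumes C1 and C2 are symmetric
    (e.g. adjacency or shortest-path matrices).
    """
    n1, n2 = C1.shape[0], C2.shape[0]
    T = np.outer(p, np.full(n2, 1.0 / n2))   # feasible start: rows sum to p
    a = (C1 ** 2) @ p                        # constant term: row marginal is fixed
    for t in range(n_iter):
        q = T.sum(axis=0)                    # free column marginal
        # M is (half) the gradient of the quadratic objective at T.
        M = a[:, None] + ((C2 ** 2) @ q)[None, :] - 2.0 * (C1 @ T @ C2.T)
        # Linear minimization over {X >= 0, X @ 1 = p} decouples per row:
        # row i puts all of its mass p[i] on its cheapest column.
        X = np.zeros_like(T)
        X[np.arange(n1), M.argmin(axis=1)] = p
        T = T + (2.0 / (t + 2)) * (X - T)    # diminishing FW step (heuristic here)
    q = T.sum(axis=0)
    M = a[:, None] + ((C2 ** 2) @ q)[None, :] - 2.0 * (C1 @ T @ C2.T)
    return float((M * T).sum()), T

# Toy usage: compress a 4-node cycle onto a 2-node template; the solver can
# reach cost 0 by coupling the two alternating node groups to the two columns.
C1 = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
               [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
C2 = np.array([[0.0, 1.0], [1.0, 0.0]])
cost, T = semi_relaxed_gw(C1, C2, p=np.full(4, 0.25))
```

Because the problem is non-convex, such a scheme only finds a stationary point; for practical use, recent releases of the Python Optimal Transport (POT) toolbox ship dedicated semi-relaxed (F)GW solvers.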
Related papers
- MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Correspondence learning aims to find correct matches within an initial correspondence set that has an uneven distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z) - Exploiting Edge Features in Graphs with Fused Network Gromov-Wasserstein Distance [18.522233517515975]
We introduce an extension of the Gromov-Wasserstein distance for comparing graphs in which both nodes and edges carry features.
We empirically show the effectiveness of the novel distance in learning tasks where graphs occur in either input space or output space.
arXiv Detail & Related papers (2023-09-28T17:05:03Z) - You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Graph Pooling with Maximum-Weight $k$-Independent Sets [12.251091325930837]
We introduce a graph coarsening mechanism based on the graph-theoretic concept of maximum-weight $k$-independent sets.
We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs.
arXiv Detail & Related papers (2022-08-06T14:12:47Z) - Collaborative likelihood-ratio estimation over graphs [55.98760097296213]
We develop this idea in a concrete non-parametric method that we call Graph-based Relative Unconstrained Least-squares Importance Fitting (GRULSIF).
We derive convergence rates for our collaborative approach that highlight the roles played by the number of available observations per node, the size of the graph, and how accurately the graph structure encodes the similarity between tasks.
arXiv Detail & Related papers (2022-05-28T15:37:03Z) - Online Graph Dictionary Learning [10.394615068526505]
We propose a new online Graph Dictionary Learning approach, which uses the Gromov Wasserstein divergence for the data fitting term.
In our work, graphs are encoded through their nodes' pairwise relations and modeled as a convex combination of graph atoms (see the sketch after this list).
Our approach naturally extends to labeled graphs, and is completed by a novel upper bound that can be used as a fast approximation of Gromov Wasserstein in the embedding space.
arXiv Detail & Related papers (2021-02-12T14:39:28Z) - Partial Gromov-Wasserstein Learning for Partial Graph Matching [28.47269200800775]
A partial Gromov-Wasserstein learning framework is proposed for partially matching two graphs.
Our framework can improve the F1 score by at least 20% and often much more.
arXiv Detail & Related papers (2020-12-02T14:56:22Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - Wasserstein Embedding for Graph Learning [33.90471037116372]
Wasserstein Embedding for Graph Learning (WEGL) is a framework for embedding entire graphs in a vector space.
We leverage new insights on defining similarity between graphs as a function of the similarity between their node embedding distributions.
We evaluate our new graph embedding approach on various benchmark graph-property prediction tasks.
arXiv Detail & Related papers (2020-06-16T18:23:00Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
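The "convex combination of graph atoms" model from the Online Graph Dictionary Learning entry above is compact enough to state in code. The sketch below builds C(w) = sum_s w[s] * atoms[s] and fits the simplex weights w by projected gradient descent; the function names `unmix` and `simplex_projection` are ours, and for simplicity a squared Frobenius data-fitting loss stands in for the Gromov-Wasserstein term used in the paper, which means that, unlike with GW, the graphs here must share a node ordering and size.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection onto the probability simplex (standard sort-based rule)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def unmix(atoms, C, lr=0.01, n_iter=200):
    """Fit simplex weights w so that C(w) = sum_s w[s] * atoms[s] approximates C.

    Illustration only: the OGDL paper fits C(w) with a Gromov-Wasserstein
    loss, so atoms and C need not be aligned or equally sized; the squared
    Frobenius loss used here does require that.
    """
    A = np.stack(atoms)                        # (S, n, n) dictionary of graph atoms
    w = np.full(len(atoms), 1.0 / len(atoms))  # start from the uniform mixture
    for _ in range(n_iter):
        R = np.tensordot(w, A, axes=1) - C                       # residual C(w) - C
        grad = 2.0 * np.tensordot(A, R, axes=([1, 2], [1, 2]))   # d||R||_F^2 / dw
        w = simplex_projection(w - lr * grad)
    return w
```

The fitted weight vector w then serves as a low-dimensional embedding of the graph, on which downstream tasks such as clustering or completion can operate.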