Exploiting Edge Features in Graphs with Fused Network Gromov-Wasserstein
Distance
- URL: http://arxiv.org/abs/2309.16604v1
- Date: Thu, 28 Sep 2023 17:05:03 GMT
- Title: Exploiting Edge Features in Graphs with Fused Network Gromov-Wasserstein
Distance
- Authors: Junjie Yang, Matthieu Labeau, Florence d'Alché-Buc
- Abstract summary: We introduce an extension of Gromov-Wasserstein distance for comparing graphs whose both nodes and edges have features.
We empirically show the effectiveness of the novel distance in learning tasks where graphs occur in either input space or output space.
- Score: 18.522233517515975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pairwise comparison of graphs is key to many applications in machine
learning, ranging from clustering and kernel-based classification/regression to,
more recently, supervised graph prediction. Distances between graphs usually rely
on informative representations of these structured objects, such as bags of
substructures or other graph embeddings. A recently popular solution consists in
representing graphs as metric measure spaces, which makes it possible to leverage
Optimal Transport and its meaningful distances for comparing them: the
Gromov-Wasserstein distances. However, this family of distances overlooks edge
attributes, which are essential for many structured objects. In this work, we
introduce an extension of the Gromov-Wasserstein distance for comparing graphs in
which both nodes and edges have features. We propose novel algorithms for
distance and barycenter computation. We empirically show the effectiveness of the
novel distance in learning tasks where graphs occur in either the input space or
the output space, such as classification and graph prediction.
Related papers
- Efficient Graph Field Integrators Meet Point Clouds [59.27295475120132]
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
The first class, SeparatorFactorization(SF), leverages the bounded genus of point cloud mesh graphs, while the second class, RFDiffusion(RFD), uses popular epsilon-nearest-neighbor graph representations for point clouds.
arXiv Detail & Related papers (2023-02-02T08:33:36Z)
- Stars: Tera-Scale Graph Building for Clustering and Graph Learning [14.265732346610534]
Stars is a highly scalable method for building extremely sparse graphs via two-hop spanners.
We demonstrate that Stars builds a graph in nearly-linear time, where approximate nearest neighbors are contained within two-hop neighborhoods.
arXiv Detail & Related papers (2022-12-05T22:43:26Z)
- On a linear fused Gromov-Wasserstein distance for graph structured data [2.360534864805446]
We propose a novel distance between two graphs, named linearFGW, defined as the Euclidean distance between their embeddings.
The advantages of the proposed distance are twofold: 1) it can take into account both node features and graph structure when measuring the similarity between graphs in a kernel-based framework, and 2) it can be much faster for computing the kernel matrix than pairwise OT-based distances.
arXiv Detail & Related papers (2022-03-09T13:43:18Z)
- Semi-relaxed Gromov Wasserstein divergence with applications on graphs [10.394615068526505]
We show that a semi-relaxed Gromov-Wasserstein divergence can lead to an efficient graph dictionary learning algorithm.
We empirically demonstrate its relevance for complex tasks on graphs such as partitioning, clustering and completion.
arXiv Detail & Related papers (2021-10-06T13:38:31Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- Online Graph Dictionary Learning [10.394615068526505]
We propose a new online Graph Dictionary Learning approach, which uses the Gromov Wasserstein divergence for the data fitting term.
In our work, graphs are encoded through their nodes' pairwise relations and modeled as convex combinations of graph atoms.
Our approach naturally extends to labeled graphs, and is completed by a novel upper bound that can be used as a fast approximation of Gromov Wasserstein in the embedding space.
arXiv Detail & Related papers (2021-02-12T14:39:28Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, a link prediction problem in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Grale: Designing Networks for Graph Learning [68.23038997141381]
We present Grale, a scalable method we have developed to address the problem of graph design for graphs with billions of nodes.
Grale operates by fusing together different measures of (potentially weak) similarity to create a graph which exhibits high task-specific homophily between its nodes.
We have deployed Grale in more than 20 different industrial settings at Google, including datasets which have tens of billions of nodes, and hundreds of trillions of potential edges to score.
arXiv Detail & Related papers (2020-07-23T13:25:36Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Wasserstein Embedding for Graph Learning [33.90471037116372]
Wasserstein Embedding for Graph Learning (WEGL) is a framework for embedding entire graphs in a vector space.
We leverage new insights on defining similarity between graphs as a function of the similarity between their node embedding distributions.
We evaluate our new graph embedding approach on various benchmark graph-property prediction tasks.
arXiv Detail & Related papers (2020-06-16T18:23:00Z)
- Wasserstein-based Graph Alignment [56.84964475441094]
We cast a new formulation for the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.