Outlier-Robust Gromov-Wasserstein for Graph Data
- URL: http://arxiv.org/abs/2302.04610v2
- Date: Mon, 30 Oct 2023 08:58:51 GMT
- Title: Outlier-Robust Gromov-Wasserstein for Graph Data
- Authors: Lemin Kong, Jiajin Li, Jianheng Tang, Anthony Man-Cho So
- Abstract summary: We introduce a new and robust version of the Gromov-Wasserstein (GW) distance called RGW.
RGW features optimistically perturbed marginal constraints within a Kullback-Leibler divergence-based ambiguity set.
We demonstrate the effectiveness of RGW on real-world graph learning tasks, such as subgraph matching and partial shape correspondence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Gromov-Wasserstein (GW) distance is a powerful tool for comparing and
aligning probability distributions supported on different metric spaces.
Recently, GW has become the main modeling technique for aligning heterogeneous
data for a wide range of graph learning tasks. However, the GW distance is
known to be highly sensitive to outliers, which can result in large
inaccuracies if the outliers are given the same weight as other samples in the
objective function. To mitigate this issue, we introduce a new and robust
version of the GW distance called RGW. RGW features optimistically perturbed
marginal constraints within a Kullback-Leibler divergence-based ambiguity set.
To make the benefits of RGW more accessible in practice, we develop a
computationally efficient procedure with provable guarantees, based on the
Bregman proximal alternating linearized minimization (BPALM) algorithm.
Through extensive
experimentation, we validate our theoretical results and demonstrate the
effectiveness of RGW on real-world graph learning tasks, such as subgraph
matching and partial shape correspondence.
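To make the failure mode and the remedy concrete, the sketch below computes an entropic GW plan while softening the marginal constraints with KL penalties, in the spirit of unbalanced OT. It is a minimal numpy illustration of the general idea only, not the paper's RGW objective or its BPALM solver; all function names and parameter choices here are ours.

```python
import numpy as np

def kl_relaxed_sinkhorn(cost, p, q, eps, lam, n_iter=300):
    # Sinkhorn-style scaling updates with KL-penalized (soft) marginals,
    # as in unbalanced OT: large lam approaches the hard constraints,
    # small lam lets the plan shed mass on poorly matching points.
    K = np.exp(-(cost - cost.min()) / eps)   # shift for numerical stability
    u, v = np.ones_like(p), np.ones_like(q)
    rho = lam / (lam + eps)
    for _ in range(n_iter):
        u = (p / (K @ v)) ** rho
        v = (q / (K.T @ u)) ** rho
    return u[:, None] * K * v[None, :]

def soft_marginal_gw(C1, C2, p, q, eps=5.0, lam=5.0, n_outer=20):
    # Entropic GW (squared loss) with softened marginals. The tensorized
    # cost at plan T is sep - 2 * C1 @ T @ C2.T; the separable terms are
    # what make an outlier row expensive once mass is allowed to vary.
    sep = (C1**2 @ p)[:, None] + (C2**2 @ q)[None, :]
    T = np.outer(p, q)
    for _ in range(n_outer):
        cost = sep - 2.0 * C1 @ T @ C2.T
        T = kl_relaxed_sinkhorn(cost, p, q, eps, lam)
    return T

# Toy check: one far-away outlier in the first space keeps little mass.
rng = np.random.default_rng(0)
X = rng.normal(size=(21, 2)); X[-1] += 10.0      # last point is an outlier
Y = rng.normal(size=(20, 2))
C1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
C2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
p, q = np.full(21, 1 / 21), np.full(20, 1 / 20)
T = soft_marginal_gw(C1, C2, p, q)
print("outlier row mass:", T[-1].sum(), "vs uniform:", 1 / 21)
```

With hard marginals the outlier row would be forced to carry its full 1/21 of mass and distort the matching; the KL relaxation lets the plan keep far less mass there, which is the behavior the robust formulation is after.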
Related papers
- Linear Partial Gromov-Wasserstein Embedding (arXiv, 2024-10-22)
The Gromov-Wasserstein (GW) problem has attracted growing interest in the machine learning and data science communities.
We propose the linear partial Gromov-Wasserstein (LPGW) embedding, a linearized embedding technique for the partial GW (PGW) problem.
As with the linearization technique for the classical OT problem, we prove that LPGW defines a valid metric for metric measure spaces.
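For orientation, the classical-OT linearization that LPGW builds on works as follows: fix a reference distribution, transport it onto each target, and record the barycentric projection of the plan; weighted L2 distances between these embeddings then approximate OT distances. Below is a minimal numpy sketch of that classical LOT embedding with an entropic solver, not the LPGW construction itself; all names are ours.

```python
import numpy as np

def sinkhorn_plan(cost, p, q, eps=0.5, n_iter=500):
    # Standard balanced Sinkhorn iterations for an entropic OT plan.
    K = np.exp(-(cost - cost.min()) / eps)
    u, v = np.ones_like(p), np.ones_like(q)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def lot_embedding(X_ref, p_ref, Y, q):
    # Embed the target (Y, q) via the barycentric projection of the plan
    # from the fixed reference: each reference point maps to the weighted
    # mean of the target points it is coupled to.
    cost = ((X_ref[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    T = sinkhorn_plan(cost, p_ref, q)
    return (T @ Y) / T.sum(axis=1, keepdims=True)

# Toy usage: the weighted L2 distance between embeddings approximates W2.
rng = np.random.default_rng(1)
X_ref = rng.normal(size=(30, 2)); p_ref = np.full(30, 1 / 30)
A = rng.normal(size=(25, 2)) + 2.0; a = np.full(25, 1 / 25)
B = rng.normal(size=(25, 2)) - 2.0; b = np.full(25, 1 / 25)
E_A = lot_embedding(X_ref, p_ref, A, a)
E_B = lot_embedding(X_ref, p_ref, B, b)
print("linearized W2 estimate:", np.sqrt(((E_A - E_B) ** 2).sum(-1) @ p_ref))
```

The payoff of linearization is that N distributions need only N transport solves against the reference, after which all pairwise comparisons are cheap L2 computations.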
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding (arXiv, 2024-01-12)
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
- Learning to Reweight for Graph Neural Network (arXiv, 2023-12-19)
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
- Efficient Approximation of Gromov-Wasserstein Distance using Importance Sparsification (arXiv, 2022-05-26)
We propose a novel importance sparsification method, called Spar-GW, to approximate the Gromov-Wasserstein distance efficiently.
We demonstrate that the proposed Spar-GW method is applicable to the GW distance with arbitrary ground cost.
In addition, this method can be extended to approximate the variants of GW distance, including the entropic GW distance, the fused GW distance, and the unbalanced GW distance.
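To see why sparsification pays off: evaluating the GW objective for a fixed coupling is expensive naively, but the objective is an expectation over index pairs drawn from the coupling, so importance sampling yields an unbiased estimate from a modest number of samples. The sketch below illustrates this generic Monte-Carlo view, not the specific Spar-GW construction; all names are ours.

```python
import numpy as np

def gw_objective_exact(C1, C2, T):
    # sum_{i,j,k,l} (C1[i,j] - C2[k,l])**2 * T[i,k] * T[j,l], expanded so
    # it runs in O(n^2 m + n m^2) rather than O(n^2 m^2).
    p, q = T.sum(1), T.sum(0)
    return float((C1**2 @ p) @ p + (C2**2 @ q) @ q
                 - 2 * np.sum((C1 @ T @ C2.T) * T))

def gw_objective_sampled(C1, C2, T, n_samples=20000, seed=0):
    # The objective equals E[(C1[i,j] - C2[k,l])**2] with (i,k) and (j,l)
    # drawn independently from T, viewed as a distribution on index pairs.
    rng = np.random.default_rng(seed)
    flat = T.ravel() / T.sum()
    idx = rng.choice(flat.size, size=(2, n_samples), p=flat)
    i, k = np.unravel_index(idx[0], T.shape)
    j, l = np.unravel_index(idx[1], T.shape)
    return float(np.mean((C1[i, j] - C2[k, l]) ** 2))

rng = np.random.default_rng(2)
X, Y = rng.normal(size=(40, 2)), rng.normal(size=(35, 2))
C1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
C2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
T = np.full((40, 35), 1 / (40 * 35))          # independent coupling
print(gw_objective_exact(C1, C2, T), gw_objective_sampled(C1, C2, T))
```

The same sampling idea applies to the gradient used inside a GW solver, which is where a method like Spar-GW gets its speedups.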
- Orthogonal Gromov-Wasserstein Discrepancy with Efficient Lower Bound (arXiv, 2022-05-12)
The Gromov-Wasserstein (GW) distance formulates node matching between structured data as an optimal transportation problem.
We take inspiration from the connection with the assignment problem and propose the orthogonal Gromov-Wasserstein (OGW) discrepancy as a surrogate of GW.
- Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters (arXiv, 2022-02-08)
We formulate graph prediction as regression with the Fused Gromov-Wasserstein (FGW) loss.
We propose a predictive model relying on an FGW barycenter whose weights depend on inputs, as sketched in simplified form below.
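The prediction mechanism can be sketched in a deliberately degenerate setting: if the candidate templates are pre-aligned on a shared node set (identity couplings), the squared-loss FGW barycenter reduces to a weighted average of template structure matrices and node features, and the model only has to map an input to those weights. The snippet below shows this simplified case with an assumed linear-plus-softmax weight map; it is not the paper's model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_graph(x, W, Cs, Fs):
    # Input-dependent barycenter weights (W is an assumed linear scorer).
    w = softmax(W @ x)
    # With identity couplings, the squared-loss FGW barycenter is just the
    # weighted mean of template structures Cs and node features Fs.
    C = np.tensordot(w, Cs, axes=1)
    F = np.tensordot(w, Fs, axes=1)
    return C, F

rng = np.random.default_rng(6)
Cs = rng.random(size=(3, 10, 10)); Cs = (Cs + Cs.transpose(0, 2, 1)) / 2
Fs = rng.normal(size=(3, 10, 4))
W = rng.normal(size=(3, 5))                    # maps 5-dim inputs to 3 weights
C_hat, F_hat = predict_graph(rng.normal(size=5), W, Cs, Fs)
print(C_hat.shape, F_hat.shape)                # (10, 10) (10, 4)
```

In the general case the templates are not aligned, and computing the barycenter requires solving FGW coupling problems; the input-to-weights idea stays the same.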
- Scaling Structured Inference with Randomization (arXiv, 2021-12-07)
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
- Distributionally Robust Semi-Supervised Learning Over Graphs (arXiv, 2021-10-20)
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
- Why Approximate Matrix Square Root Outperforms Accurate SVD in Global Covariance Pooling? (arXiv, 2021-05-06)
We propose a new global covariance pooling (GCP) meta-layer that uses SVD in the forward pass and Padé approximants in backward propagation to compute the gradients.
The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performance on both large-scale and fine-grained datasets.
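For reference, the forward computation in question: a symmetric positive semidefinite covariance matrix has square root U diag(sqrt(s)) U^T from its SVD, and backpropagating through the factorization is numerically unstable when singular values cluster, which is what motivates a Padé-based backward pass. A minimal numpy sketch of the forward pass only (our names, not the paper's meta-layer):

```python
import numpy as np

def spd_sqrt_svd(A):
    # Matrix square root of a symmetric positive (semi)definite matrix
    # via SVD: A = U diag(s) U^T  =>  A^{1/2} = U diag(sqrt(s)) U^T.
    U, s, _ = np.linalg.svd(A)
    return (U * np.sqrt(s)) @ U.T

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 8))
cov = X.T @ X / 50                       # global covariance pooling of features
root = spd_sqrt_svd(cov)
print(np.allclose(root @ root, cov))     # True up to floating-point error
```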
- Quantized Gromov-Wasserstein (arXiv, 2021-04-05)
Quantized Gromov-Wasserstein (qGW) is a metric that treats parts as fundamental objects and fits into a hierarchy of theoretical upper bounds for the GW problem.
We develop an algorithm for approximating optimal GW matchings which yields algorithmic speedups and reductions in memory complexity.
We not only outperform the state of the art but also apply GW matching at scales an order of magnitude larger than in the existing literature.
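The quantization step can be illustrated compactly: partition each space into parts and summarize it by a quotient metric-measure space of part masses and mass-weighted average inter-part distances, which a GW solver can then match at the part level before refining within matched parts. The sketch below shows only that quotient construction, as a generic illustration rather than the qGW algorithm; the partitioning scheme and all names are ours.

```python
import numpy as np

def voronoi_partition(X, n_parts, seed=0):
    # Crude partition: Voronoi cells of randomly chosen anchor points
    # (a stand-in for whatever partitioning scheme one prefers).
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(len(X), size=n_parts, replace=False)]
    return np.linalg.norm(X[:, None] - anchors[None, :], axis=-1).argmin(axis=1)

def quantize_mm_space(X, p, labels, n_parts):
    # Quotient metric-measure space: part masses and mass-weighted
    # average distances between parts.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    masses = np.array([p[labels == a].sum() for a in range(n_parts)])
    Cq = np.zeros((n_parts, n_parts))
    for a in range(n_parts):
        for b in range(n_parts):
            wa = p[labels == a] / masses[a]
            wb = p[labels == b] / masses[b]
            Cq[a, b] = wa @ D[np.ix_(labels == a, labels == b)] @ wb
    return Cq, masses

# The small (Cq, masses) summaries of two spaces can be fed to any GW
# solver for a part-level matching, refined afterwards within matched parts.
rng = np.random.default_rng(4)
X = rng.normal(size=(60, 2)); p = np.full(60, 1 / 60)
labels = voronoi_partition(X, n_parts=4)
Cq, masses = quantize_mm_space(X, p, labels, n_parts=4)
print(Cq.shape, masses.sum())            # (4, 4) and total mass 1
```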
- Block-Approximated Exponential Random Graphs (arXiv, 2020-02-14)
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
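Dyadic independence makes sampling straightforward: once the model reduces to per-edge probabilities, a graph is drawn by an independent Bernoulli flip per node pair. The sketch below uses an assumed low-rank logistic parameterization of the probabilities (not the paper's block approximation) to keep things cheap for large n:

```python
import numpy as np

def sample_edge_independent(P, seed=0):
    # Draw an undirected simple graph with independent edges:
    # A[i, j] ~ Bernoulli(P[i, j]) for i < j, then symmetrize.
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random(P.shape) < P, k=1)
    return (upper | upper.T).astype(np.int8)

# Assumed rank-k parameterization: with factors U (n x k), the matrix
# P = sigmoid(U @ U.T + bias) can be formed row by row for large n, which
# is the kind of structure that keeps sampling scalable.
rng = np.random.default_rng(5)
n, k = 200, 3
U = rng.normal(scale=0.8, size=(n, k))
P = 1.0 / (1.0 + np.exp(-(U @ U.T) + 3.0))   # bias keeps the graph sparse
A = sample_edge_independent(P)
print("nodes:", n, "edges:", int(A.sum() // 2))
```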
This list is automatically generated from the titles and abstracts of the papers on this site.