A Regularized Wasserstein Framework for Graph Kernels
- URL: http://arxiv.org/abs/2110.02554v2
- Date: Fri, 8 Oct 2021 08:25:47 GMT
- Title: A Regularized Wasserstein Framework for Graph Kernels
- Authors: Asiri Wijesinghe, Qing Wang, and Stephen Gould
- Abstract summary: We propose a learning framework for graph kernels grounded on regularizing optimal transport.
This framework provides a novel optimal transport distance metric, namely Regularized Wasserstein (RW) discrepancy.
We have empirically validated our method using 12 datasets against 16 state-of-the-art baselines.
- Score: 32.558913310384476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a learning framework for graph kernels, which is theoretically
grounded on regularizing optimal transport. This framework provides a novel
optimal transport distance metric, namely Regularized Wasserstein (RW)
discrepancy, which can preserve both features and structure of graphs via
Wasserstein distances on features and their local variations, local barycenters
and global connectivity. Two strongly convex regularization terms are
introduced to improve the learning ability. One is to relax an optimal
alignment between graphs to be a cluster-to-cluster mapping between their
locally connected vertices, thereby preserving the local clustering structure
of graphs. The other is to take into account node degree distributions in order
to better preserve the global structure of graphs. We also design an efficient
algorithm to enable a fast approximation for solving the optimization problem.
Theoretically, our framework is robust and can guarantee the convergence and
numerical stability in optimization. We have empirically validated our method
using 12 datasets against 16 state-of-the-art baselines. The experimental
results show that our method consistently outperforms all state-of-the-art
methods on all benchmark databases for both graphs with discrete attributes and
graphs with continuous attributes.
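The regularized optimal transport at the heart of this framework can be illustrated with a generic entropy-regularized Wasserstein discrepancy computed by Sinkhorn iterations. This is a simplified sketch of the general technique, not the paper's RW discrepancy (which adds cluster-to-cluster and degree-distribution regularizers); all function and variable names here are our own illustration.

```python
import numpy as np

def sinkhorn_discrepancy(X, Y, a, b, eps=1.0, n_iters=1000):
    """Entropy-regularized Wasserstein discrepancy between two weighted
    feature sets X (n x d) and Y (m x d) with marginal weights a, b.
    Plain Sinkhorn scaling; eps controls the strength of regularization."""
    # Pairwise squared-Euclidean ground cost between feature vectors.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones(len(a))
    for _ in range(n_iters):          # alternating marginal-matching updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # (approximate) transport plan
    return float((P * C).sum()), P
```

In a graph setting, X and Y would hold node features and a, b the node weights, e.g. uniform, or degree-based in the spirit of the abstract's degree-distribution regularizer.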
Related papers
- Robust Graph Matching Using An Unbalanced Hierarchical Optimal Transport Framework [30.05543844763625]
We propose a novel and robust graph matching method based on an unbalanced hierarchical optimal transport framework.
We make the first attempt to exploit cross-modal alignment in graph matching.
Experiments on various graph matching tasks demonstrate the superiority and robustness of our method compared to state-of-the-art approaches.
arXiv Detail & Related papers (2023-10-18T16:16:53Z)
- Optimality of Message-Passing Architectures for Sparse Graphs [13.96547777184641]

We study the node classification problem on feature-decorated graphs in the sparse setting, i.e., when the expected degree of a node is $O(1)$ in the number of nodes.
We introduce a notion of Bayes optimality for node classification tasks, called local Bayes optimality.
We show that the optimal message-passing architecture interpolates between a standard MLP in the regime of low graph signal and a typical convolution in the regime of high graph signal.
arXiv Detail & Related papers (2023-05-17T17:31:20Z)
- Robust Attributed Graph Alignment via Joint Structure Learning and Optimal Transport [26.58964162799207]
We propose SLOTAlign, an unsupervised graph alignment framework that jointly performs Structure Learning and Optimal Transport Alignment.
We incorporate multi-view structure learning to enhance graph representation power and reduce the effect of structure and feature inconsistency inherited across graphs.
The proposed SLOTAlign shows superior performance and strongest robustness over seven unsupervised graph alignment methods and five specialized KG alignment methods.
arXiv Detail & Related papers (2023-01-30T08:41:36Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-efficient, obtains stable results, and scales well with the data size.
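The Schatten p-norm mentioned above reduces, for a single matrix, to the l_p norm of its singular values; the snippet below is our own illustrative implementation, not the paper's solver.

```python
import numpy as np

def schatten_p_norm(M, p=1.0):
    """Schatten p-norm of a matrix: the l_p norm of its singular values.
    p=1 gives the nuclear norm, p=2 recovers the Frobenius norm."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))
```

Small p (toward 0) pushes singular values to be sparse, which is why Schatten p-norm minimization is a common low-rank surrogate.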
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Learning non-Gaussian graphical models via Hessian scores and triangular transport [6.308539010172309]
We propose an algorithm for learning the Markov structure of continuous and non-Gaussian distributions.
Our algorithm SING estimates the density using a deterministic coupling, induced by a triangular transport map, and iteratively exploits sparse structure in the map to reveal sparsity in the graph.
arXiv Detail & Related papers (2021-01-08T16:42:42Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
- Wasserstein-based Graph Alignment [56.84964475441094]
We cast a new formulation for the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
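A dyadic-independence (edge-independent) distribution, the target of the approximation above, is trivial to sample from once per-edge probabilities are given. The sketch below is a generic illustration under that assumption, not the paper's fitting procedure.

```python
import numpy as np

def sample_edge_independent(P, seed=None):
    """Sample an undirected simple graph in which edge (i, j) appears
    independently with probability P[i, j] (P symmetric, zero diagonal)."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    U = rng.random((n, n))
    A = np.triu(U < P, k=1)          # decide each dyad once (upper triangle)
    return (A | A.T).astype(int)     # symmetrize into an adjacency matrix
```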
This list is automatically generated from the titles and abstracts of the papers in this site.