Fast and Provably Convergent Algorithms for Gromov-Wasserstein in Graph
Learning
- URL: http://arxiv.org/abs/2205.08115v1
- Date: Tue, 17 May 2022 06:26:54 GMT
- Title: Fast and Provably Convergent Algorithms for Gromov-Wasserstein in Graph
Learning
- Authors: Jiajin Li, Jianheng Tang, Lemin Kong, Huikang Liu, Jia Li, Anthony
Man-Cho So, Jose Blanchet
- Abstract summary: Two proposed algorithms, called Bregman Alternating Projected Gradient (BAPG) and hybrid Bregman Proximal Gradient (hBPG), are proven to be (linearly) convergent.
We provide comprehensive experiments to validate the effectiveness of our methods on a host of tasks, including graph alignment, graph partition, and shape matching.
- Score: 37.89640056739607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the design and analysis of a class of efficient
algorithms for computing the Gromov-Wasserstein (GW) distance tailored to
large-scale graph learning tasks. Armed with the Luo-Tseng error bound
condition~\cite{luo1992error}, two proposed algorithms, called Bregman
Alternating Projected Gradient (BAPG) and hybrid Bregman Proximal Gradient
(hBPG), are proven to be (linearly) convergent. Based on task-specific
properties, our analysis further provides novel theoretical insights to guide
the selection of the best-fit method. As a result, we are able to provide comprehensive
experiments to validate the effectiveness of our methods on a host of tasks,
including graph alignment, graph partition, and shape matching. In terms of
both wall-clock time and modeling performance, the proposed methods achieve
state-of-the-art results.
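The alternating Bregman-projection idea behind methods of this family can be illustrated with a minimal sketch: take a mirror (KL) gradient step on the quadratic GW objective, then alternately project the coupling onto the row-marginal and column-marginal constraints. This is an illustration under simplifying assumptions (squared loss, fixed step size `rho`, constant terms of the gradient dropped), not the authors' exact BAPG.

```python
import numpy as np

def gw_bapg_sketch(C1, C2, p, q, rho=1.0, n_iter=200):
    """Illustrative Bregman alternating projected gradient loop for the
    Gromov-Wasserstein problem (a simplified sketch, not the paper's BAPG).
    C1, C2: intra-graph distance matrices; p, q: node distributions."""
    T = np.outer(p, q)                         # feasible initial coupling
    for _ in range(n_iter):
        grad = -4.0 * C1 @ T @ C2              # gradient of the quadratic GW term
        K = T * np.exp(-grad / rho)            # mirror (KL) gradient step
        K = K * (p / K.sum(axis=1))[:, None]   # Bregman projection onto rows = p
        K = K * (q / K.sum(axis=0))[None, :]   # Bregman projection onto cols = q
        T = K
    return T
```

Because the column projection is applied last, the returned coupling satisfies the column marginal exactly and the row marginal approximately; at a fixed point both hold.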
Related papers
- Fast Semi-supervised Learning on Large Graphs: An Improved Green-function Method [93.04936336359502]
In graph-based semi-supervised learning, the Green-function method is a classical method that works by computing the Green's function in the graph space.
We make a detailed analysis on it and propose a novel method from the perspective of optimization.
Unlike the original method, our improved method can also apply two accelerating techniques: Gaussian Elimination and Anchored Graphs.
arXiv Detail & Related papers (2024-11-04T04:27:18Z) - Convergence Guarantees for the DeepWalk Embedding on Block Models [9.898607871253775]
We show how to use the DeepWalk algorithm on graphs obtained from the Stochastic Block Model (SBM).
Despite being simplistic, the SBM has proved to be a classic model for analyzing algorithms on large graphs.
arXiv Detail & Related papers (2024-10-26T18:35:11Z) - Learning Heuristics for the Maximum Clique Enumeration Problem Using Low
Dimensional Representations [0.0]
We use a learning framework to prune the input graph, reducing its size for the maximum clique enumeration problem.
We study the role of using different vertex representations on the performance of this runtime method.
We observe that using local graph features in the classification process produces more accurate results when combined with a feature elimination process.
arXiv Detail & Related papers (2022-10-30T22:04:32Z) - CBAG: An Efficient Genetic Algorithm for the Graph Burning Problem [0.0]
We propose an efficient genetic algorithm called Centrality BAsed Genetic-algorithm (CBAG) for solving the graph burning problem.
Considering the unique characteristics of the graph burning problem, we introduce a novel chromosome representation and evaluation method.
Based on the results, the proposed algorithm achieves better performance than the previous state-of-the-art centrality-based heuristics.
arXiv Detail & Related papers (2022-08-01T17:34:07Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Effective and efficient structure learning with pruning and model
averaging strategies [9.023722579074734]
This paper describes an approximate BN structure learning algorithm that combines two novel strategies with hill-climbing search.
The algorithm starts by pruning the search-space graphs, where the pruning strategy can be viewed as an aggressive version of existing pruning strategies.
It then performs model averaging in the hill-climbing search process and moves to the neighbouring graph that maximises the objective function.
arXiv Detail & Related papers (2021-12-01T10:35:34Z) - Graph Matching via Optimal Transport [11.93151370164898]
Solving the graph matching problem is increasingly important due to its applications in operations research, computer vision, neuroscience, and more.
Current state-of-the-art algorithms are inefficient in matching very large graphs, though they produce good accuracy.
We present GOAT, a modification to the state-of-the-art graph matching approximation algorithm "FAQ" (Vogelstein, 2015), replacing its linear sum assignment step with the "Lightspeed Optimal Transport" method of Cuturi (2013).
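The substitution described above can be sketched as follows: the exact linear sum assignment step is replaced by a few entropically regularized Sinkhorn iterations, which return an approximately doubly stochastic matrix rather than a hard permutation. This is a minimal sketch of the idea in Cuturi (2013); the function name and parameters are illustrative, not GOAT's actual code.

```python
import numpy as np

def sinkhorn_assignment(cost, reg=0.1, n_iter=100):
    """Entropic relaxation of the linear sum assignment step, in the spirit
    of Cuturi's (2013) Sinkhorn method. Illustrative, not GOAT's code."""
    n = cost.shape[0]
    K = np.exp(-cost / reg)          # Gibbs kernel of the cost matrix
    u = np.ones(n)
    for _ in range(n_iter):
        v = np.ones(n) / (K.T @ u)   # scale columns toward sum 1
        u = np.ones(n) / (K @ v)     # scale rows toward sum 1
    return u[:, None] * K * v[None, :]   # doubly stochastic relaxation
```

For small `reg`, the result concentrates near the exact permutation, and taking each row's argmax recovers a matching; larger `reg` trades accuracy for faster, more stable iterations.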
arXiv Detail & Related papers (2021-11-09T19:18:18Z) - Combinatorial Learning of Graph Edit Distance via Dynamic Embedding [108.49014907941891]
This paper presents a hybrid approach that combines the interpretability of traditional search-based techniques for producing the edit path with learned dynamic embeddings.
Inspired by dynamic programming, node-level embedding is designated in a dynamic reuse fashion and suboptimal branches are encouraged to be pruned.
Experimental results on different graph datasets show that our approach can remarkably ease the search process of A* without sacrificing much accuracy.
arXiv Detail & Related papers (2020-11-30T17:41:02Z) - Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose an edge-time algorithm that obtains a nearly-optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.