Dynamical softassign and adaptive parameter tuning for graph matching
- URL: http://arxiv.org/abs/2208.08233v3
- Date: Sun, 24 Mar 2024 12:11:18 GMT
- Title: Dynamical softassign and adaptive parameter tuning for graph matching
- Authors: Binrui Shen, Qiang Niu, Shengxin Zhu
- Abstract summary: We study a unified framework for graph matching problems called the constrained gradient method.
Our proposed adaptive step size parameter guarantees the underlying algorithms' convergence.
We propose a novel graph matching algorithm: the softassign constrained gradient method.
- Score: 0.7456521449098222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies a unified framework for graph matching problems called the constrained gradient method. Popular algorithms within this framework include graduated assignment (GA), the integer projected fixed-point method (IPFP), and the doubly stochastic projected fixed-point method (DSPFP). These algorithms differ in their step size parameters and constrained operators. First, our proposed adaptive step size parameter guarantees the underlying algorithms' convergence and enhances their efficiency and accuracy; a preliminary analysis suggests that the optimal step size parameter is 1 with high probability in fully connected graph matching. Second, we propose a dynamic strategy for softassign, a popular constrained operator, to address its sensitivity to the number of nodes and its risk of overflow. Combining the adaptive step size parameter with the dynamical softassign, we propose a novel graph matching algorithm: the softassign constrained gradient method. Various experiments demonstrate that it is significantly faster than other state-of-the-art algorithms based on the constrained gradient method, with improved accuracy.
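To make the framework concrete, the following Python sketch implements one plausible instance of the softassign constrained gradient iteration. It is an illustration under stated assumptions, not the authors' implementation: the two graphs are equal-size with symmetric adjacency matrices `A` and `B`, softassign uses a fixed inflation parameter `beta` (the dynamical strategy in the paper adjusts this), and the adaptive step size is realized as an exact line search on the quadratic objective, which returns a step of 1 whenever the objective is convex along the search direction, consistent with the observation above that the optimal step is often 1.

```python
import numpy as np

def sinkhorn(M, n_iters=50):
    """Alternate row/column normalization (Sinkhorn-Knopp) to push a
    positive matrix toward the doubly stochastic set."""
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

def softassign(X, beta):
    """Softassign: entrywise exponential 'inflation' then Sinkhorn
    normalization; larger beta pushes the result toward a permutation.
    Subtracting the max guards exp against overflow."""
    return sinkhorn(np.exp(beta * (X - X.max())))

def softassign_constrained_gradient(A, B, beta=30.0, max_iter=100, tol=1e-6):
    """Maximize the matching score 0.5 * tr(X^T A X B) over doubly
    stochastic X: take the gradient A @ X @ B, map it to a doubly
    stochastic direction with softassign, then move along X + alpha*(Y - X)
    with alpha chosen by exact line search on the quadratic objective."""
    n = A.shape[0]
    X = np.full((n, n), 1.0 / n)              # barycenter start
    for _ in range(max_iter):
        G = A @ X @ B                          # gradient of the score
        Y = softassign(G, beta)                # constrained direction
        D = Y - X
        b = np.sum(G * D)                      # linear term tr(G^T D) >= 0
        c = np.trace(D.T @ A @ D @ B)          # curvature along D
        alpha = 1.0 if c >= 0 else min(1.0, -b / c)
        X_next = X + alpha * D
        if np.linalg.norm(X_next - X) < tol:
            return X_next
        X = X_next
    return X
```

A permutation can then be recovered from the returned doubly stochastic matrix by a discretization step such as the Hungarian algorithm (`scipy.optimize.linear_sum_assignment` on `-X`).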
Related papers
- Efficient Graph Similarity Computation with Alignment Regularization [7.143879014059894]
Graph similarity computation (GSC) is a learning-based prediction task using Graph Neural Networks (GNNs)
We show that high-quality learning can be attained with a simple yet powerful regularization technique, which we call the Alignment Regularization (AReg)
In the inference stage, the graph-level representations learned by the GNN encoder are used directly to compute the similarity score, without invoking AReg again, which speeds up inference.
arXiv Detail & Related papers (2024-06-21T07:37:28Z)
- Differentiable Proximal Graph Matching [40.41380102260085]
We introduce an algorithm for graph matching based on the proximal operator, referred to as differentiable proximal graph matching (DPGM)
The whole algorithm can be considered as a differentiable map from the graph affinity matrix to the prediction of node correspondence.
Numerical experiments show that DPGM outperforms existing graph matching algorithms on diverse datasets.
arXiv Detail & Related papers (2024-05-26T08:17:13Z)
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- On the convergence of adaptive first order methods: proximal gradient and alternating minimization algorithms [4.307128674848627]
AdaPG$^{q,r}$ is a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds.
Different choices of the parameters $q$ and $r$ are discussed and the efficacy of the resulting methods is demonstrated through numerical simulations.
arXiv Detail & Related papers (2023-11-30T10:29:43Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- ELRA: Exponential learning rate adaption gradient descent optimization method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyper-free) gradient-based adaption method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly.
arXiv Detail & Related papers (2023-09-12T14:36:13Z)
- CBAG: An Efficient Genetic Algorithm for the Graph Burning Problem [0.0]
We propose an efficient genetic algorithm called Centrality BAsed Genetic-algorithm (CBAG) for solving the graph burning problem.
Considering the unique characteristics of the graph burning problem, we introduce a novel chromosome representation and evaluation method.
Based on the results, it can be seen that the proposed algorithm achieves better performance in comparison to the previous state-of-the-art heuristics.
arXiv Detail & Related papers (2022-08-01T17:34:07Z)
- Bolstering Stochastic Gradient Descent with Model Building [0.0]
The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates.
We propose an alternative approach to line search by using a new algorithm based on forward step model building.
We show that the proposed algorithm achieves faster convergence and better generalization in well-known test problems.
arXiv Detail & Related papers (2021-11-13T06:54:36Z)
- Graph Matching via Optimal Transport [11.93151370164898]
Solving the graph matching problem is increasingly important due to its applications in operations research, computer vision, neuroscience, and more.
Current state-of-the-art algorithms are inefficient in matching very large graphs, though they produce good accuracy.
We present GOAT, a modification to the state-of-the-art graph matching approximation algorithm "FAQ" (Vogelstein, 2015), replacing its linear sum assignment step with the "Lightspeed Optimal Transport" method of Cuturi (2013).
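The swap GOAT describes is straightforward to illustrate: replace the exact linear sum assignment step with entropy-regularized optimal transport solved by Sinkhorn scaling. The sketch below is illustrative only, with hypothetical parameter choices (`reg`, the iteration count, uniform marginals), and is not GOAT's actual code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn_transport(C, reg=0.1, n_iters=200):
    """Cuturi-style entropic optimal transport: Sinkhorn scaling of the
    Gibbs kernel exp(-C/reg) to match uniform marginals. n*T is a
    doubly stochastic relaxation of the assignment for cost C."""
    n = C.shape[0]
    K = np.exp(-C / reg)
    r = c = np.ones(n) / n
    u = np.ones(n) / n
    for _ in range(n_iters):
        v = c / (K.T @ u)
        u = r / (K @ v)
    return u[:, None] * K * v[None, :]

C = np.random.rand(50, 50)                # a toy cost matrix
rows, cols = linear_sum_assignment(C)     # exact LAP step (as in FAQ)
P = np.zeros_like(C)
P[rows, cols] = 1.0                       # permutation matrix
T = sinkhorn_transport(C)                 # GOAT-style entropic substitute
```

The entropic step costs a handful of matrix-vector products per iteration rather than an exact O(n^3) assignment solve, which is the source of the speedup on very large graphs.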
arXiv Detail & Related papers (2021-11-09T19:18:18Z)
- FGOT: Graph Distances based on Filters and Optimal Transport [62.779521543654134]
Graph comparison deals with identifying similarities and dissimilarities between graphs.
A major obstacle is the unknown alignment of graphs, as well as the lack of accurate and inexpensive comparison metrics.
In this work we introduce the filter graph distance approximation.
arXiv Detail & Related papers (2021-09-09T17:43:07Z)
- Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm [20.582965700659788]
We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant discrete processes by their continuous counterparts.
We prove that, quite surprisingly, GREEDY can have better performance guarantees than RANKING, another celebrated algorithm for online matching.
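GREEDY here is the textbook online matching rule, simple enough to state in a few lines; this sketch is the standard definition for illustration, not code from the paper (whose contribution is the non-asymptotic analysis on sparse random graphs):

```python
import random

def greedy_online_matching(arrivals, seed=0):
    """GREEDY for online bipartite matching: each arriving vertex is
    matched to a uniformly random currently-unmatched neighbor, if any.
    `arrivals` is an ordered list of (online_vertex, offline_neighbors)."""
    rng = random.Random(seed)
    used = set()        # offline vertices already matched
    matching = {}
    for v, neighbors in arrivals:
        free = [u for u in neighbors if u not in used]
        if free:
            u = rng.choice(free)
            used.add(u)
            matching[v] = u
    return matching

# toy instance: three online vertices arriving in order
print(greedy_online_matching([(1, ["a", "b"]), (2, ["a"]), (3, ["b", "c"])]))
```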
arXiv Detail & Related papers (2021-07-02T12:18:19Z)
- Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate in Gradient Descent [20.47598828422897]
We propose Meta-Regularization, a novel approach for the adaptive choice of the learning rate in first-order gradient descent methods.
Our approach modifies the objective function by adding a regularization term on the learning rate, and casts the parameter and learning-rate updates as a joint process.
arXiv Detail & Related papers (2021-04-12T13:13:34Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods [7.486132958737807]
We present an adaptive variance reduced method with an implicit approach for adaptivity.
We provide convergence guarantees for finite-sum minimization problems and show that faster convergence than SARAH can be achieved if the local geometry permits.
This algorithm implicitly computes step-size and efficiently estimates local Lipschitz smoothness of functions.
arXiv Detail & Related papers (2021-02-19T01:17:15Z)
- A Unified Analysis of First-Order Methods for Smooth Games via Integral Quadratic Constraints [10.578409461429626]
In this work, we adapt the integral quadratic constraints theory to first-order methods for smooth and strongly monotone games.
We provide, for the first time, a global convergence rate for the negative momentum method (NM) with complexity $\mathcal{O}(\kappa^{1.5})$, which matches its known lower bound.
We show that it is impossible for an algorithm with one step of memory to achieve acceleration if it only queries the gradient once per batch.
arXiv Detail & Related papers (2020-09-23T20:02:00Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
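For reference, the convergence measure in this rate is the norm of the gradient of the Moreau envelope, a standard stationarity surrogate for weakly convex problems; in common notation (mine, not quoted from the paper):

```latex
% Moreau envelope of f with parameter \lambda > 0, and its gradient
\varphi_\lambda(x) = \min_{y} \Big( f(y) + \tfrac{1}{2\lambda}\,\lVert y - x \rVert^2 \Big),
\qquad
\nabla \varphi_\lambda(x) = \tfrac{1}{\lambda}\,\big( x - \operatorname{prox}_{\lambda f}(x) \big).
```

A small $\lVert \nabla \varphi_\lambda(x_t) \rVert$ certifies that $x_t$ is close to a near-stationary point of $f$, which is why this norm is the natural measure when $f$ is weakly convex rather than smooth.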
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Wasserstein-based Graph Alignment [56.84964475441094]
We cast a new formulation for the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]
Various types of parameter restart schemes have been proposed for accelerated algorithms to facilitate their practical convergence.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for solving nonconvex, nonsmooth problems.
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL (Polyak-Łojasiewicz) objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
- Variance Reduction with Sparse Gradients [82.41780420431205]
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: The random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
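The random-top-k operator lends itself to a short sketch. The reading below (keep the top entries by magnitude, then add a random sample of the remainder) is an assumption for illustration; the paper's exact definition and the split between `k1` and `k2` may differ:

```python
import numpy as np

def top_k(g, k):
    """Keep the k largest-magnitude entries of g, zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def random_top_k(g, k1, k2, rng):
    """Hypothetical random-top-k: top-k1 entries kept deterministically,
    plus k2 entries sampled uniformly from the remaining coordinates."""
    out = top_k(g, k1)
    rest = np.setdiff1d(np.arange(g.size), np.flatnonzero(out))
    idx = rng.choice(rest, size=k2, replace=False)
    out[idx] = g[idx]
    return out

rng = np.random.default_rng(0)
g = rng.standard_normal(1_000)                     # a dense stochastic gradient
sparse_g = random_top_k(g, k1=50, k2=50, rng=rng)  # 90% of entries dropped
```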
arXiv Detail & Related papers (2020-01-27T08:23:58Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.